Dataset fields:
- venue: stringclasses (2 values)
- paper_content: stringlengths (7.54k to 83.7k)
- prompt: stringlengths (161 to 2.5k)
- format: stringclasses (5 values)
- review: stringlengths (293 to 9.84k)
ICLR
Title: Human Perception-based Evaluation Criterion for Ultra-high Resolution Cell Membrane Segmentation

Abstract
Computer vision technology is widely used in biological and medical data analysis and understanding. However, there are still two major bottlenecks in the field of cell membrane segmentation, which seriously hinder further research: lack of sufficient high-quality data and lack of suitable evaluation criteria. In order to solve these two problems, this paper first introduces an Ultra-high Resolution Image Segmentation dataset for the Cell membrane, called U-RISC, the largest annotated Electron Microscopy (EM) dataset for the cell membrane, with multiple iterative annotations and uncompressed high-resolution raw data. During the analysis of U-RISC, we found that the currently popular segmentation evaluation criteria are inconsistent with human perception. This interesting phenomenon was confirmed by a subjective experiment involving twenty people. Furthermore, to resolve this inconsistency, we propose a new evaluation criterion called Perceptual Hausdorff Distance (PHD) to measure the quality of cell membrane segmentation results. A detailed performance comparison and discussion of classic segmentation methods, along with two iterative manual annotation results, under existing evaluation criteria and PHD is given.

1 INTRODUCTION

Electron Microscopy (EM) is a powerful tool for exploring ultra-fine structures in biological tissues and has been widely used in medical and biological research (ERLANDSON (2009); Curry et al. (2006); Harris et al. (2006)). In recent years, EM techniques have pioneered an emerging field called "Connectomics" (Lichtman et al. (2014)), which aims to scan and reconstruct whole-brain circuitry at the nanoscale. "Connectomics" has played a key role in several ambitious projects, including the BRAIN Initiative (Insel et al. (2013)) and MICrONS (Gleeson & Sawyer (2018)) in the U.S., Brain/MINDS in Japan (Dando (2020)), and the China Brain Project (Poo et al. (2016)). Because EM scans brain slices at the nanoscale, it produces massive images with ultra-high resolution and inevitably leads to an explosion of data. However, compared to the advances in EM, techniques for data analysis fall far behind. In particular, how to automatically extract information from massive raw data to reconstruct the circuitry map has increasingly become the bottleneck of EM applications. One critical step in automatic EM data analysis is membrane segmentation. With the introduction of deep learning techniques, significant improvements have been achieved on several publicly available EM datasets, ISBI 2012 and SNEMI3D (ISBI 2012 (2012); ISBI 2013 (2013); Arganda-Carreras et al. (2015b); Lee et al. (2017)). One of the earliest works (Ciresan et al. (2012)) used a succession of max-pooling convolutional networks as a pixel classifier, which estimated the probability that a pixel belongs to a membrane. Ronneberger et al. (2015) presented the U-Net structure with contracting paths, which captures multi-contextual information. Fully convolutional networks (FCNs), proposed by Long et al. (2015), led to a breakthrough in semantic segmentation. Follow-up works based on the U-Net and FCN structures (Xie & Tu (2015); Drozdzal et al. (2016); Hu et al. (2018); Zhou et al. (2018); Chaurasia & Culurciello (2017); Yu et al. (2017); Chen et al. (2019b)) have also achieved outstanding results with near-human performance.
Despite the progress that has been made in cell membrane segmentation for EM data thanks to deep learning, one risk for these popular and classic methods is that they might be "saturated" on the current datasets, as their performance appears to be "exceedingly accurate" (Lee et al. (2017)). How do these classic deep learning based segmentation methods work on new EM datasets with higher resolution and perhaps more challenges? Moreover, how robust are these methods when compared with human performance on such EM images? To expand the research on membrane segmentation to more comprehensive EM data, we first established a dataset, "U-RISC", containing images at their original resolution (10000 × 10000 pixels, Fig. 1). To ensure the quality of annotation, it cost us over 10,000 labor hours to label and double-check the data. To the best of our knowledge, U-RISC is the largest uncompressed annotated EM dataset to date. Next, we tested several classic deep learning based segmentation methods on U-RISC and compared the results to human performance. We found that the performance of these methods was much lower than that of even the first round of annotation. To understand why human perception is better than the popular segmentation methods, we examined their membrane segmentation results in detail. How to measure the similarity between two image segmentation results has been widely discussed (Yeghiazaryan & Voiculescu (2018); Niessen et al. (2000); Veltkamp & Hagedoorn (2000); Lee et al. (2017)). Yeghiazaryan & Voiculescu (2018) discussed the family of boundary overlap metrics for the evaluation of medical image segmentation. Veltkamp & Hagedoorn (2000) formulated and summarized similarity measures in a more general setting. Some challenges, such as ISBI2012 (Arganda-Carreras et al. (2015a)), also considered multiple metrics, such as Rand scores on both original and thinned images. However, we found a certain inconsistency between the currently most popular evaluation criteria for segmentation (e.g., F1 score, IoU) and human perception: while some figures were rated significantly lower in F1 score or IoU, they were "perceived" as better by humans (Fig. 4). Such inconsistency motivated us to propose a human-perception based criterion, Perceptual Hausdorff Distance (PHD), to evaluate segmentation quality. Further, we set up a subjective experiment to collect human perception of membrane segmentation, and we found that the PHD criterion is more consistent with human choices than traditional evaluation criteria. Finally, we found that the current popular and classical segmentation methods need to be revisited under the PHD criterion. Overall, our contribution in this work lies mainly in two parts: (1) we established the largest EM dataset at original image resolution for training and testing; (2) we proposed a human-perception based evaluation criterion, PHD, and verified its advantages through subjective experiments. The dataset we contribute and the PHD criterion we propose may help researchers gain insight into the difference between human perception and conventional evaluation criteria, thus motivating the design of segmentation methods that catch up with human performance on original-resolution EM images.

2 U-RISC: ULTRA-HIGH RESOLUTION IMAGE SEGMENTATION DATASET FOR CELL MEMBRANE

Supervised learning methods rely heavily on high-quality datasets.
To alleviate the lack of training data for cell membrane segmentation, we propose an Ultra-high Resolution Image Segmentation dataset for Cell membrane, called U-RISC. The dataset was annotated on RC1, a large-scale retinal serial-section transmission electron microscopy (ssTEM) dataset, publicly available upon request and described in the work of Anderson et al. (2011). The original RC1 dataset is a 0.25 mm diameter volume of 370 TEM slices, spanning the inner nuclear, inner plexiform, and ganglion cell layers, acquired at 2.18 nm/pixel in both axes with 70 nm slice thickness along the z-axis. From this 370-section volume, we clipped out 120 images of 10000×10000 pixels from randomly chosen sections. Then, we manually annotated the cell membranes in an iterative annotation-correction procedure. Since the human labeling process is valuable for uncovering how humans learn, we preserved the intermediate results of the relabeling process for public release. The U-RISC dataset will be released on https://Anonymous.com upon acceptance.

2.1 COMPARISON WITH OTHER DATASETS

ISBI 2012 (Cardona et al. (2010)) published a set of 30 images for training, captured from the ventral nerve cord of a Drosophila first-instar larva at a resolution of 4×4×50 nm/pixel through ssTEM (Arganda-Carreras et al. (2015b); ISBI 2012 (2012)). Each image contains 512×512 pixels, spanning a physical area of approximately 2×2 µm. In the SNEMI3D challenge (Kasthuri et al. (2015); ISBI 2013 (2013)), the training data is a 3D stack of 100 images of 1024×1024 pixels with a voxel resolution of 6×6×29 nm/pixel; the raw images were acquired at a resolution of 3×3×29 nm/pixel using serial-section scanning electron microscopy (ssSEM) from mouse somatosensory cortex (Kasthuri et al. (2015); ISBI 2013 (2013)). U-RISC contains 120 annotated images (10000×10000 pixels) at a resolution of 2.18×2.18×70 nm/pixel from rabbit retina. Owing to the difference in species and tissue, U-RISC fills a gap by providing an annotated vertebrate retinal segmentation dataset. Besides that, U-RISC has other characteristics worth attention in future segmentation studies. First, both the image size and the physical size of U-RISC are much larger: the image size of U-RISC is 400 and 100 times that of ISBI2012 and SNEMI3D, respectively, and the physical size is 100 and 9 times theirs, respectively (Fig. 1(c)), which can support the development of deep learning based segmentation methods with various demands. Second, along with the iterative annotation procedure, U-RISC contains 3 sets of annotation results with increasing accuracy, which can serve as ground truth at different quality levels. Third, the total number of annotated images is 12 and 3.6 times that of the publicly annotated images of ISBI2012 and SNEMI3D, respectively (Fig. 1(d)). An example image with its label is shown in the Supplementary. Due to the size limit of the supplementary material, we only uploaded a quarter-size (5000×5000 pixels) crop of an original image with its label.

2.2 TRIPLE LABELING PROCESS

The high resolution of TEM images reveals much more detailed sub-cellular structure, which requires more patience to label the cell membranes (Fig. 2(a)). Besides, the imaging quality can be affected by many factors, such as section thickness or sample staining (Fig. 2(b)), and low imaging quality also demands more labeling effort.
Therefore, substantial labeling effort is essential to completely annotate U-RISC. To guarantee labeling accuracy, we set up an iterative correction mechanism in the labeling process (Fig. 3). Before starting the annotation, the labeling rules were introduced to all annotators, and 58 qualified annotators were allowed to participate in the final labeling process. After the first round of annotation, 5 experienced lab staff with sufficient background knowledge were responsible for pointing out labeling errors pixel by pixel during the second and third rounds of annotation. Finally, the third-round annotation results were regarded as the final "ground truth", and the previous two rounds of manual annotations were saved for later analysis. Fig. 3 shows an example of the two inspection processes. We can see that there are quite a few mislabeled and missing cell membranes in each round; therefore, the iterative correction mechanism is necessary.

3 PERCEPTION-BASED EVALUATION

In the analysis of EM data, membrane segmentation is generally an indispensable step. However, in the field of cell membrane segmentation, most previous studies, such as Zhou et al. (2018); Chaurasia & Culurciello (2017); Drozdzal et al. (2016), were not specifically designed for high-resolution datasets such as U-RISC. In addition, although many researchers have discussed various evaluation criteria for medical and general tasks, few have actually incorporated them into the design of cell membrane segmentation architectures. By comparing the segmentation results of popular and classic segmentation methods, we found that the widely used segmentation evaluation criteria were inconsistent with human perception in some cases, which is further examined through the perceptual consistency experiment (details in Sec. 3.2). To address this issue, we propose a new evaluation criterion called Perceptual Hausdorff Distance (PHD). The experimental results show that it is more consistent with human perception.

3.1 INCONSISTENCY BETWEEN EXISTING EVALUATION CRITERIA AND PERCEPTION

Many researchers have proposed metrics for segmentation evaluation (Yeghiazaryan & Voiculescu (2018); Niessen et al. (2000); Veltkamp & Hagedoorn (2000); Arganda-Carreras et al. (2015b); Lee et al. (2017)). The most popular of them, such as the F1 score, Dice coefficient, and IoU (Sasaki et al. (2007); Dice (1945); Kosub (2019)), are used for evaluation in most segmentation methods (Ronneberger et al. (2015); Zhou et al. (2018); Chaurasia & Culurciello (2017); Yu et al. (2017); Chen et al. (2019b)). The ISBI2012 cell segmentation challenge used Rand scores (V-Rand and V-Info) (Arganda-Carreras et al. (2015b)) on thinned membranes for evaluation. Recently, researchers have discussed various boundary overlap metrics for the evaluation of medical image segmentation (Yeghiazaryan & Voiculescu (2018)). The most popular evaluation criteria, such as the F1 score, are based on statistics of the degree to which pixels are classified correctly. There are also metrics based on point-set distances, such as ASSD (Yeghiazaryan & Voiculescu (2018)), which are not widely used in recent deep learning research. However, the quality of a segmentation should be judged with respect to the ultimate goal: when segmentation is used to reconstruct the whole structure of membranes and connect them, such pixel statistics may not be consistent with human perception in cell membrane segmentation tasks.
In the process of the segmentation experiments, some interesting phenomena were found. Fig. 4 shows an example of an original image with its manual annotation and the segmentation results of two methods, GLNet (Chen et al. (2019b)) and U-Net (Ronneberger et al. (2015)). The scores indicate that (d) is more similar to (b) than (c) is. However, if these segmentation results are used to reconstruct the structure of cells, the mistakes and loss of structure are more noticeable when subjects inspect the areas surrounded by the red dashed lines in the images. We therefore consider (c) the better prediction, because (d) misses some edges. The reason the three scores of (c) are lower is that the predicted cell membrane in (c) is thicker than the manual labeling. It can thus be inferred that the existing evaluation criteria might not be sufficiently robust to variations in the thickness and structure of the membrane, and that their results can be inconsistent with human perception.

3.2 PERCEPTUAL CONSISTENCY EXPERIMENTS

To verify the above conjecture, a subjective experiment was designed to explore the consistency between the existing evaluation criteria and human subjective perception. Six popular and classical segmentation methods were used to generate cell membrane segmentation results on U-RISC: U-Net (Ronneberger et al. (2015)), LinkNet (Chaurasia & Culurciello (2017)), CASENet (Yu et al. (2017)), SENet (Hu et al. (2018)), U-Net++ (Zhou et al. (2018)), and GLNet (Chen et al. (2019b)). From these segmentation results, 200 groups of images were randomly selected. Each group contained 3 images: the final manual annotation (ground truth) and two automatically generated segmentation results for the same input cell image. 20 subjects were recruited to participate in the experiments; they had either a biological background or experience in cell membrane segmentation and reconstruction. For each group, each of the 20 subjects had three choices: if the subject could tell which segmentation result was more similar to the ground truth, he or she chose that one; otherwise, the subject chose "Difficult to choose". The experiment interface is shown in Appendix I. Before the experiment, the subjects were briefed on the purpose and source of the images. During the experiment, the 200 groups of images were divided evenly into four batches to prevent the subjects from choosing randomly due to fatigue; within each batch, the subjects needed to complete the judgments continuously without interruption. After the experiment, a group was called valid if one of its options received more than 10 votes; otherwise, it was invalid and discarded. There were 113 valid groups in total. Based on these valid groups, the consistency of the F1 score, IoU, and Dice with human choices was calculated. According to our experimental results, the consistency of the F1 score, IoU, and Dice with human choice was only 34.51%, 35.40%, and 34.51%, respectively. It can thus be inferred that the three criteria are inconsistent with human subjective perception in most cases. More results and the design of the subjective experiments are shown in Appendices II and IV.
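For reference, the pixel-overlap scores and the per-group consistency check above can be sketched in a few lines of Python. This is our own illustration under stated assumptions (the helper names pixel_scores and metric_agrees are hypothetical), not the authors' evaluation code. Note that for binary masks the F1 score coincides with the Dice coefficient, which matches the identical 34.51% consistency reported for both.

```python
import numpy as np

def pixel_scores(pred: np.ndarray, gt: np.ndarray):
    """Pixel-overlap scores between two binary masks (True/1 = membrane)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # membrane pixels predicted correctly
    fp = np.logical_and(pred, ~gt).sum()   # predicted membrane, actually background
    fn = np.logical_and(~pred, gt).sum()   # missed membrane pixels
    f1 = 2 * tp / (2 * tp + fp + fn)       # identical to Dice for binary masks
    iou = tp / (tp + fp + fn)
    return float(f1), float(iou)

def metric_agrees(score_a: float, score_b: float, human_pick: str) -> bool:
    """A criterion counts as consistent on a valid group when it prefers the
    same candidate (the higher-scoring one) that the human majority chose."""
    return ("a" if score_a > score_b else "b") == human_pick
```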
3.3 PERCEPTUAL HAUSDORFF DISTANCE

Based on the subjective experimental results, it was verified that the widely used evaluation criteria for general segmentation were inconsistent with human perception of cell membrane segmentation. This paper therefore proposes a new human-perception based evaluation criterion, the Perceptual Hausdorff Distance (PHD for short), which considers the structure of the cell membrane while ignoring its thickness.

An overview of PHD. As Fig. 4 shows, from the perspective of neuronal reconstruction, the thickness of the cell membrane is not the key factor in evaluation. In fact, when the goal is to reconstruct the structure of cells, humans pay more attention to structural changes than to thickness changes. Hence, when measuring the similarity of two cell membrane segmentation results, in order to eliminate the influence of thickness, the two segmentation results are first skeletonized, and then the distance between the two skeletons is calculated to measure the difference. Since a skeleton is a collection of points, and the Hausdorff distance is a common distance for comparing two point sets, the proposed PHD is built upon the Hausdorff distance. In addition, the subjective experiments showed that people tend to ignore slight offsets between membranes. Based on these two considerations, the Perceptual Hausdorff Distance (PHD) was designed as a modified Hausdorff distance (Huttenlocher et al. (1993); Aspert et al. (2002); Rachasingho & Tasena (2020)). Fig. 5 shows an overview of PHD. The details are as follows.

Step 1. Skeletonize the segmentation results. Both segmentation results are thinned to one-pixel-wide skeletons, so that membrane thickness does not affect the comparison.

Step 2. Calculate the distance between skeletons. The Hausdorff distance is a common distance for measuring the difference between two point sets. Consider two unordered nonempty point sets $X$ and $Y$ and the Euclidean distance $d(x,y)$ between points $x \in X$ and $y \in Y$. The Hausdorff distance between $X$ and $Y$ is defined as

$$d_H(X,Y) = \max\{d_{X,Y},\, d_{Y,X}\} = \max\Big\{\max_{x \in X}\min_{y \in Y} d(x,y),\ \max_{y \in Y}\min_{x \in X} d(x,y)\Big\}, \quad (1)$$

which can be understood as the maximum of the shortest distances from each point set to the other. It is easy to prove that the Hausdorff distance is a metric (Choi (2019)). In the task of cell membrane segmentation, we should pay attention to the global distance between the two point sets, whereas the Hausdorff distance is sensitive to outliers. Therefore, the max operations are replaced with averages, which naturally yields an average distance between the two point sets. Furthermore, it was found that people tolerate small offsets between segmentation results: if the distance between two points is very small, people tend to ignore it. Therefore, a Tolerance Distance $t$ is defined, which represents human tolerance for small errors. The Perceptual Hausdorff Distance (PHD) is defined as

$$d_{PHD}(X,Y) = \frac{1}{|X|}\sum_{x \in X}\min_{y \in Y} d^{*}(x,y) + \frac{1}{|Y|}\sum_{y \in Y}\min_{x \in X} d^{*}(x,y), \quad (2)$$

$$d^{*}(x,y) = \begin{cases} \lVert x-y \rVert, & \lVert x-y \rVert > t \\ 0, & \lVert x-y \rVert \le t \end{cases} \quad (3)$$

To intuitively understand the influence of the tolerance distance in PHD, consider the toy cases (a) and (b) in Fig. 5. In case (a), the blue skeleton consists of 19 points and the orange one of 18 points. The two skeletons are close in Euclidean space but do not coincide. Among all the Euclidean distances $d(x,y)$ for $x \in X$ and $y \in Y$, the maximum distance is 2 pixels, and the most common distance is 1. When $t = 0$, meaning no error is tolerated, the PHD is high. If $t = 1$, the PHD value drops considerably. When $t = 2$, the PHD becomes 0. In case (b), there is a large offset between the two skeletons, and when $t$ is set within [2, 4], the PHD value declines slowly. When $t = 6$, which is the maximum distance between the two skeleton point sets, the PHD drops to 0. Different settings of $t$ thus represent different degrees of tolerance to the offset between the two skeletons, and in practical applications the tolerance distance can be chosen according to the situation.
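To make Eqs. (1)-(3) concrete, below is a minimal Python sketch of PHD. It is our own illustration rather than the authors' released code: we assume scikit-image's skeletonize for Step 1 and a k-d tree for the nearest-neighbour queries in Step 2, and the names phd and phd_consistency, as well as the groups structure, are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree
from skimage.morphology import skeletonize

def phd(mask_a: np.ndarray, mask_b: np.ndarray, t: float = 3.0) -> float:
    """Perceptual Hausdorff Distance between two binary membrane masks.

    Step 1: skeletonize both masks so that membrane thickness is ignored.
    Step 2: average the nearest-neighbour distances in both directions
    (Eq. 2), zeroing any distance within the tolerance t (Eq. 3).
    Assumes both skeletons are non-empty.
    """
    pts_a = np.argwhere(skeletonize(mask_a.astype(bool)))  # (N, 2) skeleton pixel coords
    pts_b = np.argwhere(skeletonize(mask_b.astype(bool)))
    d_ab, _ = cKDTree(pts_b).query(pts_a)  # for each point of A, distance to nearest point of B
    d_ba, _ = cKDTree(pts_a).query(pts_b)
    d_ab[d_ab <= t] = 0.0                  # tolerate small offsets
    d_ba[d_ba <= t] = 0.0
    return float(d_ab.mean() + d_ba.mean())

def phd_consistency(groups, t: float) -> float:
    """Fraction of valid groups on which PHD prefers the candidate chosen by
    the human majority; `groups` holds (gt, seg_a, seg_b, human_pick) tuples
    with human_pick in {"a", "b"}. A lower PHD means more similar."""
    hits = sum(("a" if phd(a, gt, t) < phd(b, gt, t) else "b") == pick
               for gt, a, b, pick in groups)
    return hits / len(groups)
```

Sweeping t with phd_consistency (e.g., from 0 to 800 as in Fig. 6) would trace out the consistency curve discussed next.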
Consistency between PHD and human perception. The consistency with human perception of PHD and of existing related criteria (TPVF, TNVF, Prec, RVD, Hausdorff, ASSD, V-Rand, and V-Info) was also calculated based on the subjective experimental results (as described in Section 3.2; the formulas can be found in Appendix V). The results showed that, compared with the other criteria, PHD with an appropriate tolerance distance was more consistent with human perception. As shown in Fig. 6, as the tolerance distance $t$ of PHD increases from 0 to 800, PHD's consistency with human perception first rises and then drops slowly to 0, suggesting that human vision does tolerate a certain amount of offset. Specifically, the maximum consistency of 65.48% is reached when the tolerance distance $t$ is set to 3, suggesting that our perception tolerates small perturbations. It is worth noting that the optimal PHD consistency (65.48%) is nearly double the consistency scores obtained by pixel-error based metrics such as the F1 score. Our experiment also shows that most of the compared criteria (color bars in Fig. 6) can be improved to a certain extent by skeletonizing the segmentation results before evaluation: their consistency with human perception reaches only about 30% when calculated on the original images, but improves by about 10% on skeletons. Even the best of these metrics on skeletons (ASSD) achieves only 52.43%, significantly lower than PHD with $t = 3$. Therefore, it can be concluded that PHD is more consistent with human perception.

4 RE-EXAMINING PHD ON CLASSIC DEEP LEARNING BASED SEGMENTATION METHODS WITH U-RISC

In the previous two sections, we proposed a new ultra-high resolution cell membrane segmentation dataset, U-RISC, and a new perceptual criterion, PHD, to help address the two bottlenecks in the field of cell membrane segmentation. The subjective experiment on a small-scale dataset demonstrated that PHD is more consistent with human perception than some widely used criteria for evaluating cell membrane segmentation. To understand the performance of deep learning methods on the U-RISC dataset, we conducted an in-depth investigation on U-RISC with representative deep learning based segmentation methods and different evaluation criteria. Specifically, we chose six representative algorithms (U-Net (Ronneberger et al. (2015)), LinkNet (Chaurasia & Culurciello (2017)), CASENet (Yu et al. (2017)), SENet (Hu et al. (2018)), U-Net++ (Zhou et al. (2018)), and GLNet (Chen et al. (2019b))) and re-implemented them on the U-RISC dataset. Then eleven evaluation criteria were used to compare the segmentation results: F1 score, IoU, TPVF, TNVF, Prec, RVD, Hausdorff, ASSD, V-Rand, V-Info, and PHD. As mentioned in Sec. 2, the results of the first two rounds of manual labeling are retained; therefore, the manual annotations are also analyzed under the different evaluation criteria.

Experiment Settings. All six methods use the same training and testing data for comparison, and the parameters and loss functions are the same as proposed in their original papers. The parameters for each method and other details are shown in Appendix IV.
Experiment Results. The experiment results are shown in Table 1. The table gives the scores of the different evaluation criteria for the first two rounds of manual annotation and for the six segmentation results, all computed against the ground truth. Our first finding is that U-RISC is a challenging dataset in the field of cell membrane segmentation. As shown in Table 1, the deep learning based methods reach F1 scores of only around 0.6 on U-RISC, far below the human level of 0.98-0.99 (the first-round annotation performance), whereas they all exceed 0.95 on ISBI 2012. Despite possible improvements from parameter tuning, on such ultra-high resolution images there is clearly a huge gap between the current popular segmentation methods and human performance. Our second finding is that the evaluation rankings under F1 score, IoU, V-Rand-sk, and V-Info-sk are largely consistent with each other but differ from the PHD-based ranking. Specifically, PHD tends to favor CASENet, while none of the other metrics choose CASENet as the best method. According to the subjective experimental results of Sec. 3.2, PHD is much closer to human perception. Therefore, the change of ranking brought by PHD may inspire researchers to reconsider the evaluation criteria for cell membrane segmentation algorithms. It also provides a new perspective for promoting the development of segmentation algorithms.

Discussion. Based on the results of these six algorithms, it can be seen that LinkNet and CASENet are better than the other methods. From the perspective of network design, LinkNet makes full use of low-level local information by directly connecting each low-level encoder to the decoder of corresponding size. This design pays more attention to capturing local information, which leads to more accurate local predictions. CASENet takes full account of the continuity of edges and lets low-level features strengthen high-level semantic information through skip links between low-level and high-level features, which pays more attention to structural information. Therefore, the design of LinkNet might be preferred by the traditional evaluation criteria, while CASENet might be preferred by PHD. This also explains why the two methods rank differently under these two types of evaluation criteria. More local segmentation results of the different algorithms are shown in Appendix III. In addition, as an example, we add experiments using U-Net, CASENet, and LinkNet on the ISBI2012 and SNEMI3D datasets (Appendices VI, VIII, IX). The results in Appendix VI show that U-Net with our chosen parameters performs close to the SOTA on ISBI2012 (ours: V-Rand=0.9689, V-Info=0.9723; SOTA: V-Rand=0.9837, V-Info=0.9878 (on skeletons)) and SNEMI3D (ours: V-Rand=0.9389; SOTA: V-Rand=0.9751 (on skeletons)), although we made little effort at parameter tuning. However, with the same parameter setting, U-Net gets poor scores (V-Rand=0.5288, V-Info=0.5178) on U-RISC. Such a big gap in performance between U-RISC and previous datasets underlines the challenge posed by the U-RISC dataset, which will hopefully motivate novel designs of machine learning methods in the future. The results in Appendices VIII and IX likewise show that the evaluation rankings under F1 score, IoU, V-Rand-sk, and V-Info-sk are consistent with each other but differ from the PHD-based rankings.

5 DISCUSSION AND CONCLUSION

This paper aims to address the two bottlenecks in the development of cell membrane segmentation.
Firstly, we proposed U-RISC, an Ultra-high Resolution Image Segmentation dataset for Cell membrane, the largest annotated EM dataset for cell membranes so far. To the best of our knowledge, U-RISC is the only annotated EM dataset with multiple iterative annotations and uncompressed high-resolution raw image data. During the analysis of U-RISC, we found a certain inconsistency between current segmentation evaluation criteria (e.g., F1 score, IoU) and human perception. Therefore, this article secondly proposed a human-perception based evaluation criterion, called Perceptual Hausdorff Distance (PHD). Through a subjective experiment on a small-scale dataset, the results demonstrated that the new criterion is more consistent with human perception for the evaluation of cell membrane segmentation. In addition, the PHD criterion and existing classic deep learning segmentation methods were re-examined. In future research, we will consider how to improve deep learning segmentation methods from the perspective of cell membrane structure and apply the PHD criterion to connectomics research. More discussions are given in Appendix VII.

A APPENDIX

I. Fig. 7 is the interface of the perceptual consistency experiments.
II. Fig. 8 and Fig. 9 show examples of the subjective experiment images.
III. Fig. 10 and Fig. 11 show examples of the segmentation results of different algorithms.
IV. Experiment Details.
4.1 Subjective Experiment. 1) Firstly, before testing, the 20 human raters were introduced to the value of cell membrane segmentation for connectivity analysis and the importance of structure; we then used several simple examples to teach them the experimental process. 2) During the formal experiment, the distribution and selection of data were random; the subjects only needed to choose the one of the two images that they thought was more similar to the ground truth. 3) The 200 groups of images for the subjective experiment were randomly selected from the segmentation results produced by the above six methods; the training data therefore came from the same dataset as the images used to create the 200 groups that the 20 raters evaluated. 4) To ensure the continuity of the experiment, each subject was asked to judge each group of images within a specified time (less than 10 minutes). 5) To prevent fatigue over a long experiment, the 200 groups of images were distributed to the subjects evenly over four sessions.
4.2 Experiments on the U-RISC dataset. All six methods use the same training and testing data for comparison, and the parameters and loss functions are the same as proposed in their original papers. In the training stage, 60% of the dataset was used as training data; the original images were randomly cut into 1024×1024 patches to generate 50,000 training images and 20,000 validation images. Random flipping and clipping were used for data augmentation. Four V100 GPUs were used to train each algorithm. In the testing stage, each original image was cut into patches of the same size as the training images and each patch was tested; these patches were eventually spliced back to the original size for evaluation.
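As an illustration of the patch pipeline just described, the following sketch shows random 1024×1024 patch extraction with flipping for training and tile-wise prediction with splicing for testing. It is a simplified reconstruction under our own assumptions (the function names are hypothetical, and padding of ragged edge tiles is omitted), not the authors' code.

```python
import numpy as np

def random_patches(image, label, n, size=1024, seed=0):
    """Yield n randomly cropped, aligned (image, label) patches with a
    random horizontal flip, mirroring the training setup described above."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    for _ in range(n):
        y = int(rng.integers(0, h - size + 1))
        x = int(rng.integers(0, w - size + 1))
        img = image[y:y + size, x:x + size]
        lab = label[y:y + size, x:x + size]
        if rng.random() < 0.5:                       # random horizontal flip
            img, lab = img[:, ::-1], lab[:, ::-1]
        yield img, lab

def tile_predict(image, predict, size=1024):
    """Cut a test image into tiles, run `predict` on each tile, and splice
    the predictions back to the original size (edge tiles may be smaller;
    padding is omitted for brevity)."""
    h, w = image.shape[:2]
    out = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h, size):
        for x in range(0, w, size):
            out[y:y + size, x:x + size] = predict(image[y:y + size, x:x + size])
    return out
```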
V. Formulas of the criteria mentioned in the text. The formulas of the metrics we compared are shown in Table 3. The symbols in the formulations are explained as follows.
- Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances, while recall (also known as sensitivity) is the fraction of the total amount of relevant instances that were actually retrieved.
- TP (true positives), TN (true negatives), FP (false positives), and FN (false negatives) compare the classifier's predictions with the ground truth. The terms positive and negative refer to the classifier's prediction, and the terms true and false refer to whether that prediction corresponds to the ground truth.
- $X$ and $Y$ are two point sets; $x$ and $y$ are points in $X$ and $Y$ respectively, and $d(x,y)$ is the Euclidean distance between them.
- In V-Rand (Arganda-Carreras et al. (2015b)), suppose that $S$ is the predicted segmentation and $T$ is the ground truth segmentation. Define $p_{ij}$ as the probability that a randomly chosen pixel belongs to segment $i$ in $S$ and segment $j$ in $T$. This joint probability distribution satisfies the normalization condition $\sum_{ij} p_{ij} = 1$. The marginal $s_i = \sum_j p_{ij}$ is the probability that a randomly chosen pixel belongs to segment $i$ in $S$, and the marginal $t_j = \sum_i p_{ij}$ is defined similarly.
- In V-Info (Arganda-Carreras et al. (2015b)), the mutual information $I(S;T) = \sum_{ij} p_{ij} \log p_{ij} - \sum_i s_i \log s_i - \sum_j t_j \log t_j$ is a measure of similarity between $S$ and $T$, and $H(S) = -\sum_i s_i \log s_i$ is the entropy function.
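As a concrete reading of these definitions, the sketch below computes F-score versions of V-Rand and V-Info from two integer-labeled segmentations, using the joint distribution $p_{ij}$ and its marginals as defined above. This is our interpretation only; the official challenge evaluation additionally thins boundaries and handles border cases differently, so treat it as illustrative.

```python
import numpy as np

def vrand_vinfo(seg: np.ndarray, gt: np.ndarray):
    """Foreground-restricted Rand and information-theoretic F-scores in the
    spirit of Arganda-Carreras et al. (2015b); seg and gt hold integer
    segment ids per pixel, with 0 meaning background/boundary."""
    mask = gt.ravel() > 0                         # foreground-restricted
    s_ids, t_ids = seg.ravel()[mask], gt.ravel()[mask]
    # Joint distribution p_ij over (segment in S, segment in T).
    joint = np.zeros((s_ids.max() + 1, t_ids.max() + 1))
    np.add.at(joint, (s_ids, t_ids), 1)
    p = joint / joint.sum()
    s, t = p.sum(axis=1), p.sum(axis=0)           # marginals s_i, t_j

    def H(q):                                     # entropy, ignoring zeros
        q = q[q > 0]
        return -(q * np.log(q)).sum()

    v_rand = (p ** 2).sum() / (0.5 * (s ** 2).sum() + 0.5 * (t ** 2).sum())
    mi = H(s) + H(t) - H(p.ravel())               # I(S;T) = H(S)+H(T)-H(S,T)
    v_info = mi / (0.5 * H(s) + 0.5 * H(t))
    return float(v_rand), float(v_info)
```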
VI. Experiments on ISBI2012 and SNEMI3D with U-Net (Table 4).
VII. Further Discussion. Although the content of this paper mainly involves EM cell segmentation, we believe its significance goes beyond that. According to the experimental results, the popular methods do not perform well on our dataset (they exceed 95% on ISBI2012, but reach only about 60% on U-RISC). This shows that U-RISC is a challenging dataset that can promote the development of related machine learning and deep learning methods. The U-RISC dataset may expose several classic challenges in the field that have not been solved. One challenge might be the "imbalance problem of samples" (Alejo et al. (2016); Li et al. (2010); Zhang et al. (2020)): due to the ultra-high resolution, the labeled cell membrane pixels account for only 5.64% of the total pixels in the training set, in contrast to 21.96% in ISBI2012 and 33.23% in SNEMI3D. Future deep learning methods designed for U-RISC will have to address this issue. Other challenges include, e.g., ultra-high resolution image segmentation (Demir et al. (2018); Zhao et al. (2018); Chen et al. (2019a)), appropriate loss function design (Sudre et al. (2017); Spiring (1993); Choromanska et al. (2015)), and the issues related to "unclosed" edges, as suggested by Reviewer 3. Taken together, we strongly believe that the U-RISC dataset can make a great contribution to technical novelty by revealing defects in existing popular methods and promoting novel algorithms that solve classic challenges in the machine learning and deep learning community. In addition, the design of evaluation criteria has received wide attention in computer science (Gerl et al. (2020); Lin et al. (2015); Liu et al. (2018)). The PHD we propose may inspire researchers from a new perspective and further promote the development of algorithms. The technical novelties of the PHD metric lie in many aspects. To list a few: (1) It can potentially be used in other tasks, such as vascular segmentation (Gerl et al. (2020)), bone segmentation (Lin et al. (2015)), edge detection (Liu et al. (2018)), and other tasks related to structural and shape information. For example, Gerl et al. (2020) successfully used a distance-based criterion to improve skin layer segmentation in optoacoustic images. (2) It can be modified into a loss function, which is part of our ongoing work; it is worth noting that some works have successfully integrated the Hausdorff distance into loss functions (Genovese et al. (2012); Karimi & Salcudean (2019); Ribera et al. (2019)); see the sketch at the end of this appendix.
VIII. Experiments on ISBI2012 with U-Net, CASENet, and LinkNet (Table 5).
IX. Experiments on SNEMI3D with U-Net, CASENet, and LinkNet (Table 6).
[Figure: example segmentation result panels; the panel labels read F1 score = 0.3880, 0.5084, 0.5347, 0.5084, 0.5386, 0.5482.]
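Regarding point (2) above, on turning distance-based criteria into losses, here is a simplified NumPy sketch in the spirit of the distance-transform based Hausdorff loss of Karimi & Salcudean (2019). It is our own illustrative variant, not the exact loss from that paper or from this one; in a real training loop the distance transforms would typically be computed on detached or thresholded tensors so the loss stays differentiable in the prediction.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def hausdorff_dt_loss(pred: np.ndarray, gt: np.ndarray, alpha: float = 2.0) -> float:
    """Distance-transform-weighted loss sketch: squared pixel errors are
    weighted by how far they lie from the true and predicted membranes, so
    structurally large mistakes cost more than mere thickness mistakes.
    pred: soft prediction in [0, 1]; gt: binary ground-truth mask."""
    dt_gt = distance_transform_edt(gt == 0)        # distance to nearest true membrane pixel (0 on it)
    dt_pred = distance_transform_edt(pred < 0.5)   # distance to nearest predicted membrane pixel
    return float(np.mean((pred - gt) ** 2 * (dt_gt ** alpha + dt_pred ** alpha)))
```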
1. What are the strengths and weaknesses of the paper regarding its contributions to cell membrane segmentation?
2. How does the paper address the need for new datasets to identify potential weaknesses in current methods?
3. What are some concerns regarding the suitability of the proposed metrics for evaluating cell segmentation performance?
4. Why is it important to consider human raters' instructions and usability when evaluating segmentation results?
5. How can the lack of publication of the dataset in a challenge format limit the usefulness of the paper?
6. Are there any spelling errors or areas where the writing could be improved?
Review
Review

pros:
To the author's and reviewer's best knowledge, this paper includes the largest annotated public EM data set for cell membrane segmentation (in case it is published with this paper). Until now, the ISBI 2012 challenge (http://brainiac2.mit.edu/isbi_challenge/) has dominated the evaluation of cell membrane segmentation in EM data, even though the performance is nearly saturated. New datasets can identify potential weaknesses in similar domains that are not yet covered by current datasets and state-of-the-art methods. The discussion about suitable segmentation metrics for cell membrane segmentation is important and must be continued. The article is written in a clear and comprehensive manner.

cons:
The discussion about appropriate metrics for cell segmentation that do not depend on the thickness of the segmented cell membrane has been extensively elaborated in "Crowdsourcing the creation of image segmentation algorithms for connectomics" by Ignacio Arganda-Carreras et al., Frontiers in Neuroanatomy 2015 (9) 142: pp. 1-13. However, this paper is not referenced and the metrics proposed therein are not mentioned or compared. The evaluated "state-of-the-art" methods are not state-of-the-art: they do not correspond to the top entries of the current ISBI Segmentation Challenge leaderboard. Additionally, no parameters of the methods were adapted. It is left unclear how the 20 human raters were instructed to evaluate the segmentation results. For a correct evaluation, not (only) intuitive human perception must be taken into account, but also the usability of the resulting segmentation. The segmentation results on high-resolution EM data presented in this paper display many "unclosed" edges, which lead to severe problems when using the segmentation as a basis for connectivity analysis. To the reviewer's understanding, the proposed Perceptual Hausdorff Distance will hardly penalize these errors. The dataset is not published in the format of a challenge, which would allow benchmarking on a private test set. Spelling should be revised.

Summary
The presented new high-quality dataset is highly valuable to the community for improving and developing methods for instance segmentation, specifically cell membrane segmentation. The segmentation of thin cell boundaries imposes different challenges and includes different priors than other domains of instance segmentation. The use of appropriate evaluation metrics is crucial to identify suitable and successful methods in experiments and must be critically discussed, including domain knowledge. However, in the presented paper, the discussion about suitable metrics is not appropriately linked to the existing literature. A metric is proposed that is (more) consistent with "human perception". This is an interesting aspect, but its contribution to the successful analysis of neuronal connectivity from EM data remains unclear.
ICLR
Title Human Perception-based Evaluation Criterion for Ultra-high Resolution Cell Membrane Segmentation Abstract Computer vision technology is widely used in biological and medical data analysis and understanding. However, there are still two major bottlenecks in the field of cell membrane segmentation, which seriously hinder further research: lack of sufficient high-quality data and lack of suitable evaluation criteria. In order to solve these two problems, this paper first introduces an Ultra-high Resolution Image Segmentation dataset for the Cell membrane, called U-RISC, the largest annotated Electron Microscopy (EM) dataset for the Cell membrane with multiple iterative annotations and uncompressed high-resolution raw data. During the analysis process of the U-RISC, we found that the current popular segmentation evaluation criteria are inconsistent with human perception. This interesting phenomenon is confirmed by a subjective experiment involving twenty people. Furthermore, to resolve this inconsistency, we propose a new evaluation criterion called Perceptual Hausdorff Distance (PHD) to measure the quality of cell membrane segmentation results. Detailed performance comparison and discussion of classic segmentation methods along with two iterative manual annotation results under existing evaluation criteria and PHD is given. 1 INTRODUCTION Electron Microscopy (EM) is a powerful tool to explore ultra-fine structures in biological tissues, which has been widely used in the research areas of medicine and biology ( ERLANDSON (2009); Curry et al. (2006); Harris et al. (2006)). In recent years, EM techniques have pioneered an emerging field called “Connectomics” (Lichtman et al. (2014)), which aims to scan and reconstruct the whole brain circuitry at the nanoscale. “Connectomics” has played a key role in several ambitious projects, including the BRAIN Initiative ( Insel et al. (2013)) and MICrONS ( Gleeson & Sawyer (2018)) in the U.S., Brain/MINDS in Japan ( Dando (2020)), and the China Brain Project ( Poo et al. (2016)). Because EM scans brain slices at the nanoscale, it produces massive images with ultra-high resolution and inevitably leads to the explosion of data. However, compared to the advances of EM, techniques of data analysis fall far behind. In particular, how to automatically extract information from massive raw data to reconstruct the circuitry map has growingly become the bottleneck of EM applications. One critical step in automatic EM data analysis is Membrane segmentation. With the introduction of deep learning techniques, significant improvements have been achieved in several public available EM datasets ISBI 2012 and SNEMI3D ( ISBI 2012 (2012); ISBI 2013 (2013); Arganda-Carreras et al. (2015b); Lee et al. (2017)). One of the earliest works ( Ciresan et al. (2012) used a succession of max-pooling convolutional networks as a pixel classifier, which estimated the probability of a pixel is a membrane. Ronneberger et al. (2015) presented a U-Net structure with contracting paths, which captures multi-contextual information. Fully convolutional networks (FCNs) proposed by Long et al. (2015) led to a breakthrough in semantic segmentation. Follow-up works based on Unet and FCN structure ( Xie & Tu (2015); Drozdzal et al. (2016); Hu et al. (2018); Zhou et al. (2018); Chaurasia & Culurciello (2017); Yu et al. (2017); Chen et al. (2019b)) have also achieved outstanding results near-human performance. 
Despite much progress that has been made in cell membrane segmentation for EM data thanks to deep learning, one risk to these popular and classic methods is that they might be “saturated” at the current datasets as their performance appear to be “exceedingly accurate” ( Lee et al. (2017)). How can these classic deep learning based segmentation methods work on new EM datasets with higher resolution and perhaps more challenges? Moreover, how robust of these methods when they are compared with human performance on such EM images? To expand the research of membrane segmentation on more comprehensive EM data, we first established a dataset “U-RISC” containing images with original resolution (10000 × 10000 pixels, Fig. 1). To ensure the quality of annotation, it also costs us over 10,000 labor hours to label and double-check the data. To the best of our knowledge, U-RISC is the largest uncompressed annotated and EM dataset today. Next, we tested several classic deep learning based segmentation methods on U-RISC and compared the results to human performance. We found that the performance of these methods was much lower than that of the first annotation. To understand why human perception is better than the popular segmentation methods, we examined in detail the Membrane segmentation results by these popular segmentation methods. How to measure the similarity between two image segmentation results has been widely discussed ( Yeghiazaryan & Voiculescu (2018); Niessen et al. (2000); Veltkamp & Hagedoorn (2000); Lee et al. (2017)). Varduhi Yeghiazaryan ( Yeghiazaryan & Voiculescu (2018)) discussed the family of boundary overlap metrics for the evaluation of medical image segmentation. Veltkamp. etc ( Veltkamp & Hagedoorn (2000)) formulated and summed up the similarity measures in a more general condition. In some challenges, such as ISBI2012 ( Arganda-Carreras et al. (2015a)), they also considered multiple metrics like Rand score on both original images and thinned images. However, we found there was a certain inconsistency between current most popular evaluation criteria for segmentation(e.g. F1 score, IoU) and human perception: while some figures were rated significantly lower in F1 score or IoU, they were “perceived” better by humans (Fig. 4). Such inconsistency motivated us to propose a human-perception based criterion, Perceptual Hausdorff Distance (PHD) to evaluate image qualities. Further, we set up a subjective experiment to collect human perception about membrane segmentation, and we found the PHD criteria is more consistent with human choices than traditional evaluation criteria. Finally, we found the current popular and classical segmentation methods need to be revisited with PHD criteria. Overall, our contribution in this work lies mainly in the following two parts: (1) we established the largest, original image resolution-based EM dataset for training and testing; (2) we proposed a human-perception based evaluation criterion, PHD, and verified the superiority of PHD by subjective experiments. The dataset we contributed and the PHD criterion we proposed may help researchers to gain insights into the difference between human perception and conventional evaluation criteria, thus motivate the further design of the segmentation method to catch up with the human performance on original EM images. 2 U-RISC: ULTRA-HIGH RESOLUTION IMAGE SEGMENTATION DATASET FOR CELL MEMBRANE Supervised learning methods rely heavily on high-quality datasets. 
To alleviate the lack of training data for cell membrane segmentation, we proposed an Ultra-high Resolution Image Segmentation dataset for Cell membrane, called U-RISC. The dataset was annotated upon RC1, a large scale retinal serial section transmission electron microscopic (ssTEM) dataset, publically available upon request and described in the work of Anderson et al. (2011). The original RC1 dataset is a 0.25mm diameter, 370 TEM slices volume, spanning the inner nuclear, inner plexiform, and ganglion cell layers, acquired at 2.18 nm/pixel across both axes and 70nm thickness in z-axis. From the 370 serial-section volume, we clipped out 120 images in the size of 10000 ×10000 pixels from randomly chosen sections. Then, we manually annotated the cell membranes in an iterative annotation-correction procedure. Since the human labeling process is very valuable for uncovering the human learning process, during the relabeling process, we reserved the intermediate results for public release. The U-RISC dataset will be released on https://Anonymous.com on acceptance. 2.1 COMPARISON WITH OTHER DATASETS ISBI 2012 (Cardona et al. (2010)) published a set of 30 images for training, which were captured from the ventral nerve cord of a Drosophila first instar larva at a resolution of 4×4×50 nm/pixel through ssTEM (Arganda-Carreras et al. (2015b); ISBI 2012 (2012)). Each image contains 512×512 pixels, spanning a realistic area of 2×2 µm approximately. In the challenge of SNEMI3D (Kasthuri et al. (2015); ISBI 2013 (2013)), the training data is a 3D stack of 100 images in the size of 1024×1024 pixels with the voxel resolution of 6×6×29 nm/pixel. The raw images were acquired at the resolution of 3×3×29 nm/pixel using serial section scanning electron microscopy (ssSEM) from mouse somatosensory cortex (Kasthuri et al. (2015); ISBI 2013 (2013)). U-RISC contains 120 pieces of annotated images (10000×10000 pixels) at the resolution of 2.18×2.18×70 nm/pixel from rabbit retina. Due to the difference of species and tissue, U-RISC can fill in the blank of annotated vertebrate retinal segmentation dataset. Besides that, U-RISC has some other characteristics which can be focused on in the future segmentation study. The first one is that the image size and realistic size of U-RISC is much larger, specifically, the image size of U-RISC is 400 and 100 times of ISBI2012 and SNEMI3D respectively, and the realistic size is 100 and 9 times of them respectively (Fig. 1 (c)), which can be applied in developing deep learning based segmentation methods according to various demands. And along with the iterative annotation procedure U-RISC actually contains 3 sets of annotation results with increasing accuracy, which could serve as ground truth at different level standard. And the total number of annotated images is 12 and 3.6 times of the public annotated images of ISBI2012 and SNEMI3D respectively (Fig. 1 (d)). An example of the image with its label is shown in the Supplementary. Due to the limitation of the size of the supplementary material, we only uploaded a quarter (5000 ×5000 pixels) size of the original image with its label. 2.2 TRIPLE LABELING PROCESS The character of high resolution in TEM image can display a much more detailed sub-cellular structure, which requests more patience to label out the cell (Fig. 2(a)). Besides, the imaging quality can be affected by many factors, such as section thickness or sample staining (Fig. 2(b)). And low imaging quality also requests more labeling efforts. 
Therefore, increasing labeling efforts is essential to completely annotate U-RISC. To guarantee the labeling accuracy, we set up an iterative correction mechanism in the labeling process (Fig. 3). Before starting the annotation, labeling rules were introduced to all annotators. 58 qualified annotators were allowed to participate in the final labeling process. After the first round annotation, 5 experienced lab staff with sufficient background knowledge were responsible to point out labeling errors pixel by pixel during the second and the third rounds of annotation. Finally, the third round annotation results were regarded as the final “ground truth”. And previous two rounds of manual annotations are also saved for later analysis. Fig. 3 shows an example of the two inspection processes. We can see that there are quiet a few mislabeled and missed labeled cell membranes in each round. Therefore, the iterative correction mechanism is very necessary. 3 PERCEPTION-BASED EVALUATION In the analysis of EM data, membrane segmentation is generally an indispensable key step. However, in the field of cell membrane segmentation, most of the previous studies, such as Zhou et al. (2018); Chaurasia & Culurciello (2017); Drozdzal et al. (2016), were not specifically designed for high resolution datasets such as U-RISC. In addition, although many researchers discussed various evaluation criteria for medical and general tasks, few researchers actually incorporate them into the design of the architectures of cell membrane segmentation methods. By comparing the segmentation results of the popular and classic segmentation methods, we found that the widely used evaluation criteria of segmentation were inconsistent with human perception in some cases, which is further discussed through the perceptual consistency experiment (details in Sec. 3.2). To address this issue, we proposed a new evaluation criterion called Perceptual Hausdorff Distance (PHD). The experimental results showed that it was more consistent with human perception. 3.1 INCONSISTENCY BETWEEN EXISTING EVALUATION CRITERIA AND PERCEPTION Many researchers have proposed various metrics for segmentation evaluation ( Yeghiazaryan & Voiculescu (2018); Niessen et al. (2000); Veltkamp & Hagedoorn (2000); Arganda-Carreras et al. (2015b); Lee et al. (2017)). Some of them, which are the most popular, such as F1 score, Dice Coefficient and IoU ( Sasaki et al. (2007); Dice (1945); Kosub (2019)) are used as the evaluations in most segmentation methods ( Ronneberger et al. (2015); Zhou et al. (2018); Chaurasia & Culurciello (2017); Yu et al. (2017); Chen et al. (2019b)). ISBI2012 cell segmentation challenge used Rand scores (V-Rand and V-Info) ( Arganda-Carreras et al. (2015b)) on thinned membrane for evaluation. Recently, researchers made discussions on various boundary overlap metrics for the evaluation of medical image segmentation (Yeghiazaryan & Voiculescu (2018)). The most popular evaluation criteria, such as F1 score, are based on the statistics of the degree to which pixels are classified correctly. There are also some metrics designed based on point set distance, such as ASSD ( Yeghiazaryan & Voiculescu (2018)), which is not widely used in recent deep learning researches. However, quality of segmentation should be judged with respect to the ultimate goal. When we need to use segmentation to reconstruct the whole structure of membranes and connect them, such statistics may not be consistent with human perception in cell membrane segmentation tasks. 
In the process of segmentation experiments, some interesting phenomena were found. Fig. 4 shows an example of the original image with its manual annotation and segmentation results by two methods GLNet ( Chen et al. (2019b)) and U-Net ( Ronneberger et al. (2015)). The scores indicated that (d) was more similar to (b) than (c). It should be noted that if these segmentation results are used for reconstructing the structure of cells, the mistakes and loss of structure will be more noticeable when subjects inspect the area surrounded by the red dashed lines in the images, Therefore we consider that (c) is a better prediction, because (d) misses some edges. The reason for the three scores of (c) are lower was that the predicted cell membrane of (c) was thicker than manual labeling. Therefore, it can be inferred that the existing evaluation criteria might not sufficiently robust to variations in the thickness and structures of the membrane, and the evaluation result was more likely inconsistent with human perception. 3.2 PERCEPTUAL CONSISTENCY EXPERIMENTS In order to verify the above conjecture, a subjective experiment was designed to explore the consistency with the existing evaluation criteria and human subjective perception. Six popular and classical segmentation methods were used to generate cell membrane segmentation results on URISC: U-net ( Ronneberger et al. (2015)), LinkNet( Chaurasia & Culurciello (2017)), CASENet ( Yu et al. (2017)), SENet ( Hu et al. (2018)), U-Net++ ( Zhou et al. (2018)),and GLNet ( Chen et al. (2019b)). Using these segmentation results, 200 groups of images were randomly selected. Each group contained 3 images: the final manual annotation (ground truth) and two automatically generated segmentation results for the same input cell image. 20 subjects were recruited to participate in the experiments. They had either a biological background or experience in cell membrane segmentation and reconstruction. For each group, each of the 20 subjects had three choices. If the subject can tell which segmentation result is more similar to the ground truth, he or she can choose which one. Otherwise, the subject can choose “Difficult to choose”. The experiment interface is shown in the Appendix I. Before the experiment, the subjects were trained on the purpose and source of the images. During the experiment, 200 groups of images were divided into four groups on average in order to prevent the subjects from choosing randomly due to fatigue. For each batch of groups, the subjects needed to complete the judgment continuously without interruption. After the experiment, for each group, if there were more than 10 votes of the same number, it was called a valid group. Otherwise, it was invalid and discarded. There were a total of 113 valid groups. Then, based on these valid groups, the consistency of the F1 score, IoU, and Dice with human choices was calculated. According to our experimental results, the consistency of F1 score, IoU, and Dice with human choice was only 34.51%, 35.40%, and 34.51%, respectively. Therefore, it can be inferred that the three criteria are not consistent with human subjective perception in most cases. More results and design of subjective experiments are shown in the Appendix II, and IV,. 3.3 PERCEPTUAL HAUSDORFF DISTANCE Based on the subjective experimental results, it was verified that the widely used evaluation criteria for general segmentation were inconsistent with human perception of cell membrane segmentation. 
3.3 PERCEPTUAL HAUSDORFF DISTANCE

Based on the subjective experimental results, it was verified that the widely used evaluation criteria for general segmentation are inconsistent with human perception of cell membrane segmentation. This paper therefore proposes a new human-perception-based evaluation criterion, the Perceptual Hausdorff Distance (PHD for short), which considers the structure of the cell membrane while ignoring its thickness.

An Overview of PHD. As Fig. 4 shows, from the perspective of neuronal reconstruction, the thickness of the cell membrane is not key for evaluation. In fact, when the goal is to reconstruct the structure of cells, humans pay more attention to structural changes than to thickness changes. Hence, when measuring the similarity of two cell membrane segmentation results, we first skeletonize both segmentation results to eliminate the influence of thickness, and then calculate the distance between the two skeletons. Since a skeleton is a set of points, and the Hausdorff distance is a common distance between two point sets, the proposed PHD is built upon the Hausdorff distance. In addition, the subjective experiments showed that people tend to ignore slight offsets between membranes. Based on these two considerations, we designed the Perceptual Hausdorff Distance (PHD), a modification of the Hausdorff distance (Huttenlocher et al. (1993); Aspert et al. (2002); Rachasingho & Tasena (2020)). Fig. 5 shows an overview of PHD. The details are as follows.

Step 1. Skeletonize the two segmentation results.

Step 2. Calculate the distance between skeletons. The Hausdorff distance is a common distance between two point sets. Consider two unordered nonempty point sets $X$ and $Y$ and the Euclidean distance $d(x,y)$ between points $x \in X$ and $y \in Y$. The Hausdorff distance between $X$ and $Y$ is defined as

$$d_H(X,Y) = \max\{\, d_{X,Y},\ d_{Y,X} \,\} = \max\Big\{ \max_{x \in X} \min_{y \in Y} d(x,y),\ \max_{y \in Y} \min_{x \in X} d(x,y) \Big\}, \quad (1)$$

which can be understood as the maximum of the shortest distances from one point set to the other. It is easy to prove that the Hausdorff distance is a metric (Choi (2019)). In the task of cell membrane segmentation, we care about the global distance between the two point sets, whereas the Hausdorff distance is sensitive to outliers. We therefore replace the max operations with averages, which naturally yields the average distance between the two point sets. Furthermore, it was found that people tolerate small offsets between segmentation results: if the distance between two points is very small, people tend to ignore it. We therefore define a Tolerance Distance $t$, which represents human tolerance for small errors. The Perceptual Hausdorff Distance (PHD) is defined as

$$d_{PHD}(X,Y) = \frac{1}{|X|} \sum_{x \in X} \min_{y \in Y} d^*(x,y) + \frac{1}{|Y|} \sum_{y \in Y} \min_{x \in X} d^*(x,y), \quad (2)$$

$$d^*(x,y) = \begin{cases} \lVert x-y \rVert, & \lVert x-y \rVert > t \\ 0, & \lVert x-y \rVert \le t. \end{cases} \quad (3)$$

To understand the influence of the tolerance distance intuitively, consider the toy cases (a) and (b) in Fig. 5. In case (a), the blue skeleton contains 19 points and the orange one 18. The two skeletons are close in Euclidean space but do not coincide: among the Euclidean distances $d(x,y)$ for $x \in X$ and $y \in Y$, the maximum is 2 pixels and the most common is 1. When $t = 0$, no offset is tolerated and the PHD is high. With $t = 1$, the PHD drops considerably, and with $t = 2$ it becomes 0. In case (b), there is a large offset between the two skeletons; when $t$ is in the range [2, 4], the PHD declines slowly.
When $t = 6$, the PHD drops to 0, since 6 is the maximum distance between the two skeleton point sets. Different settings of $t$ thus represent different degrees of tolerance to the offset between the two skeletons, and in practical applications the tolerance distance can be chosen according to the situation.

Consistency between PHD and human perception. Based on the subjective experimental results (as described in Section 3.2; the formulas can be found in Appendix V), we also calculated the consistency with human perception of PHD and of existing related criteria (TPVF, TNVF, Prec, RVD, Hausdorff, ASSD, V-Rand, and V-Info). The results show that, with an appropriate tolerance distance, PHD is more consistent with human perception than the other criteria. As shown in Fig. 6, as the tolerance distance $t$ of PHD increases from 0 to 800, PHD's consistency with human perception first rises and then drops slowly to 0, suggesting that human vision does tolerate a certain offset. Specifically, the maximum of 65.48% is reached when $t$ is set to 3, suggesting that our perception tolerates small perturbations. It is worth noting that the optimal PHD consistency (65.48%) is nearly double the consistency obtained by pixel-error based metrics such as the F1 score. Our experiment also shows that most of the compared criteria (color bars in Fig. 6) can be improved to a certain extent by skeletonizing the segmentation results before evaluation: their consistency with human perception is only about 30% on the original images and improves by about 10% on skeletons. Even the best of these metrics on skeletons (ASSD) reaches only 52.43%, significantly lower than PHD with $t = 3$. It can therefore be concluded that PHD is more consistent with human perception.
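To make the computation concrete, the following is a minimal sketch of PHD following Eq. 2-3, assuming scikit-image for skeletonization and SciPy for nearest-neighbour queries; it is an illustration, not the authors' reference implementation:

```python
import numpy as np
from scipy.spatial import cKDTree
from skimage.morphology import skeletonize

def phd(pred_mask, gt_mask, t=3.0):
    """Perceptual Hausdorff Distance between two binary membrane masks (Eq. 2-3)."""
    X = np.argwhere(skeletonize(pred_mask.astype(bool)))  # skeleton point set X
    Y = np.argwhere(skeletonize(gt_mask.astype(bool)))    # skeleton point set Y
    if len(X) == 0 or len(Y) == 0:
        return np.inf
    d_xy, _ = cKDTree(Y).query(X)   # min_y d(x, y) for every x in X
    d_yx, _ = cKDTree(X).query(Y)   # min_x d(x, y) for every y in Y
    d_xy[d_xy <= t] = 0.0           # tolerance distance: ignore offsets up to t pixels
    d_yx[d_yx <= t] = 0.0
    return d_xy.mean() + d_yx.mean()
```

With t = 0 this reduces to the symmetric average nearest-neighbour distance between the two skeletons; increasing t reproduces the tolerance behaviour illustrated by the toy cases in Fig. 5.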
4 RE-EXAMINING PHD ON CLASSIC DEEP LEARNING BASED SEGMENTATION METHODS WITH U-RISC

In the previous two sections, we introduced U-RISC, a new ultra-high resolution cell membrane segmentation dataset, and PHD, a new perceptual criterion, to help resolve the two bottlenecks in the field of cell membrane segmentation. The subjective experiment on a small-scale dataset demonstrated that PHD is more consistent with human perception than several widely used criteria for evaluating cell membrane segmentation. To understand the performance of deep learning methods on the U-RISC dataset, we conducted an in-depth investigation on U-RISC with representative deep learning based segmentation methods and different evaluation criteria. Specifically, we chose 6 representative algorithms (U-Net (Ronneberger et al. (2015)), LinkNet (Chaurasia & Culurciello (2017)), CASENet (Yu et al. (2017)), SENet (Hu et al. (2018)), U-Net++ (Zhou et al. (2018)), and GLNet (Chen et al. (2019b))) and re-implemented them on the U-RISC dataset. Eleven evaluation criteria were then used to compare the segmentation results: F1 score, IoU, TPVF, TNVF, Prec, RVD, Hausdorff, ASSD, V-Rand, V-Info, and PHD. As mentioned in Sec. 2, the results of the first two rounds of manual labeling are retained, so the manual annotations are also analyzed under the different evaluation criteria.

Experiment Settings. All six methods use the same training and testing data to compare performance, and the parameters and loss functions are the same as proposed in their original references. The parameters for each method and other details are given in Appendix IV.

Experiment Results. The experimental results are shown in Table 1, which lists the scores of the different evaluation criteria, computed against the ground truth, for the first two rounds of manual annotation and for the six segmentation methods. Our first finding is that U-RISC is a challenging dataset for cell membrane segmentation. As shown in Table 1, the deep learning based methods reach only around 0.6 in F1 score on U-RISC, far below the human level of 0.98-0.99 (the first-round annotation performance); by contrast, they all exceed 0.95 on ISBI 2012. Despite possible improvements from parameter tuning, on such ultra-high resolution images there is clearly a large gap between current popular segmentation methods and human performance. Our second finding is that the rankings under F1 score, IoU, V-Rand-sk, and V-Info-sk are consistent with each other but differ from the PHD-based ranking. Specifically, PHD tends to favor CASENet, whereas none of the other metrics pick CASENet as the best method. Since the subjective experiments in Sec. 3.2 showed PHD to be much closer to human perception, this change of ranking may inspire researchers to reconsider the evaluation criteria for cell membrane segmentation algorithms, and it provides a new perspective for advancing segmentation algorithms.

Discussion. Based on the results of these six algorithms, LinkNet and CASENet perform better than the other methods. From the perspective of network design, LinkNet makes full use of low-level local information by directly connecting each low-level encoder to the decoder of corresponding size; this design emphasizes the capture of local information, which leads to more accurate local predictions. CASENet takes full account of the continuity of edges and lets low-level features strengthen high-level semantic information through skip connections between low-level and high-level features, which emphasizes structural information. The design of LinkNet may therefore be favored by the traditional evaluation criteria, while that of CASENet may be favored by PHD, which explains why the two methods rank differently under the two types of criteria. More local segmentation results of the different algorithms are shown in Appendix III. In addition, as an example, we added experiments with U-Net, CASENet, and LinkNet on the ISBI 2012 and SNEMI3D datasets (Appendix VI, VIII, IX). The results in Appendix VI show that U-Net with our chosen parameters performs close to the SOTA on ISBI 2012 (ours: V-Rand=0.9689, V-Info=0.9723; SOTA: V-Rand=0.9837, V-Info=0.9878, on skeletons) and SNEMI3D (ours: V-Rand=0.9389; SOTA: V-Rand=0.9751, on skeletons), even though we made little effort at parameter tuning. However, with the same parameter setting, U-Net obtains poor scores (V-Rand=0.5288, V-Info=0.5178) on U-RISC. Such a large performance gap between U-RISC and previous datasets underlines the challenge posed by the U-RISC dataset, which we hope will motivate novel designs of machine learning methods in the future. The results in Appendix VIII and IX again show that the rankings under F1 score, IoU, V-Rand-sk, and V-Info-sk are consistent with each other but differ from the PHD-based rankings.

5 DISCUSSION AND CONCLUSION

This paper aims to resolve two bottlenecks in the development of cell membrane segmentation.
First, we proposed U-RISC, an Ultra-high Resolution Image Segmentation dataset for the Cell membrane and the largest annotated EM dataset for the cell membrane so far. To the best of our knowledge, U-RISC is the only annotated EM dataset with multiple iterative annotations and uncompressed high-resolution raw image data. During the analysis of U-RISC, we found a certain inconsistency between current segmentation evaluation criteria (e.g. F1 score, IoU) and human perception. Second, we therefore proposed a human-perception-based evaluation criterion, called the Perceptual Hausdorff Distance (PHD). A subjective experiment on a small-scale dataset demonstrated that the new criterion is more consistent with human perception for the evaluation of cell membrane segmentation. In addition, we re-examined classic deep learning segmentation methods under PHD and the existing evaluation criteria. In future research, we will consider how to improve deep learning segmentation methods from the perspective of cell membrane structure and apply the PHD criterion to connectomics research. More discussion is given in Appendix VII.

A APPENDIX

I. Fig. 7 shows the interface of the perceptual consistency experiments.

II. Fig. 8 and Fig. 9 show examples of subjective experiment images.

III. Fig. 10 and Fig. 11 show examples of segmentation results of different algorithms.

IV. Experiment Details.

4.1 Subjective Experiment
1) Before testing, the 20 human raters were introduced to the value of cell membrane segmentation for connectivity and to the importance of structure. We then used several simple examples to familiarize them with the experimental procedure.
2) During the formal experiment, the distribution and selection of data were random. The subjects only needed to choose the one of the two images that they considered more similar to the ground truth.
3) The 200 groups of images for the subjective experiment were randomly selected from the segmentation results produced by the six methods above. The training data therefore came from the same dataset as the 200 groups of images that the 20 raters evaluated.
4) To ensure the continuity of the experiment, each subject was asked to judge each group of images within a specified time (less than 10 minutes).
5) To prevent fatigue over a long experiment, the 200 groups of images were distributed to the subjects in four evenly sized batches.

4.2 Experiments on the U-RISC dataset. All six methods used the same training and testing data to compare performance, and the parameters and loss functions are the same as proposed in their original references. In the training stage, 60% of the dataset was used as training data, and the original images were randomly cut into 1024 × 1024 patches to generate 50,000 training images and 20,000 validation images. Random flipping and cropping were used for data augmentation. Four V100 GPUs were used to train each algorithm. In the testing stage, the original image was cut into patches of the same size as the training images, each patch was tested, and the patches were finally stitched back to the original size for evaluation.

V. Formulas of the criteria mentioned in the text. The formulas of the metrics we compared are shown in Table 3. The symbols in the formulas are explained as follows.
- Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances, while recall (also known as sensitivity) is the fraction of the total amount of relevant instances that were actually retrieved.
- TP (true positives), TN (true negatives), FP (false positives), and FN (false negatives) compare the results of the classifier with the ground truth. The terms positive and negative refer to the classifier's prediction, and the terms true and false refer to whether that prediction corresponds to the ground truth.
- $X$ and $Y$ are two point sets; $x$ and $y$ are points in $X$ and $Y$, respectively, and $d(x,y)$ is the Euclidean distance between them.
- In V-Rand (Arganda-Carreras et al. (2015b)), suppose that $S$ is the predicted segmentation and $T$ is the ground truth segmentation. Define $p_{ij}$ as the probability that a randomly chosen pixel belongs to segment $i$ in $S$ and segment $j$ in $T$. This joint probability distribution satisfies the normalization condition $\sum_{ij} p_{ij} = 1$. The marginal distribution $s_i = \sum_j p_{ij}$ is the probability that a randomly chosen pixel belongs to segment $i$ in $S$, and the marginal distribution $t_j = \sum_i p_{ij}$ is defined similarly.
- In V-Info (Arganda-Carreras et al. (2015b)), the mutual information $I(S;T) = \sum_{ij} p_{ij} \log p_{ij} - \sum_i s_i \log s_i - \sum_j t_j \log t_j$ is a measure of similarity between $S$ and $T$. $H(S) = -\sum_i s_i \log s_i$ is the entropy function.

VI. Experiments on ISBI 2012 and SNEMI3D with U-Net. (Table 4)

VII. Further Discussion. Although this paper mainly concerns EM cell segmentation, we believe its significance extends beyond that. According to the experimental results, the popular methods do not perform well on our dataset (they exceed 95% on ISBI 2012 but reach only about 60% on U-RISC). This shows that U-RISC is a challenging dataset that can promote the development of related machine learning and deep learning methods. The U-RISC dataset may expose several classic challenges in the field that have not yet been solved. One challenge is the "sample imbalance problem" (Alejo et al. (2016); Li et al. (2010); Zhang et al. (2020)): because of the ultra-high resolution, labeled cell membrane pixels account for only 5.64% of the total pixels in the training set, in contrast to 21.96% in ISBI 2012 and 33.23% in SNEMI3D. Future deep learning methods designed for U-RISC will have to address this issue. Other challenges include, e.g., ultra-high resolution image segmentation (Demir et al. (2018); Zhao et al. (2018); Chen et al. (2019a)), appropriate loss function design (Sudre et al. (2017); Spiring (1993); Choromanska et al. (2015)), and issues related to "unclosed" edges, as suggested by Reviewer 3. Taken together, we strongly believe that the U-RISC dataset can contribute to technical progress by revealing defects in existing popular methods and promoting novel algorithms for classic challenges in the machine learning and deep learning communities. In addition, the design of evaluation criteria has received wide attention in computer science (Gerl et al. (2020); Lin et al. (2015); Liu et al. (2018)). The proposed PHD may inspire researchers from a new perspective and further promote the development of algorithms. The technical novelty of the PHD metric lies in several aspects. To list a few: (1) it can potentially be used in other tasks, such as vascular segmentation (Gerl et al. (2020)), bone segmentation (Lin et al. (2015)), edge detection (Liu et al. (2018)), and other tasks related to structural and shape information.
For example, Gerl et al. (2020) successfully used a distance-based criterion to improve skin layer segmentation in optoacoustic images. (2) It can be modified into a loss function, which is part of our ongoing work; it is worth noting that several works have successfully integrated the Hausdorff distance into loss functions (Genovese et al. (2012); Karimi & Salcudean (2019); Ribera et al. (2019)).

VIII. Experiments on ISBI 2012 with U-Net, CASENet, and LinkNet. (Table 5)

IX. Experiments on SNEMI3D with U-Net, CASENet, and LinkNet. (Table 6)

(Figure panel labels: F1 score = 0.3880, 0.5084, 0.5347, 0.5084, 0.5386, 0.5482)
1. What are the strengths and weaknesses of the paper regarding its contributions to cell membrane segmentation?
2. How does the proposed Perceptual Hausdorff Distance (PHD) metric differ from the traditional Hausdorff distance, and what are the benefits of using PHD in EM-based research?
3. Are there any limitations to the techniques used in the paper, specifically regarding machine learning or deep learning methods?
4. What additional experiments or comparisons should be conducted to further validate the effectiveness of the proposed PHD metric on different EM datasets?
Review
Strength:
(1) This work proposes an ultra-high-resolution image segmentation dataset for the cell membrane, named U-RISC. The proposed U-RISC is the largest annotated Electron Microscopy (EM) dataset for the cell membrane with multiple iterative annotations and uncompressed high-resolution raw data. Given the uniqueness of the proposed dataset, it is likely to contribute to future EM-based research, such as membrane segmentation.
(2) For membrane segmentation in EM images, the authors develop a human-perception based evaluation criterion, called the Perceptual Hausdorff Distance (PHD). Based on the experiments on a small-scale dataset, the proposed PHD metric is more consistent with human perception than the traditional metrics.

Weakness:
(1) The first concern is with the proposed PHD metric. The reviewer thinks that there is a lack of comparative analysis between the proposed PHD and the traditional Hausdorff distance. According to Equation 2, the PHD metric is built on the Hausdorff distance. Therefore, it is necessary to include a comparative analysis using the traditional Hausdorff distance. This would further highlight the novelty of the PHD metric proposed in this paper.
(2) The second concern is the limited technical novelty. The overall contributions of this paper have two parts: establishing a new dataset, and proposing a new PHD metric for EM membrane segmentation tasks. However, there are no contributions to machine learning or deep learning based methods. The reviewer agrees that this manuscript has made some contributions to biomedical image analysis. However, the reviewer thinks this paper does not meet the requirements of the ICLR conference.
(3) The third concern is the limited validation experiments on the PHD metric. Since the proposed PHD metric is designed particularly for membrane segmentation, it is supposed to be effective on other EM datasets, such as the ISBI2012 and SNEMI3D challenges. However, the authors did not conduct comparison experiments on any other EM datasets. It would be more convincing to conduct experimental analysis on various EM membrane segmentation tasks.
ICLR
Title
Sequence generation with a physiologically plausible model of handwriting and Recurrent Mixture Density Networks

Abstract
The purpose of this study is to explore the feasibility and potential benefits of using a physiologically plausible model of handwriting as a feature representation for sequence generation with recurrent mixture density networks. We build on recent results in handwriting prediction developed by Graves (2013), and we focus on generating sequences that possess the statistical and dynamic qualities of handwriting and calligraphic art forms. Rather than model raw sequence data, we first preprocess and reconstruct the input training data with a concise representation given by a motor plan (in the form of a coarse sequence of 'ballistic' targets) and corresponding dynamic parameters (which define the velocity and curvature of the pen-tip trajectory). This representation provides a number of advantages, such as enabling the system to learn from very few examples by introducing artificial variability in the training data, and allowing the mixing of visual and dynamic qualities learned from different datasets.

1 INTRODUCTION

Recent results (Graves, 2013) have demonstrated that, given a sufficiently large training dataset, Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) Recurrent Mixture Density Networks (RMDNs) (Schuster, 1999) are capable of learning and generating convincing synthetic handwriting sequences. In this study we explore a similar network architecture combined with an intermediate feature representation, given by the parameters of a physiologically plausible model of handwriting: the Sigma Lognormal model (Plamondon, 1995; Plamondon et al., 2014). In the work by Graves (2013) and subsequent derivations, the RMDN operates on raw sequences of points recorded with a digitizing device. In our approach we preprocess the training data using an intermediate representation that describes a form of "motor program" coupled with a sequence of dynamic parameters that describe the evolution of the pen tip. By doing so, we use a representation that is more concise (i.e. lower in dimensionality), more meaningful (i.e. every data point is a high-level descriptor of a trajectory segment), and resolution independent. This project stems from the observation that human handwriting results from the orchestration of a large number of motor and neural subsystems, and is ultimately produced through the execution of complex and skillful motions. As such, we seek a representation that abstracts the complex task of trajectory formation away from the neural network, which can then focus on the higher-level task of movement planning. Note that within the scope of this study we do not implement text-to-handwriting synthesis (Graves, 2013), but rather focus on the task of generating sequences that possess the statistical and dynamic qualities of handwriting, which can be extended to calligraphy, asemic handwriting, drawings and graffiti (Berio & Leymarie, 2015; Berio et al., 2016). In particular, we focus on two distinct tasks: (1) learning and generating motor plans, and (2) given a motor plan, predicting the corresponding dynamic parameters that determine the visual and dynamic qualities of the pen trace.
We then go on to show that this modular workflow can be exploited in ways such as mixing dynamic qualities between datasets (a form of handwriting "style transfer") and learning from small datasets (a form of "one-shot learning"). The remainder of this paper is organised as follows: in Section 2, after summarising the background context, we briefly describe the Sigma Lognormal model and RMDNs; in Section 3 we present the data preprocessing step and the RMDN models that make up our system; in Section 4 we propose various applications of the system, including learning handwriting representations from small datasets and mixing styles.

2 BACKGROUND

Our study is grounded in a number of notions and principles that have been observed in the general study of human movement as well as in the handwriting synthesis/analysis field (known as Graphonomics (Kao et al., 1986)). The speed profile of aiming movements is typically characterised by a "bell shape" that is variably skewed depending on the rapidity of the movement (Lestienne, 1979; Nagasaki, 1989; Plamondon et al., 2013). Complex movements can be described by the superimposition of a discrete number of "ballistic" units of motion, which in turn can each be represented by the classic bell-shaped velocity profile and are often referred to as strokes. A number of methods synthesise handwriting through the temporal superimposition of strokes, the velocity profile of which is modelled with a variety of functions, including sinusoidal functions (Morasso & Mussa Ivaldi, 1982; Maarse, 1987; Rosenbaum et al., 1995), Beta functions (Lee & Cho, 1998; Bezine et al., 2004), and lognormals (Plamondon et al., 2009). In this study we rely on a family of models known as the Kinematic Theory of Rapid Human Movements, which has been developed by Plamondon et al. in an extensive body of work since the 1990s (Plamondon, 1995; Plamondon et al., 2014). Plamondon et al. (2003) show that if a movement is considered to be the result of the parallel and hierarchical interaction of a large number of coupled linear systems, the impulse response of such a system to a centrally generated command asymptotically converges to a lognormal function. This assumption is attractive from a modelling perspective because it abstracts the high complexity of the neuromuscular system in charge of generating movements with a relatively simple mathematical model, which furthermore provides state-of-the-art reconstruction of human velocity data (Rohrer & Hogan, 2006; Plamondon et al., 2013). A number of methods have used neurally inspired approaches for the task of handwriting trajectory formation (Schomaker, 1992; Bullock et al., 1993; Wada & Kawato, 1993). Similarly to our proposed method, Ltaief et al. (2012) train a neural network on a preprocessed dataset in which the raw input data is reconstructed in the form of handwriting model parameters. Nair & Hinton (2005) use a sequence of neural networks to learn the motion of two orthogonal mass-spring systems from images of handwritten digits for classification purposes. With a motivation similar to ours, Plamondon & Privitera (1996) use a Self Organising Map (SOM) to learn a sequence of ballistic targets, which describes a coarse motor plan of handwriting trajectories.
Our method builds in particular on the work of Graves (2013), who describes a system that uses a recurrent mixture density network (RMDN) (Bishop, 1994) extended with an LSTM architecture (Hochreiter & Schmidhuber, 1997) to generate synthetic handwriting in a variety of styles.

2.1 SIGMA LOGNORMAL MODEL

On the basis of Plamondon's Kinematic Theory (Plamondon, 1995), the Sigma Lognormal (ΣΛ) model (Plamondon & Djioua, 2006) describes complex handwriting trajectories via the vectorial superimposition of a discrete number of strokes. Under the assumption that curved handwriting movements are produced by rotating the wrist, the curvilinear evolution of each stroke is described with a circular arc shape. Each stroke is characterised by a variably asymmetric "bell shape" speed profile, which is described with a (3-parameter) lognormal function. The planar evolution of a trajectory is then described by a sequence of virtual targets $\{v_i\}_{i=1}^{m}$, which define "imaginary" (i.e. not necessarily located along the generated trajectory) loci at which each consecutive stroke is aimed. The virtual targets provide a low-level description of the motor plan for the handwriting trajectory. A smooth trajectory is then generated by integrating the velocity of each stroke over time. The trajectory smoothness can be controlled by adjusting the activation-time offset of a given stroke with respect to the previous stroke, denoted by ∆t0i; a smaller time offset (i.e. a greater overlap between lognormal components) results in a smoother trajectory (Fig. 1, c). The curvature of the trajectory can be varied by adjusting the central angle of each circular arc, denoted by θi. Equations and further details for the ΣΛ model can be found in Appendix A. A sequence of virtual targets provides a very sparse spatial description or "motor plan" for the trajectory evolution. The remaining stroke parameters, ∆t0i and θi, define the temporal, dynamic and geometric features of the trajectory, and we refer to them as dynamic parameters.

2.2 RECURRENT MIXTURE DENSITY NETWORKS

Mixture Density Networks (MDNs) were introduced by Bishop (1994) in order to model and predict the parameters of a Gaussian Mixture Model (GMM), i.e. a set of means, covariances and mixture weights. Schuster (1999) showed that MDNs could be extended to model temporal data using RNNs, using Recurrent Mixture Density Networks (RMDNs) to model the statistical properties of speech, where they were found to be more successful than traditional GMMs. Graves (2013) used LSTM RMDNs to model and synthesise online handwriting, providing the basis for extensions of the method, as also used in Ha et al. (2016); Zhang et al. (2016). Note that in the case of a sequential model, the RMDN outputs a unique set of GMM parameters for each timestep t, allowing the probability distribution to change as the input sequence develops. Further details can be found in Appendix C.1.

3 METHOD

We operate on discrete and temporally ordered sequences of planar coordinates. Similarly to Graves (2013), most of our results come from experiments on the IAM online handwriting database (Marti & Bunke, 2002). However, we have also conducted preliminary experiments with other datasets, such as the Graffiti Analysis Database (Lab, 2009), as well as a limited number of samples collected in our laboratory from a user with a digitiser tablet. As a first step, we preprocess the raw data and reconstruct it in the form of ΣΛ model parameters (Section 3.1).
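As a concrete illustration of the representation produced by this step, the following is a minimal sketch of the lognormal speed profile of a single stroke (see Appendix A, Eq. 1), assuming NumPy; the function name and the example parameter values are ours:

```python
import numpy as np

def lognormal_speed(t, t0, mu, sigma):
    """Speed profile of one stroke (Appendix A, Eq. 1); zero before onset t0."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    active = t > t0
    dt = t[active] - t0
    out[active] = (np.exp(-((np.log(dt) - mu) ** 2) / (2.0 * sigma ** 2))
                   / (sigma * np.sqrt(2.0 * np.pi) * dt))
    return out

# Example: a stroke starting at t0 = 0 produces a variably skewed "bell shape".
t = np.linspace(0.0, 1.5, 300)
profile = lognormal_speed(t, t0=0.0, mu=-1.5, sigma=0.3)
```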
We then train and evaluate a number of RMDN models for two distinct tasks:
1. Virtual target prediction. We use the V2V-model for this task. Given a sequence of virtual targets, this model predicts the next virtual target.
2. Dynamic parameter prediction. For this task we trained and compared two model architectures. Given a sequence of virtual targets, the task of these models is to predict the corresponding dynamic parameters. The V2D-model is conditioned only on the previous virtual targets, whereas the A2D-model is conditioned on both the previous virtual targets and the previous dynamic parameters.

We then exploit the modularity of this system to conduct various experiments, details of which can be found in Section 4.

3.1 PREPROCESSING: RECONSTRUCTING ΣΛ PARAMETERS

A number of methods have been developed by Plamondon et al. to reconstruct ΣΛ-model parameters from digitised pen input data (O'Reilly & Plamondon, 2008; Plamondon et al., 2014; Fischer et al., 2014). These methods provide an ideal reconstruction of model parameters given a high-resolution digitised pen trace. While such methods are superior for handwriting analysis and biometric purposes, we opt for a less precise method (Berio & Leymarie, 2015) that is less sensitive to sampling quality and is aimed at generating virtual target sequences that remain perceptually similar to the original trace. We purposely choose to ignore the original dynamics of the input and base the method on geometric input data only. This is done in order to work with training sequences that are independent of sampling rate, and in view of future developments in which we intend to extract handwriting traces from bitmaps, inferring causal/dynamic information from a static input as humans are capable of doing (Edelman & Flash, 1987; Freedberg & Gallese, 2007). Our method operates on a uniformly sampled input contour, which is then segmented in correspondence with perceptually salient key points: loci of curvature extrema modulated by neighbouring contour segments (Brault & Plamondon, 1993; Berio & Leymarie, 2015), which gives an initial estimate of each virtual target vi. We then (i) fit a circular arc to each contour segment in order to estimate the θi parameters, (ii) estimate the ∆t0i parameters by analysing the contour curvature in the region of each key point, and (iii) iteratively adjust the virtual target positions to minimise the error between the original trajectory and the one generated by the corresponding ΣΛ parameters. For further details on the ΣΛ parameter reconstruction method, the reader is referred to Appendix B.

3.2 DATA AUGMENTATION

We can exploit the ΣΛ parameterisation to generate many variations over a single trajectory that are visually consistent with the original, with a variability similar to the one that would be seen across multiple instances of handwriting made by the same writer (Fig. 3) (Djioua & Plamondon, 2008a; Fischer et al., 2014; Berio & Leymarie, 2015). Given a dataset of $n$ training samples, we randomly perturb the virtual target positions and dynamic parameters of each sample $n_p$ times, which results in a new augmented dataset of size $n + n \times n_p$ in which legibility and trajectory smoothness are maintained across samples. This would not be possible on the raw online dataset, as perturbing each data point would eventually result in a noisy trajectory.
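A minimal sketch of this augmentation step is given below, assuming NumPy; the noise scales are illustrative assumptions, not the values used in our experiments:

```python
import numpy as np

def augment_sample(targets, thetas, dts, n_p=10, pos_std=0.02, rng=None):
    """Generate n_p perturbed variations of one ΣΛ sample.

    targets: (m, 2) virtual target positions; thetas, dts: per-stroke dynamic
    parameters. Jitter magnitudes are relative to the overall extent of the trace."""
    rng = rng if rng is not None else np.random.default_rng()
    extent = np.linalg.norm(targets.max(axis=0) - targets.min(axis=0))
    variations = []
    for _ in range(n_p):
        v = targets + rng.normal(0.0, pos_std * extent, targets.shape)
        th = thetas + rng.normal(0.0, 0.05, thetas.shape)                 # curvature jitter
        dt = np.clip(dts * rng.normal(1.0, 0.05, dts.shape), 1e-3, None)  # timing jitter
        variations.append((v, th, dt))
    return variations
```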
3.3 PREDICTING VIRTUAL TARGETS WITH THE V2V-MODEL

The V2V-model is conditioned on a history of virtual targets; given a new virtual target, it predicts the next one (hence the name V2V). Note that each virtual target includes the corresponding pen state: up (not touching the paper) or down (touching the paper). Repeatedly feeding the predicted virtual target back into the model at every timestep allows the model to synthesise sequences of arbitrary length. The implementation of this model is very similar to the handwriting prediction demonstrated by Graves (2013), although instead of operating directly on the digitised pen positions, we operate on the much coarser virtual target sequences extracted during the preprocessing step. The details of this model can be found in Appendix C.3.

3.4 PREDICTING DYNAMIC PARAMETERS WITH THE V2D AND A2D MODELS

The goal of these models is to predict the corresponding dynamic parameters (∆t0i, θi) for a given sequence of virtual targets. We train and compare two model architectures for this task. The V2D-model is conditioned on the history of virtual targets; given a new virtual target, it predicts the corresponding dynamic parameters (∆t0i, θi) for the current stroke (hence the name V2D). Running this model incrementally for every stroke of a given virtual target sequence allows us to predict dynamic parameters for each stroke. The implementation of this model is very similar to the V2V-model, and details can be found in Appendix C.4. At each timestep, the V2D model outputs, and maintains internal memory of, a probability distribution over the predicted dynamic parameters. However, the network has no knowledge of the parameters that are actually sampled and used; hence, the dynamic parameters might not be consistent across timesteps. This problem can be overcome by feeding the sampled dynamic parameters back into the model at the next timestep. From a human motor planning perspective this makes sense: for a given drawing style, when we decide the curvature and smoothness of a stroke, we take into consideration the choices made in previously executed strokes. The A2D model predicts the dynamic parameters (∆t0i, θi) for the current stroke conditioned on a history of both virtual targets and dynamic parameters (i.e. all ΣΛ parameters, hence the name A2D). We use this model in a similar way to the V2D model, running it incrementally for every stroke of a given virtual target sequence. Internally, however, at every timestep the predicted dynamic parameters are fed back into the model at the next timestep, along with the virtual target from the given sequence. The details of this implementation can be found in Appendix C.5.

4 EXPERIMENTS AND RESULTS

Predicting Virtual Targets. In a first experiment we use the V2V model, trained on the preprocessed IAM dataset, to predict sequences of virtual targets. We prime the network by first feeding it a sequence from the test dataset; this conditions the network to predict sequences that are similar to the prime. We can see from the results (Fig. 4) that the network is indeed able to produce sequences that capture the statistical qualities of the priming sequence, such as overall incline, proportions, and oscillation frequency. On the other hand, we observe that amongst the generated sequences there are often patterns which do not represent recognisable letters or words.
This can be explained by the high variability of the samples contained in the IAM dataset, and by the fact that our representation is very concise, with each data point carrying high significance. As a result, the slightest variation in a prediction is likely to cause a large error in the next. To overcome this problem, we train a new model with a dataset augmented with 10× variations as described in Section 3.2. Due to our limited computing resources¹, we test this method on 1/10th of the dataset, which results in a new dataset of the same size as the original, but with a lower number of handwriting specimens and a number of subtle variations per specimen. With this approach, the network predictions maintain statistical similarity with the priming sequences, and patterns emerge that are more evocative of letters of the alphabet or whole words, with fewer unrecognisable patterns (Fig. 4). To validate this result, we also test the model's performance when trained on 1/10th of the dataset without data augmentation; the results are clearly inferior to the previous two models. This suggests that the data augmentation step is highly beneficial to the performance of the network.

¹We are thus not able to thoroughly test the large network architectures that would be necessary to train on the whole augmented dataset.

Predicting Dynamic Parameters. We first evaluate the performance of both the V2D and A2D models on virtual targets extracted from the test set. Remarkably, although the networks have not been trained on these sequences, both models predict dynamic parameters that result in trajectories that are readable and often similar to the target sample. We settle on the A2D model trained on a 3× augmented dataset, which we qualitatively assess to produce the best results (Fig. 5). We then apply the same A2D model to virtual targets generated by the V2V models primed on the test set. We observe that the predictions on sequences generated with the augmented dataset are highly evocative of handwriting and clearly differ depending on the priming sequence (Fig. 6, c), while the predictions made with the non-augmented dataset are more likely to resemble random scribbles than human-readable handwriting (Fig. 6, b). This further confirms the utility of the data augmentation step.

User-defined virtual targets. The dynamic parameter prediction models can also be used in combination with user-defined virtual target sequences (Fig. 7). Such a method can be used to quickly and interactively generate handwriting trajectories in a given style with a simple point-and-click procedure. The style (in terms of curvature and dynamics) of the generated trajectory is determined by the data used to train the A2D model, and by priming the A2D model with different samples we can apply different styles to the user-defined virtual targets.

One-shot learning. In a subsequent experiment, we apply the data augmentation method described in Section 3.2 to enable both the virtual target and dynamic parameter prediction models to learn from a small dataset of calligraphic samples recorded by a user with a digitiser tablet. We observe that with a low number of augmentations (50×) the models generate quasi-random outputs and seem to learn only the left-to-right trend of the input. With higher augmentation (700×), the system generates outputs that, to the human eye, are consistent with the input data (Fig. 8).
We also train our models using only a single sample (augmented 7000×) and again observe that the model is able to produce novel sequences that are similar to the input sample (Fig. 9). Naturally, the output is a form of recombination of the input, but this is sufficient to synthesise novel outputs that are qualitatively similar to the input. It should be noted that we judge the performance of the one-shot-learned models qualitatively, and we may not be testing the full limits of how well the models are able to generalise. On the other hand, these results, as well as the "style transfer" capabilities presented in the following section, suggest a certain degree of generalisation.

Style Transfer. Here, with a slight abuse of terminology, we use the term "style" to refer to the dynamic and geometric features (such as pen-tip acceleration and curvature) that determine the visual qualities of a handwriting trajectory. Given a sequence of virtual targets generated with the V2V model trained on one dataset, we can predict the corresponding dynamic parameters with the A2D model trained on another. The result is an output that is similar to one dataset in lettering structure but possesses the fine dynamic and geometric features of the other. If we visually inspect Fig. 10, we can see that both the sequence of virtual targets reconstructed by the dataset preprocessing method and the trajectory generated over the same sequence of virtual targets with dynamic parameters learned from a different dataset are readable. This emphasises the importance of using perceptually salient points along the input for estimating key-points in the dataset preprocessing step (Section 3.1). Furthermore, we can perform the same type of operation within a single dataset, by priming the A2D model with the dynamic parameters of a particular training example while feeding it the virtual targets of another. To test this we train both models (V2V, A2D) on a corpus containing 5 samples of the same sentence written in different styles, augmented 1400× (Fig. 11). We envision the utility of such a system in combination with virtual targets interactively specified by a user.

5 CONCLUSIONS AND FUTURE WORK

We have presented a system that is able to learn the parameters of a physiologically plausible model of handwriting from an online dataset. We hypothesise that such a movement-centric approach is advantageous as a feature representation for a number of reasons. It provides performance similar to the handwriting prediction demonstrated by Graves (2013) and Ha et al. (2016), with a number of additional benefits. These include the ability to: (i) capture both the geometry and dynamics of a hand-drawn/written trace with a single representation, (ii) express the variability of different types of movement concisely at the feature level, (iii) allow greater flexibility for procedural manipulations of the output, (iv) mix "styles" (applying curvature and dynamic properties from one example to the motor plan of another), (v) learn a generative model from a small number of samples (n < 5), and (vi) generate resolution-independent outputs. The reported work provides a solid basis for a number of future research avenues. As a first extension, we plan to implement the label/text input alignment method described in Graves' original work, which should allow us to synthesise readable handwritten text and provide a more thorough comparison of the two methods.
Our method strongly relies on an accurate reconstruction of the input in the preprocessing step. Improvements should especially target the parts of that method that depend on user-tuned parameters, such as the identification of salient points along the input (which requires a final peak detection pass) and the measurement of the sharpness of the input in correspondence with salient points.

ACKNOWLEDGMENTS

The system takes as a starting point the original work developed by Graves (2013). We use TensorFlow, the open-source software library for numerical computation and deep learning (Abadi et al., 2015), and a rapid implementation was possible thanks to a public domain implementation developed by Ha (2015).

A SIGMA LOGNORMAL MODEL

The Sigma Lognormal model (Plamondon & Djioua, 2006) describes complex handwriting trajectories via the vectorial superimposition of lognormal strokes. The corresponding speed profile Λi(t) assumes a variably asymmetric "bell shape", described by a 3-parameter lognormal function

$$\Lambda_i(t) = \frac{1}{\sigma_i \sqrt{2\pi}\,(t - t_{0i})} \exp\left( -\frac{\left(\ln(t - t_{0i}) - \mu_i\right)^2}{2\sigma_i^2} \right), \quad (1)$$

where t0i defines the activation time of a stroke, and the parameters µi and σi determine the shape of the lognormal function. µi is referred to as the log-time delay and is biologically interpreted as the rapidity of the neuromuscular system to react to an impulse generated by the central nervous system (Plamondon et al., 2003); σi is referred to as the log-response time and determines the spread and asymmetry of the lognormal. The curvilinear evolution of strokes is described with a circular arc shape, which results in

$$\varphi_i(t) = -\theta_i + \theta_i \left[ 1 + \mathrm{erf}\left( \frac{\ln(t - t_{0i}) - \mu_i}{\sigma_i \sqrt{2}} \right) \right], \quad (2)$$

where θi is the central angle of the circular arc that defines the shape of the ith stroke. The planar evolution of a trajectory is defined by a sequence of virtual targets $\{v_i\}_{i=1}^{m}$, where a trajectory with m virtual targets is characterised by m − 1 circular arc strokes. A ΣΛ trajectory, parameterised by the virtual target positions, is given by

$$\xi(t) = v_1 + \int_0^t \sum_{i=1}^{m-1} \Lambda_i(\tau)\, \Phi_i(\tau)\, (v_{i+1} - v_i)\, d\tau, \quad (3)$$

with

$$\Phi_i(t) = h(\theta_i) \begin{bmatrix} \cos\varphi_i(t) & -\sin\varphi_i(t) \\ \sin\varphi_i(t) & \cos\varphi_i(t) \end{bmatrix}, \qquad h(\theta_i) = \begin{cases} \frac{2\theta_i}{2\sin\theta_i} & \text{if } |\sin\theta_i| > 0, \\ 1 & \text{otherwise}, \end{cases} \quad (4)$$

which scales the extent of the stroke based on the ratio between the perimeter and the chord length of the circular arc.

Intermediate parameterisation. In order to facilitate the precise specification of the timing and profile shape of each stroke, we resort to an intermediate parameterisation that takes advantage of a few known properties of the lognormal (Djioua & Plamondon, 2008b) in order to define each stroke with (i) a time offset ∆ti with respect to the previous stroke, (ii) a stroke duration Ti, and (iii) a shape parameter αi, which defines the skewness of the lognormal. The corresponding ΣΛ parameters {t0i, µi, σi} can then be computed with:

$$\sigma_i = \ln(1 + \alpha_i), \quad (5)$$

$$\mu_i = -\ln\left( \frac{e^{3\sigma_i} - e^{-3\sigma_i}}{T_i} \right), \quad (6)$$

and

$$t_{0i} = t_{1i} - e^{\mu_i - 3\sigma_i}, \qquad t_{1i} = t_{1(i-1)} + \Delta t_i, \qquad t_{1(0)} = 0, \quad (7)$$

where t1i is the onset time of the lognormal stroke profile. As α approaches 0, the shape of the lognormal converges to a Gaussian with mean $t_1 + e^{\mu - \sigma^2}$ (the mode of the lognormal) and standard deviation approximately $T/6$.
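A minimal sketch of this mapping (Eqs. 5-7), assuming NumPy; the function name is ours and the equations follow the formulation above:

```python
import numpy as np

def intermediate_to_lognormal(alpha, T, dt, t1_prev=0.0):
    """Map intermediate stroke parameters (alpha, T, dt) to (sigma, mu, t0).

    alpha > 0 controls skewness, T is the stroke duration, dt is the time
    offset with respect to the previous stroke, t1_prev the previous onset time."""
    sigma = np.log(1.0 + alpha)                                      # Eq. 5
    mu = -np.log((np.exp(3.0 * sigma) - np.exp(-3.0 * sigma)) / T)   # Eq. 6
    t1 = t1_prev + dt                                                # stroke onset time
    t0 = t1 - np.exp(mu - 3.0 * sigma)                               # Eq. 7
    return sigma, mu, t0, t1
```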
B RECONSTRUCTING ΣΛ PARAMETERS FROM AN ONLINE DATASET

The ΣΛ parameter reconstruction method operates on an input contour uniformly sampled at a fixed distance, which is defined depending on the extent of the input; we denote the kth sampled point along the input with p[k]. The input contour is then segmented in correspondence with perceptually salient key points, which correspond with loci of curvature extrema modulated by neighbouring contour segments (Brault & Plamondon, 1993; Berio & Leymarie, 2015). The proposed approach shares strong similarities with previous work done for (i) compressing online handwriting data with a circular-arc based segmentation (Li et al., 1998) and (ii) generating synthetic data for handwriting recognisers (Varga et al., 2005). The parameter reconstruction algorithm can be summarised with the following steps:

• Find m key-points in the input contour.
• Fit a circular arc to each contour segment defined between two consecutive key-points (defining individual strokes), and obtain an estimate of each curvature parameter θi.
• For each stroke, compute the corresponding ∆ti parameter by analysing the curvature signal in the region of the corresponding key-point.
• Define an initial sequence of virtual targets with m positions corresponding to the input key-points.
• Repeat the following until convergence or until a maximum number of iterations is reached (Berio & Leymarie, 2015):
  – Integrate the ΣΛ trajectory with the current parameter estimate.
  – Identify m key-points in the generated trajectory.
  – Move the virtual target positions to minimise the distance between the key-points of the generated trajectory and the key-points on the input contour.

The details for each step are given in the following paragraphs.

Estimating input key-points. Finding significant curvature extrema (which can be counted as convex and concave features for a closed/solid shape) is an active area of research, as relying on discrete curvature measurements remains challenging. We currently rely on a method described by Feldman & Singh (2005) and supported experimentally by De Winter & Wagemans (2008): first we measure the turning angle at each position p[k] of the input and then compute a smooth version of the signal by convolving it with a Hanning window. We assume that the turning angles have been generated by a random process with a Von Mises distribution with mean at 0 degrees, which corresponds to giving maximum probability to a straight line. We then measure the surprisal (i.e. the negative logarithm of the probability) for each sample as defined by Feldman & Singh (2005), which, normalised to the [0, 1] range, simplifies to

$$1 - \cos(\theta[k]), \quad (8)$$

where θ[k] is the (smoothed) turning angle. The first and last sample indices of the surprisal signal, together with its local maxima, result in m key-point indices {ẑi}. The corresponding key-points along the input contour are then given by {p[ẑi]}.

Estimating stroke curvature parameters. For each section of the input contour defined between two consecutive key-points, we estimate the corresponding stroke curvature parameter θi by first computing a least-squares fit of a circle to the contour section. We then compute the internal angle of the arc supported between the two key-points, which is equal to 2θi, i.e. two times the corresponding curvature parameter θi.

Estimating stroke time-overlap parameters. This step is based on the observation that smaller values of ∆t0i, i.e. a greater time overlap between strokes, result in smoother trajectories. On the contrary, a sufficiently large value of ∆t0i results in a sharp corner in the proximity of the corresponding virtual target.
We exploit this notion and compute an estimate of the ∆t0i parameters by examining the sharpness of the input contour in the region of each key-point. To do so, we examine the previously computed turning-angle surprisal signal, in which sharp corners in the contour correspond to sharper peaks, while smoother corners correspond to smooth peaks with a larger spread. By treating the surprisal signal as a probability density function, we can use statistical methods to describe each peak with a mixture of parametric distributions and examine the shape of each mixture component in order to estimate the corresponding sharpness along the input contour. To do so, we employ a variant of Expectation Maximisation (EM) (Dempster et al., 1977) in which we treat the distance along the contour as a random variable weighted by the corresponding signal amplitude normalised to the [0, 1] range. Once the EM algorithm has converged, we treat each mixture component as a radial basis function (RBF) centred at the corresponding mean, and use linear regression as in Radial Basis Function Networks (Stulp & Sigaud, 2015) to fit the mixture parameters to the original signal (Calinon, 2016). Finally, we generate an estimate of sharpness λi (bounded in the [0, 1] range) for each key-point using a logarithmic function of the mixture parameters and weights. The corresponding ∆t0i parameters are then given by

$$\Delta t_i = \Delta t_{min} + (\Delta t_{max} - \Delta t_{min})\,\lambda_i, \quad (9)$$

where ∆tmin and ∆tmax are user-specified parameters that determine the range of the ∆t0i estimates. Note that we currently utilise an empirically defined function for this task; in future work we intend to learn the mapping between sharpness and mixture-component parameters from synthetic samples generated with the ΣΛ model (for which ∆t0i, and consequently λi, are known).

Iteratively estimating virtual target positions. The loci along the input contour corresponding to the estimated key-points provide an initial estimate for a sequence of virtual targets, where each virtual target position is given by vi = p[ẑi]. Due to the trajectory-smoothing effect produced by the time overlaps, this initial estimate results in a generated trajectory that is likely to have a reduced scale with respect to the input we wish to reconstruct (Varga et al., 2005). In order to produce a more accurate reconstruction, we use an iterative method that shifts each virtual target towards a position that minimises the error between the generated trajectory and the reconstructed input. To do so, we compute an estimate of m output key-points {ξ(zi)} in the generated trajectory, where z2, ..., z(m−1) are the time occurrences at which the influence of one stroke exceeds that of the previous one. These correspond to salient points along the trajectory (extrema of curvature) and can be easily computed by finding the times at which two consecutive lognormals intersect. Similarly to the input key-point case, ξ(z1) and ξ(zm) respectively denote the first and last points of the generated trajectory. We then iteratively adjust the virtual target positions in order to move each generated key-point ξ(zi) towards the corresponding input key-point p[ẑi] with

$$v_i \leftarrow v_i + p[\hat{z}_i] - \xi(z_i). \quad (10)$$

The iteration continues until the Mean Square Error (MSE) of the distances between every pair p[ẑi] and ξ(zi) is less than an experimentally set threshold, or until a maximum number of iterations is reached (Fig. 16).
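Schematically, this adjustment loop can be sketched as follows; `synthesize` and `find_keypoints` are placeholders for the ΣΛ trajectory integration and key-point extraction described above, and the default threshold is illustrative:

```python
import numpy as np

def adjust_virtual_targets(targets, params, input_keypoints,
                           synthesize, find_keypoints,
                           max_iters=20, mse_threshold=1.0):
    """Shift virtual targets so generated key-points match input key-points (Eq. 10)."""
    for _ in range(max_iters):
        trajectory = synthesize(targets, params)   # integrate the ΣΛ trajectory
        generated_kp = find_keypoints(trajectory)  # key-points of the generated trace
        err = input_keypoints - generated_kp       # per key-point displacement
        if (err ** 2).sum(axis=1).mean() < mse_threshold:  # MSE stopping criterion
            break
        targets = targets + err                    # Eq. 10
    return targets
```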
This method usually converges to a good reconstruction of the input within a few iterations (usually < 5). Interestingly, even though the dynamic information of the input is discarded, the reconstructed velocity profile is often similar to the original (in number of peaks and shape), which can be explained by the extensively studied relationships between the geometry and dynamics of movement trajectories (Viviani & Terzuolo, 1982; Lacquaniti et al., 1983; Viviani & Schneider, 1991; Flash & Handzel, 2007).

C RMDN MODEL DETAILS

In order to increase the expressive generative capabilities of our networks, we train them to model parametric probability distributions. Specifically, we use Recurrent Mixture Density Networks that output the parameters of a bivariate Gaussian Mixture Model.

C.1 BIVARIATE RECURRENT MIXTURE DENSITY NETWORK

If a target variable zt can be expressed as a bivariate GMM, then for K Gaussians we can use a network architecture with output dimension 6K. This output vector consists of $(\hat{\mu}_t \in \mathbb{R}^{2K}, \hat{\sigma}_t \in \mathbb{R}^{2K}, \hat{\rho}_t \in \mathbb{R}^{K}, \hat{\pi}_t \in \mathbb{R}^{K})$, from which we calculate the parameters of the GMM via (Graves, 2013)

$$\mu_t^k = \hat{\mu}_t^k : \text{means for the } k\text{th Gaussian}, \ \mu_t^k \in \mathbb{R}^2$$
$$\sigma_t^k = \exp(\hat{\sigma}_t^k) : \text{standard deviations for the } k\text{th Gaussian}, \ \sigma_t^k \in \mathbb{R}^2$$
$$\rho_t^k = \tanh(\hat{\rho}_t^k) : \text{correlations for the } k\text{th Gaussian}, \ \rho_t^k \in (-1, 1)$$
$$\pi_t^k = \mathrm{softmax}(\hat{\pi}_t^k) : \text{mixture weight for the } k\text{th Gaussian}, \ \sum_k \pi_t^k = 1 \quad (11)$$

We can then formulate the probability distribution function $P_t$ at timestep $t$ as

$$P_t = \sum_k^K \pi_t^k\, \mathcal{N}(z_t \mid \mu_t^k, \sigma_t^k, \rho_t^k), \quad \text{where} \quad (12)$$

$$\mathcal{N}(x \mid \mu, \sigma, \rho) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left[ -\frac{Z}{2(1-\rho^2)} \right], \quad \text{and} \quad (13)$$

$$Z = \frac{(x_1-\mu_1)^2}{\sigma_1^2} + \frac{(x_2-\mu_2)^2}{\sigma_2^2} - \frac{2\rho(x_1-\mu_1)(x_2-\mu_2)}{\sigma_1\sigma_2} \quad (14)$$

C.2 TRAINING OBJECTIVE

If we let θ denote the parameters of a network, then given a training set S of input-target pairs $(x \in X, \hat{y} \in \hat{Y})$, our training objective is to find the set of parameters θML with maximum likelihood (ML). This is the θ that maximises the probability of the training set S and is formulated as (Graves, 2008)

$$\theta_{ML} = \arg\max_{\theta} \Pr(S \mid \theta) \quad (15)$$
$$= \arg\max_{\theta} \prod_{(x,\hat{y}) \in S} \Pr(\hat{y} \mid x, \theta). \quad (16)$$

Since the logarithm is a monotonic function, a common method for maximising this likelihood is minimising its negative logarithm, also known as the Negative Log Likelihood (NLL), Hamiltonian, or surprisal (Lin & Tegmark, 2016). We can then define our cost function J as

$$J = -\ln \prod_{(x,\hat{y}) \in S} \Pr(\hat{y} \mid x, \theta) \quad (17)$$
$$= -\sum_{(x,\hat{y}) \in S} \ln \Pr(\hat{y} \mid x, \theta). \quad (18)$$

For a bivariate RMDN, the objective function can be formulated by substituting eqn. (12) in place of $\Pr(\hat{y} \mid x, \theta)$ in eqn. (18).

C.3 V2V MODEL

Input. At each timestep i, the input to the V2V model is $x_i \in \mathbb{R}^3$, where the first two elements are given by ∆vi (the relative position displacement for the ith stroke, i.e. between the ith virtual target and the next), and the last element is $u_i \in \{0, 1\}$ (the pen-up state during the same stroke). Given input xi and its current internal state (ci, hi), the network learns to predict xi+1 by learning the parameters of the probability density function (PDF) $\Pr(x_{i+1} \mid x_i, c_i, h_i)$. With a slight abuse of notation, this can be expressed more intuitively as $\Pr(x_{i+1} \mid x_i, x_{i-1}, \ldots, x_{i-n})$, where n is the maximum sequence length.

Output. We express the predicted probability of ∆vi as a bivariate GMM as described in Section C.1, and ui as a Bernoulli distribution. Thus, for K Gaussians, the network has output dimension (6K + 1),
In addition to the parameters in eqn. (11), the output contains êi, which we use to calculate the pen state probability via (Graves, 2013)

$$e_i = \frac{1}{1 + \exp(\hat{e}_i)}, \qquad e_i \in (0, 1) \qquad (19)$$

Architecture. We use Long Short-Term Memory (Hochreiter & Schmidhuber, 1997) networks with input, output and forget gates (Gers et al., 2000), and we use Dropout regularization as described by Pham et al. (2014). We employ both a grid search and a random search (Bergstra & Bengio, 2012) on various hyperparameters in the ranges: sequence length {64, 128}, number of hidden recurrent layers {1, 2, 3}, dimensions per hidden layer {64, 128, 256, 400, 512, 900, 1024}, number of Gaussians {5, 10, 20}, dropout keep probability {50%, 70%, 80%, 90%, 95%} and peepholes {with, without}. For comparison we also tried a deterministic architecture whereby, instead of outputting a probability distribution, the network outputs a direct prediction for xi+1. As expected, the network was unable to learn this function, and all sequences of virtual targets synthesized with this method simply travel in a repeating zig-zag line.

Training. We use a form of Truncated Backpropagation Through Time (BPTT) (Sutskever, 2013) whereby we segment long sequences into overlapping segments of maximum length n. In this case long-term dependencies greater than length n are lost; however, with enough overlap the network can effectively learn a sliding window of length n timesteps. We shuffle our training data and reset the internal state after each sequence. We empirically found an overlap factor of 50% to perform well, though further studies are needed to confirm the sensitivity of this figure. We use dynamic unrolling of the RNN, whereby the number of timesteps to unroll is not fixed at compile time in the architecture of the network, but determined dynamically during training, allowing variable length sequences. We also experimented with repeating sequences that were shorter than the maximum sequence length n in order to pad them to length n. We found that for our case they produced desirable results, with some side effects which we discuss in later sections. We split our dataset into training: 70%, validation: 20% and test: 10%, and use the Adam optimizer (Kingma & Ba, 2014) with the recommended hyperparameters. To prevent exploding gradients we clip gradients by their global L2 norm as described in Pascanu et al. (2013). We tried thresholds of both 5 and 10, and found 5 to provide more stability. We formulate the loss function J to minimise the Negative Log Likelihood as described in Section C.2, using the probability density functions described in eqn. (12) and eqn. (19).
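The overlapping segmentation used for truncated BPTT can be sketched as follows; this is a simplified illustration of the idea (the tail-handling detail is our choice) rather than the exact data pipeline.

```python
def overlapping_segments(sequence, n, overlap=0.5):
    """Split one long sequence into windows of length n with the given
    overlap, so the network effectively learns a sliding window of n steps."""
    if len(sequence) <= n:
        return [sequence]
    hop = max(1, int(n * (1.0 - overlap)))
    starts = list(range(0, len(sequence) - n + 1, hop))
    segments = [sequence[s:s + n] for s in starts]
    if starts[-1] + n < len(sequence):   # cover the remaining tail
        segments.append(sequence[-n:])
    return segments
```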
C.4 V2D MODEL

Input. The input to this network at each timestep i is identical to that of the V2V-model: xi ∈ IR3, where the first two elements are ∆vi (the normalised relative position displacement for the i'th stroke), and ui ∈ {0, 1} (the pen state during the same stroke). Given input xi and its current internal state (ci, hi), the network learns to predict the dynamic parameters (∆t0i, θi) for the current stroke i, by learning the parameters for Pr(∆t0i, θi | xi, ci, hi). Again with an abuse of notation, this can be expressed more intuitively as Pr(∆t0i, θi | xi, xi−1, ..., xi−n), where n is the maximum sequence length.

Output. We express the predicted probability of the dynamic parameters (∆t0i, θi) as a bivariate GMM as described in Section C.1.

Architecture. We explored very similar architectures and hyperparameters to the V2V-model, but found that we achieved much better results with a shorter maximum sequence length. We trained a number of models with a variety of sequence lengths {3, ..., 8, 13, 16, 21, 32}.

Training. We use the same procedure for training as the V2V-model.

C.5 A2D MODEL

Input. The input to this network, xi ∈ IR5, at each timestep i is slightly different from that of the V2V and V2D models. As in the V2V and V2D models, the first two elements are ∆vi (the normalised relative position displacement for the i'th stroke), and the third element is ui ∈ {0, 1} (the pen state during the same stroke). However, in this case the final two elements are the dynamic parameters for the previous stroke (∆t0i−1, θi−1), normalized to zero mean and unit standard deviation. Given input xi and its current internal state (ci, hi), the network learns to predict the dynamic parameters (∆t0i, θi) for the current stroke i, by learning the parameters for Pr(∆t0i, θi | xi, ci, hi). Again with an abuse of notation, this can be expressed more intuitively as Pr(∆t0i, θi | xi, xi−1, ..., xi−n), where n is the maximum sequence length.

Output. The output of this network is identical to that of the V2D model.

Architecture. We explored very similar architectures and hyperparameters to the V2D model.

Training. We use the same procedure for training as the V2V-model.

C.6 MODEL SELECTION

We evaluated and batch rendered the outputs of many different architectures and models at different training epochs, and settled on models which were amongst those with the lowest validation error but also produced visibly more desirable results. Once we had picked the models, the results displayed were not cherry-picked. The preprocessed IAM dataset contains 12087 samples (8460 in the training set) with maximum sequence length 305, minimum 6, median 103 and mean 103.9. For the V2V/V2D/A2D models trained on the IAM database we settled on an architecture of 3 recurrent layers, each with size 512, a maximum sequence length of 128, 20 Gaussians, dropout keep probability of 80% and no peepholes. For the augmented one-shot learning models we used similar architectures, but found that 2 recurrent layers, each with size 256, were able to generalise better and produce more interesting results that captured the prime inputs without overfitting. For V2V we used L2 normalisation on the ∆vi input. We also tried a number of different methods for normalising and representing ∆vi at the input to the models: we first tried normalising the components individually to have zero mean and unit standard deviation; we also tried normalising uniformly on the L2 norm, again to have zero mean and unit standard deviation; finally, we tried normalised polar coordinates, both absolute and relative.
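For illustration, these normalisation variants can be sketched as below. The precise definition of the uniform L2-based standardisation is ambiguous in the description above, so the second function reflects one possible reading; all three are illustrative reimplementations rather than the exact code used.

```python
import numpy as np

def normalise_componentwise(dv):
    """Standardise each displacement component to zero mean, unit std."""
    return (dv - dv.mean(axis=0)) / dv.std(axis=0)

def normalise_l2(dv):
    """One reading of the uniform L2-based scheme: centre the displacements
    and scale uniformly by the standard deviation of their L2 norms."""
    return (dv - dv.mean(axis=0)) / np.linalg.norm(dv, axis=1).std()

def to_polar(dv, relative=False):
    """Normalised polar coordinates, with absolute or relative angles."""
    r = np.linalg.norm(dv, axis=1)
    theta = np.arctan2(dv[:, 1], dv[:, 0])
    if relative:
        theta = np.diff(theta, prepend=theta[0])
    return np.stack([r / r.std(), theta], axis=1)
```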
1. What is the main contribution of the paper, and does it lack novelty in its algorithmic approach? 2. Are there any numerical evaluations or experiments presented in the paper to support its claims? 3. Is the paper's focus on handwriting recognition, and would it be more suitable for a different conference?
Review
Review This paper has no machine learning algorithmic contribution: it just uses the same combination of LSTM and bivariate mixture density network as Graves, and the detailed explanation in the appendix even misses one key essential point: how the Gaussian parameters are obtained as a transformation of the output of the LSTM. There is also no numerical evaluation suggesting that the algorithm is some form of improvement over the state-of-the-art. So I do not think such a paper is appropriate for a conference like ICLR. The part describing the handwriting tasks and the data transformation is well written and interesting to read; it could be valuable work for a conference more focused on handwriting recognition, but I am no expert in the field.
ICLR
Title Sequence generation with a physiologically plausible model of handwriting and Recurrent Mixture Density Networks

Abstract The purpose of this study is to explore the feasibility and potential benefits of using a physiologically plausible model of handwriting as a feature representation for sequence generation with recurrent mixture density networks. We build on recent results in handwriting prediction developed by Graves (2013), and we focus on generating sequences that possess the statistical and dynamic qualities of handwriting and calligraphic art forms. Rather than model raw sequence data, we first preprocess and reconstruct the input training data with a concise representation given by a motor plan (in the form of a coarse sequence of ‘ballistic’ targets) and corresponding dynamic parameters (which define the velocity and curvature of the pen-tip trajectory). This representation provides a number of advantages, such as enabling the system to learn from very few examples by introducing artificial variability in the training data, and mixing of visual and dynamic qualities learned from different datasets.

1 INTRODUCTION Recent results (Graves, 2013) have demonstrated that, given a sufficiently large training data-set, Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) Recurrent Mixture Density Networks (RMDNs) (Schuster, 1999) are capable of learning and generating convincing synthetic handwriting sequences. In this study we explore a similar network architecture combined with an intermediate feature representation, given by the parameters of a physiologically plausible model of handwriting: the Sigma Lognormal model (Plamondon, 1995; Plamondon et al., 2014). In the work by Graves (2013) and subsequent derivations, the RMDN operates on raw sequences of points recorded with a digitizing device. In our approach we preprocess the training data using an intermediate representation that describes a form of “motor program” coupled with a sequence of dynamic parameters that describe the evolution of the pen tip. By doing so, we use a representation that is more concise (i.e. lower in dimensionality), meaningful (i.e. every data point is a high level segment descriptor of the trajectory), and resolution independent. This project stems from the observation that human handwriting results from the orchestration of a large number of motor and neural subsystems, and is ultimately produced with the execution of complex and skillful motions. As such we seek a representation that abstracts the complex task of trajectory formation from the neural network, which is then rather focused on a higher level task of movement planning. Note that for the scope of this study, we do not implement text-to-handwriting synthesis (Graves, 2013), but rather focus on the task of generating sequences that possess the statistical and dynamic qualities of handwriting, which can be expanded to calligraphy, asemic handwriting, drawings and graffiti (Berio & Leymarie, 2015; Berio et al., 2016). In particular, we focus on two distinct tasks: (1) learning and generating motor plans and (2) given a motor plan, predicting the corresponding dynamic parameters that determine the visual and dynamic qualities of the pen trace.
We then go on to show that this modular workflow can be exploited in ways such as: mixing of dynamic qualities between data-sets (a form of handwriting “style transfer”) as well as learning from small datasets (a form of “one shot learning”). The remainder of this paper is organised as follows: in Section 2, after briefly summarising the background context, we describe the Sigma Lognormal model and RMDNs; in Section 3 we present the data preprocessing step and the RMDN models that build up our system; in Section 4 we propose various applications of the system, including learning handwriting representations from small datasets and mixing styles.

2 BACKGROUND Our study is grounded on a number of notions and principles that have been observed in the general study of human movement as well as in the handwriting synthesis/analysis field (known as Graphonomics (Kao et al., 1986)). The speed profile of aiming movements is typically characterised by a “bell shape” that is variably skewed depending on the rapidity of the movement (Lestienne, 1979; Nagasaki, 1989; Plamondon et al., 2013). Complex movements can be described by the superimposition of a discrete number of “ballistic” units of motion, which in turn can each be represented by the classic bell-shaped velocity profile and are often referred to as strokes. A number of methods synthesise handwriting through the temporal superimposition of strokes, the velocity profile of which is modelled with a variety of functions including sinusoidal functions (Morasso & Mussa Ivaldi, 1982; Maarse, 1987; Rosenbaum et al., 1995), Beta functions (Lee & Cho, 1998; Bezine et al., 2004), and lognormals (Plamondon et al., 2009). In this study we rely on a family of models known as the Kinematic Theory of Rapid Human Movements, which has been developed by Plamondon et al. in an extensive body of work since the 1990s (Plamondon, 1995; Plamondon et al., 2014). Plamondon et al. (2003) show that if we consider that a movement is the result of the parallel and hierarchical interaction of a large number of coupled linear systems, the impulse response of such a system to a centrally generated command asymptotically converges to a lognormal function. This assumption is attractive from a modelling perspective because it abstracts the high complexity of the neuromuscular system in charge of generating movements with a relatively simple mathematical model, which further provides state-of-the-art reconstruction of human velocity data (Rohrer & Hogan, 2006; Plamondon et al., 2013). A number of methods have used neurally inspired approaches for the task of handwriting trajectory formation (Schomaker, 1992; Bullock et al., 1993; Wada & Kawato, 1993). Similarly to our proposed method, Ltaief et al. (2012) train a neural network on a preprocessed dataset where the raw input data is reconstructed in the form of handwriting model parameters. Nair & Hinton (2005) use a sequence of neural networks to learn the motion of two orthogonal mass spring systems from images of handwritten digits for classification purposes. With a similar motivation to ours, Plamondon & Privitera (1996) use a Self Organising Map (SOM) to learn a sequence of ballistic targets, which describe a coarse motor plan of handwriting trajectories.
Our method builds in particular on the work of Graves (2013), who describes a system that uses recurrent mixture density networks (RMDNs) (Bishop, 1994) extended with an LSTM architecture (Hochreiter & Schmidhuber, 1997) to generate synthetic handwriting in a variety of styles.

2.1 SIGMA LOGNORMAL MODEL On the basis of Plamondon’s Kinematic Theory (Plamondon, 1995), the Sigma Lognormal (ΣΛ) model (Plamondon & Djioua, 2006) describes complex handwriting trajectories via the vectorial superimposition of a discrete number of strokes. With the assumption that curved handwriting movements are done by rotating the wrist, the curvilinear evolution of strokes is described with a circular arc shape. Each stroke is characterised by a variably asymmetric "bell shape" speed profile, which is described with a (3-parameter) lognormal function. The planar evolution of a trajectory is then described by a sequence of virtual targets $\{v_i\}_{i=1}^{m}$, which define “imaginary” (i.e. not necessarily located along the generated trajectory) loci at which each consecutive stroke is aimed. The virtual targets provide a low level description of the motor plan for the handwriting trajectory. A smooth trajectory is then generated by integrating the velocity of each stroke over time. The trajectory smoothness can be defined by adjusting the activation-time offset of a given stroke with respect to the previous stroke, which is denoted with ∆t0i; a smaller time offset (i.e. a greater overlap between lognormal components) will result in a smoother trajectory (Fig. 1, c). The curvature of the trajectory can be varied by adjusting the central angle of each circular arc, which is denoted with θi. Equations and further details for the ΣΛ model can be found in Appendix A. A sequence of virtual targets provides a very sparse spatial description or “motor plan” for the trajectory evolution. The remaining stroke parameters, ∆t0i and θi, define the temporal, dynamic and geometric features of the trajectory, and we refer to those as dynamic parameters.

2.2 RECURRENT MIXTURE DENSITY NETWORKS Mixture Density Networks (MDN) were introduced by Bishop (1994) in order to model and predict the parameters of a Gaussian Mixture Model (GMM), i.e. a set of means, covariances and mixture weights. Schuster (1999) showed that MDNs could be used to model temporal data using RNNs. The author used Recurrent Mixture Density Networks (RMDNs) to model the statistical properties of speech, and they were found to be more successful than traditional GMMs. Graves (2013) used LSTM RMDNs to model and synthesise online handwriting, providing the basis for extensions to the method, also used in Ha et al. (2016); Zhang et al. (2016). Note that in the case of a sequential model, the RMDN outputs a unique set of GMM parameters for each timestep t, allowing the probability distribution to change with time as the input sequence develops. Further details can be found in Appendix C.1.

3 METHOD We operate on discrete and temporally ordered sequences of planar coordinates. Similarly to Graves (2013), most of our results come from experiments made on the IAM online handwriting database (Marti & Bunke, 2002). However, we have made preliminary experiments with other datasets, such as the Graffiti Analysis Database (Lab, 2009) as well as limited samples collected in our laboratory from a user with a digitiser tablet. As a first step, we preprocess the raw data and reconstruct it in the form of ΣΛ model parameters (Section 3.1).
We then train and evaluate a number of RMDN models for two distinct tasks:

1. Virtual target prediction. We use the V2V-model for this task. Given a sequence of virtual targets, this model predicts the next virtual target.
2. Dynamic parameter prediction. For this task we trained and compared two model architectures. Given a sequence of virtual targets, the task of these models is to predict the corresponding dynamic parameters. The V2D-model is conditioned only on the previous virtual targets, whereas the A2D-model is conditioned on both the previous virtual targets and dynamic parameters.

We then exploit the modularity of this system to conduct various experiments, details of which can be found in Section 4.

3.1 PREPROCESSING: RECONSTRUCTING ΣΛ PARAMETERS A number of methods have been developed by Plamondon et al. in order to reconstruct ΣΛ-model parameters from digitised pen input data (O’Reilly & Plamondon, 2008; Plamondon et al., 2014; Fischer et al., 2014). These methods provide the ideal reconstruction of model parameters, given a high resolution digitised pen trace. While such methods are superior for handwriting analysis and biometric purposes, we opt for a less precise method (Berio & Leymarie, 2015) that is less sensitive to sampling quality and is aimed at generating virtual target sequences that remain perceptually similar to the original trace. We purposely choose to ignore the original dynamics of the input, and base the method on geometric input data only. This is done in order to work with training sequences that are independent of sampling rate, and in sight of future developments in which we intend to extract handwriting traces from bitmaps, inferring causal/dynamic information from a static input as humans are capable of doing (Edelman & Flash, 1987; Freedberg & Gallese, 2007). Our method operates on a uniformly sampled input contour, which is then segmented in correspondence with perceptually salient key points: loci of curvature extrema modulated by neighbouring contour segments (Brault & Plamondon, 1993; Berio & Leymarie, 2015), which gives an initial estimate of each virtual target vi. We then (i) fit a circular arc to each contour segment in order to estimate the θi parameters and (ii) estimate the ∆t0i parameters by analysing the contour curvature in the region of each key point. Finally, (iii) we iteratively adjust the virtual target positions to minimise the error between the original trajectory and the one generated by the corresponding ΣΛ parameters. For further details on the ΣΛ parameter reconstruction method, the reader is referred to Appendix B.

3.2 DATA AUGMENTATION We can exploit the ΣΛ parameterisation to generate many variations over a single trajectory which are visually consistent with the original, and with a variability that is similar to the one that would be seen in multiple instances of handwriting made by the same writer (Fig. 3) (Djioua & Plamondon, 2008a; Fischer et al., 2014; Berio & Leymarie, 2015). Given a dataset of n training samples, we randomly perturb the virtual target positions and dynamic parameters of each sample np times, which results in a new augmented dataset of size n + n × np, where legibility and trajectory smoothness are maintained across samples. This would not be possible on the raw online dataset, as perturbations of each data-point would eventually result in a noisy trajectory.
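A minimal sketch of this augmentation step is given below, assuming independent Gaussian perturbations; the noise model and its scales (`pos_noise`, `dt_noise`, `theta_noise`) are free choices of this sketch, not the exact values used in our experiments.

```python
import numpy as np

def augment(virtual_targets, dt0, theta, n_p,
            pos_noise=0.02, dt_noise=0.01, theta_noise=0.05, seed=0):
    """Generate n_p perturbed copies of one Sigma-Lognormal sample.
    Positional noise is scaled relative to the extent of the trajectory."""
    rng = np.random.default_rng(seed)
    extent = np.ptp(virtual_targets, axis=0).max()
    samples = []
    for _ in range(n_p):
        v = virtual_targets + rng.normal(0.0, pos_noise * extent,
                                         virtual_targets.shape)
        d = dt0 + rng.normal(0.0, dt_noise, dt0.shape)
        t = theta + rng.normal(0.0, theta_noise, theta.shape)
        samples.append((v, d, t))
    return samples
```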
3.3 PREDICTING VIRTUAL TARGETS WITH THE V2V-MODEL The V2V-model is conditioned on a history of virtual targets and, given a new virtual target, it predicts the next virtual target (hence the name V2V). Note that each virtual target includes the corresponding pen state — up (not touching the paper) or down (touching the paper). Repeatedly feeding the predicted virtual target back into the model at every timestep allows the model to synthesise sequences of arbitrary length, as sketched after Section 3.4. The implementation of this model is very similar to the handwriting prediction demonstrated by Graves (2013), although instead of operating directly on the digitised pen positions, we operate on the much coarser virtual target sequences which are extracted during the preprocessing step. The details of this model can be found in Appendix C.3.

3.4 PREDICTING DYNAMIC PARAMETERS WITH THE V2D AND A2D MODELS The goal of these models is to predict the corresponding dynamic parameters (∆t0i, θi) for a given sequence of virtual targets. We train and compare two model architectures for this task. The V2D-model is conditioned on the history of virtual targets, and given a new virtual target, this model predicts the corresponding dynamic parameters (∆t0i, θi) for the current stroke (hence the name V2D). Running this model incrementally for every stroke of a given virtual target sequence allows us to predict dynamic parameters for each stroke. The implementation of this model is very similar to the V2V-model, and details can be found in Appendix C.4. At each timestep, the V2D model outputs and maintains internal memory of a probability distribution for the predicted dynamic parameters. However, the network has no knowledge of the parameters that are sampled and used. Hence, dynamic parameters might not be consistent across timesteps. This problem can be overcome by feeding the sampled dynamic parameters back into the model at the next timestep. From a human motor planning perspective this makes sense as, for a given drawing style, when we decide the curvature and smoothness of a stroke we take into consideration the choices made in previously executed strokes. The A2D model predicts the corresponding dynamic parameters (∆t0i, θi) for the current stroke conditioned on a history of both virtual targets and dynamic parameters (i.e. all ΣΛ parameters - hence the name A2D). We use this model in a similar way to the V2D model, whereby we run it incrementally for every stroke of a given virtual target sequence. However, internally, at every timestep the predicted dynamic parameters are fed back into the model at the next timestep along with the virtual target from the given sequence. The details of this implementation can be found in Appendix C.5.
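The autoregressive use of these models, where sampled outputs are fed back as inputs at the next timestep, can be sketched as follows for the V2V case. The `model.step` method and the `sample_fn` argument are hypothetical stand-ins for the RMDN forward pass and the mixture sampling of Section C.1.

```python
import numpy as np

def generate_virtual_targets(model, prime, n_steps, sample_fn, rng=None):
    """Autoregressively synthesise virtual targets with a trained V2V model.
    `model.step(x)` (hypothetical) consumes one 3-d input and returns GMM
    parameters plus the pen-state probability e; `sample_fn` draws one
    displacement from those GMM parameters."""
    rng = rng or np.random.default_rng()
    for x in prime:                      # condition on the priming sequence
        gmm, e = model.step(x)
    x = prime[-1]
    sequence = []
    for _ in range(n_steps):
        gmm, e = model.step(x)
        dv = sample_fn(gmm)              # sampled displacement Delta v
        u = float(rng.random() < e)      # Bernoulli pen state
        x = np.array([dv[0], dv[1], u])
        sequence.append(x)
    return np.array(sequence)
```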
4 EXPERIMENTS AND RESULTS Predicting Virtual Targets. In a first experiment we use the V2V model, trained on the preprocessed IAM dataset, to predict sequences of virtual targets. We prime the network by first feeding it a sequence from the test dataset. This conditions the network to predict sequences that are similar to the prime. We can see from the results (Fig. 4) that the network is indeed able to produce sequences that capture the statistical qualities of the priming sequence, such as overall incline, proportions, and oscillation frequency. On the other hand, we observe that amongst the generated sequences there are often patterns which do not represent recognisable letters or words. This can be explained by the high variability of samples contained in the IAM dataset, and by the fact that our representation is very concise, with each data-point carrying high significance. As a result, the slightest variation in a prediction is likely to cause a large error in the next. To overcome this problem, we train a new model with a dataset augmented with 10× variations as described in Section 3.2. Due to our limited computing resources (we are thus not able to thoroughly test the large network architectures that would be necessary to train on the whole augmented dataset), we test this method on 1/10th of the dataset, which results in a new dataset of the same size as the original, but with a lower number of handwriting specimens and a number of subtle variations per specimen. With this approach, the network predictions maintain statistical similarity with the priming sequences, and patterns emerge that are more evocative of letters of the alphabet or whole words, with fewer unrecognizable patterns (Fig. 4). To validate this result, we also test the model’s performance when training it on 1/10th of the dataset without data augmentation, and the results are clearly inferior to the previous two models. This suggests that the data augmentation step is highly beneficial to the performance of the network.

Predicting Dynamic Parameters. We first evaluate the performance of both the V2D and A2D models on virtual targets extracted from the test set. Remarkably, although the networks have not been trained on these sequences, both models predict dynamic parameters that result in trajectories that are readable, and are often similar to the target sample. We settle on the A2D model trained on a 3× augmented dataset, which we qualitatively assess to produce the best results (Fig. 5). We then proceed with applying the same A2D model to virtual targets generated by the V2V models primed on the test set. We observe that the predictions on sequences generated with the augmented dataset are highly evocative of handwriting and are clearly different depending on the priming sequence (Fig. 6, c), while the predictions made with the non-augmented dataset are more likely to resemble random scribbles rather than human readable handwriting (Fig. 6, b). This further confirms the utility of the data augmentation step.

User defined virtual targets. The dynamic parameter prediction models can also be used in combination with user defined virtual target sequences (Fig. 7). Such a method can be used to quickly and interactively generate handwriting trajectories in a given style, by a simple point and click procedure. The style (in terms of curvature and dynamics) of the generated trajectory is determined by the data used to train the A2D model, and by priming the A2D model with different samples, we can apply different styles to the user defined virtual targets.

One shot learning. In a subsequent experiment, we apply the data augmentation method described in Section 3.2 to enable both virtual target and dynamic prediction models to learn from a small dataset of calligraphic samples recorded by a user with a digitiser tablet. We observe that with a low number of augmentations (50×) the models generate quasi-random outputs, and seem to learn only the left to right trend of the input. With higher augmentation (700×), the system generates outputs that are consistent to the human eye with the input data (Fig. 8).
We also train our models using only a single sample (augmented 7000×) and again observe that the model is able to produce novel sequences that are similar to the input sample (Fig. 9). Naturally, the output is a form of recombination of the input, but this is sufficient to synthesise novel outputs that are qualitatively similar to the input. It should be noted that we are judging the performance of the one-shot learned models qualitatively, and we may not be testing the full limits of how well the models are able to generalise. On the other hand, these results, as well as the “style transfer” capabilities exposed in the following section, suggest a certain degree of generalisation.

Style Transfer. Here, with a slight abuse of terminology, we utilise the term "style" to refer to the dynamic and geometric features (such as pen-tip acceleration and curvature) that determine the visual qualities of a handwriting trajectory. Given a sequence of virtual targets generated with the V2V model trained on one dataset, we can also predict the corresponding dynamic parameters with the A2D model trained on another. The result is an output that is similar to one dataset in lettering structure, but possesses the fine dynamic and geometric features of the other. If we visually inspect Fig. 10, we can see that the sequence of virtual targets reconstructed by the dataset preprocessing method and the trajectory generated over the same sequence of virtual targets with dynamic parameters learned from a different dataset are both readable. This emphasises the importance of using perceptually salient points along the input for estimating key-points in the dataset preprocessing step (Section 3.1). Furthermore, we can perform the same type of operation within a single dataset, by priming the A2D model with the dynamic parameters of a particular training example, while feeding it with the virtual targets of another. To test this we train both (V2V, A2D) models on a corpus containing 5 samples of the same sentence written in different styles and then augmented 1400× (Fig. 11). We envision the utility of such a system in combination with virtual targets interactively specified by a user.

5 CONCLUSIONS AND FUTURE WORK We have presented a system that is able to learn the parameters for a physiologically plausible model of handwriting from an online dataset. We hypothesise that such a movement centric approach is advantageous as a feature representation for a number of reasons. Using such a representation provides a performance that is similar to the handwriting prediction demonstrated by Graves (2013) and Ha et al. (2016), with a number of additional benefits. These include the ability to: (i) capture both the geometry and dynamics of a hand drawn/written trace with a single representation, (ii) express the variability of different types of movement concisely at the feature level, (iii) demonstrate greater flexibility for procedural manipulations of the output, (iv) mix “styles” (applying curvature and dynamic properties from one example to the motor plan of another), (v) learn a generative model from a small number of samples (n < 5), and (vi) generate resolution independent outputs. The reported work provides a solid basis for a number of different future research avenues. As a first extension, we plan to implement the label/text input alignment method described in Graves’ original work, which should allow us to synthesise readable handwritten text and also to provide a more thorough comparison of the two methods.
Our method strongly relies on an accurate reconstruction of the input in the preprocessing step. Improvements should especially target the parts of the latter method that depend on user-tuned parameters, such as the identification of salient points along the input (which requires a final peak detection pass), and the measurement of the sharpness of the input in correspondence with salient points.

ACKNOWLEDGMENTS

The system takes as a starting point the original work developed by Graves (2013). We use Tensorflow, the open-source software library for numerical computation and deep learning (Abadi et al., 2015), and a rapid implementation was possible thanks to a public domain implementation developed by Ha (2015).

A SIGMA LOGNORMAL MODEL

The Sigma Lognormal model (Plamondon & Djioua, 2006) describes complex handwriting trajectories via the vectorial superimposition of lognormal strokes. The corresponding speed profile Λi(t) assumes a variably asymmetric "bell shape", which is described with a 3-parameter lognormal function

$$\Lambda_i(t) = \frac{1}{\sigma_i \sqrt{2\pi}\,(t - t_{0i})} \exp\left(-\frac{\left(\ln(t - t_{0i}) - \mu_i\right)^2}{2\sigma_i^2}\right) \qquad (1)$$

where t0i defines the activation time of a stroke and the parameters µi and σi determine the shape of the lognormal function. µi is referred to as log-time delay and is biologically interpreted as the rapidity of the neuromuscular system to react to an impulse generated by the central nervous system (Plamondon et al., 2003); σi is referred to as log-response time and determines the spread and asymmetry of the lognormal. The curvilinear evolution of strokes is described with a circular arc shape, which results in

$$\phi_i(t) = \theta_i + \theta_i \left[1 + \operatorname{erf}\left(\frac{\ln(t - t_{0i}) - \mu_i}{\sigma_i \sqrt{2}}\right)\right], \qquad (2)$$

where θi is the central angle of the circular arc that defines the shape of the ith stroke. The planar evolution of a trajectory is defined by a sequence of virtual targets $\{v_i\}_{i=1}^{m}$, where a trajectory with m virtual targets will be characterised by m − 1 circular arc strokes. A ΣΛ trajectory, parameterised by the virtual target positions, is given by

$$\xi(t) = v_1 + \int_0^t \sum_{i=1}^{m-1} \Lambda_i(\tau)\, \Phi_i(\tau)\, (v_{i+1} - v_i)\, d\tau, \qquad (3)$$

with

$$\Phi_i(t) = h(\theta_i) \begin{bmatrix} \cos\phi_i(t) & -\sin\phi_i(t) \\ \sin\phi_i(t) & \cos\phi_i(t) \end{bmatrix}, \qquad h(\theta_i) = \begin{cases} \dfrac{2\theta_i}{2\sin\theta_i} & \text{if } |\sin\theta_i| > 0, \\ 1 & \text{otherwise,} \end{cases} \qquad (4)$$

which scales the extent of the stroke based on the ratio between the perimeter and the chord length of the circular arc.

Intermediate parameterisation. In order to facilitate the precise specification of the timing and profile shape of each stroke, we recur to an intermediate parameterisation that takes advantage of a few known properties of the lognormal (Djioua & Plamondon, 2008b) in order to define each stroke with (i) a time offset ∆ti with respect to the previous stroke, (ii) a stroke duration Ti and (iii) a shape parameter αi, which defines the skewedness of the lognormal. The corresponding ΣΛ parameters {t0i, µi, σi} can then be computed with:

$$\sigma_i = \ln(1 + \alpha_i), \qquad (5)$$

$$\mu_i = -\ln\left(\frac{e^{3\sigma_i} - e^{-3\sigma_i}}{T_i}\right), \qquad (6)$$

and

$$t_{0i} = t_{1i} - e^{\mu_i - 3\sigma_i}, \qquad t_{1i} = t_{1(i-1)} + \Delta t_i, \qquad t_{1(0)} = 0, \qquad (7)$$

where t1i is the onset time of the lognormal stroke profile. As α approaches 0, the shape of the lognormal converges to a Gaussian with mean $t_1 + e^{\mu - \sigma^2}$ (the mode of the lognormal) and standard deviation $T/6$.
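For illustration, a compact NumPy/SciPy sketch of the ΣΛ synthesis is given below, using a discrete-time approximation of the integral in eqn. (3). The step count and numerical details are choices of this sketch, and the angle profile follows eqn. (2) and the rotation matrix of eqn. (4) as reconstructed above.

```python
import numpy as np
from scipy.special import erf

def lognormal_speed(t, t0, mu, sigma):
    """Lognormal speed profile Lambda_i(t) of one stroke (eqn. 1)."""
    s = np.zeros_like(t)
    m = t > t0
    x = t[m] - t0
    s[m] = np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) \
           / (sigma * np.sqrt(2 * np.pi) * x)
    return s

def stroke_angle(t, t0, mu, sigma, theta):
    """Direction profile phi_i(t) of one circular-arc stroke (eqn. 2)."""
    phi = np.full_like(t, theta)
    m = t > t0
    phi[m] = theta + theta * (1 + erf((np.log(t[m] - t0) - mu)
                                      / (sigma * np.sqrt(2))))
    return phi

def sigma_lognormal(v, t0, mu, sigma, theta, T=1.0, steps=2000):
    """Discrete-time integration of the trajectory xi(t) (eqns. 3-4)."""
    t = np.linspace(1e-6, T, steps)
    dt = t[1] - t[0]
    pos = np.tile(np.asarray(v[0], dtype=float), (steps, 1))
    for i in range(len(v) - 1):
        d = np.asarray(v[i + 1]) - np.asarray(v[i])
        speed = lognormal_speed(t, t0[i], mu[i], sigma[i])
        phi = stroke_angle(t, t0[i], mu[i], sigma[i], theta[i])
        h = 2 * theta[i] / (2 * np.sin(theta[i])) \
            if abs(np.sin(theta[i])) > 1e-8 else 1.0   # perimeter/chord ratio
        vel = speed[:, None] * h * np.column_stack(
            [np.cos(phi) * d[0] - np.sin(phi) * d[1],
             np.sin(phi) * d[0] + np.cos(phi) * d[1]])
        pos = pos + np.cumsum(vel, axis=0) * dt        # superimpose strokes
    return pos
```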
B RECONSTRUCTING ΣΛ PARAMETERS FROM AN ONLINE DATASET

The ΣΛ parameter reconstruction method operates on an input contour uniformly sampled at a fixed distance, which is defined depending on the extent of the input; we denote the kth sampled point along the input with p[k]. The input contour is then segmented in correspondence with perceptually salient key points, which correspond with loci of curvature extrema modulated by neighbouring contour segments (Brault & Plamondon, 1993; Berio & Leymarie, 2015). The proposed approach shares strong similarities with previous work done for (i) compressing online handwriting data with a circular-arc based segmentation (Li et al., 1998) and (ii) generating synthetic data for handwriting recognisers (Varga et al., 2005). The parameter reconstruction algorithm can be summarised with the following steps:

• Find m key-points in the input contour.
• Fit a circular arc to each contour segment defined between two consecutive key-points (defining individual strokes), and obtain an estimate of each curvature parameter θi.
• For each stroke compute the corresponding ∆ti parameter by analysing the curvature signal in the region of the corresponding key-point.
• Define an initial sequence of virtual targets with m positions corresponding with each input key-point.
• Repeat the following until convergence or until a maximum number of iterations is reached (Berio & Leymarie, 2015):
– Integrate the ΣΛ trajectory with the current parameter estimate.
– Identify m key-points in the generated trajectory.
– Move the virtual target positions to minimise the distance between the key-points of the generated trajectory and the key-points on the input contour.

The details for each step are highlighted in the following paragraphs.

Estimating input key-points. Finding significant curvature extrema (which can be counted as convex and concave features for a closed/solid shape) is an active area of research, as relying on discrete curvature measurements remains challenging. We currently rely on a method described by Feldman & Singh (2005), and supported experimentally by De Winter & Wagemans (2008): first we measure the turning angle at each position of the input p[k] and then compute a smooth version of the signal by convolving it with a Hanning window. We assume that the turning angles have been generated by a random process with a Von Mises distribution with mean at 0 degrees, which corresponds with giving maximum probability to a straight line. We then measure the surprisal (i.e. the negative logarithm of the probability) for each sample as defined by Feldman & Singh (2005), which, normalised to the [0, 1] range, simplifies to

$$1 - \cos(\theta[k]), \qquad (8)$$

where θ[k] is the (smoothed) turning angle. The first and last sample indices of the surprisal signal, together with its local maxima, result in m key-point indices {ẑi}. The corresponding key-points along the input contour are then given by {p[ẑi]}.
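A sketch of this key-point detection step is given below. The smoothing window width is a free parameter of the sketch, and the index bookkeeping is approximate (each turning-angle sample is associated with the interior contour point it is centred on).

```python
import numpy as np

def key_point_indices(points, window=9):
    """Locate salient key-points on a uniformly sampled contour via the
    normalised turning-angle surprisal of eqn. (8)."""
    d = np.diff(points, axis=0)
    heading = np.arctan2(d[:, 1], d[:, 0])
    turning = np.diff(heading)
    turning = np.arctan2(np.sin(turning), np.cos(turning))  # wrap to [-pi, pi]
    w = np.hanning(window)
    smooth = np.convolve(turning, w / w.sum(), mode="same")
    surprisal = 1.0 - np.cos(smooth)                        # eqn. (8)
    interior = np.where((surprisal[1:-1] > surprisal[:-2]) &
                        (surprisal[1:-1] > surprisal[2:]))[0] + 1
    # map turning-angle indices to contour point indices (offset by one),
    # and include the first and last samples as key-points
    return np.concatenate(([0], interior + 1, [len(points) - 1]))
```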
Estimating stroke curvature parameters. For each section of the input contour defined between two consecutive key-points, we estimate the corresponding stroke curvature parameter θi by first computing a least squares fit of a circle to the contour section. We then compute the internal angle of the arc supported between the two key-points, which equals 2θi, i.e. two times the corresponding curvature parameter θi.

Estimating stroke time-overlap parameters. This step is based on the observation that smaller values of ∆t0i, i.e. a greater time overlap between strokes, result in smoother trajectories. On the contrary, a sufficiently large value of ∆t0i will result in a sharp corner in the proximity of the corresponding virtual target. We exploit this notion, and compute an estimate of the ∆t0i parameters by examining the sharpness of the input contour in the region of each key-point. To do so we examine the previously computed turning angle surprisal signal, in which we can observe that sharp corners in the contour correspond with sharper peaks, while smoother corners correspond with smooth peaks with a larger spread. By treating the surprisal signal as a probability density function, we can then use statistical methods to measure the shape of each peak with a mixture of parametric distributions, and examine the shape of each mixture component in order to get an estimate of the corresponding sharpness along the input contour. To do so we employ a variant of Expectation Maximisation (EM) (Dempster et al., 1977) in which we treat the distance along the contour as a random variable weighted by the corresponding signal amplitude normalised to the [0, 1] range. Once the EM algorithm has converged, we treat each mixture component as a radial basis function (RBF) centred at the corresponding mean, and use linear regression as in Radial Basis Function Networks (Stulp & Sigaud, 2015) to fit the mixture parameters to the original signal (Calinon, 2016). Finally we generate an estimate of sharpness λi (bounded in the [0, 1] range) for each key-point using a logarithmic function of the mixture parameters and weights. The corresponding ∆t0i parameters are then given by

$$\Delta t_i = \Delta t_{\min} + (\Delta t_{\max} - \Delta t_{\min})\,\lambda_i, \qquad (9)$$

where ∆tmin and ∆tmax are user-specified parameters that determine the range of the ∆t0i estimates. Note that we currently utilise an empirically defined function for this task, but in future work we intend to learn the mapping between sharpness and mixture component parameters from synthetic samples generated with the ΣΛ model (for which ∆t0i, and consequently λi, are known).

Iteratively estimating virtual target positions. The loci along the input contour corresponding with the estimated key-points provide an initial estimate for a sequence of virtual targets, where each virtual target position is given by vi = p[ẑi]. Due to the trajectory-smoothing effect produced by the time overlaps, the initial estimate will result in a generated trajectory that is likely to have a reduced scale with respect to the input we wish to reconstruct (Varga et al., 2005). In order to produce a more accurate reconstruction, we use an iterative method that shifts each virtual target towards a position that minimises the error between the generated trajectory and the reconstructed input. To do so, we compute an estimate of m output key-points {ξ(zi)} in the generated trajectory, where z2, ..., zm are the time occurrences at which the influence of one stroke exceeds the previous. These correspond with salient points along the trajectory (extrema of curvature) and can be easily computed by finding the time occurrence at which two consecutive lognormals intersect. Similarly to the input key-point case, ξ(z1) and ξ(zm) respectively denote the first and last points of the generated trajectory. We then iteratively adjust the virtual target positions in order to move each generated key-point ξ(zi) towards the corresponding input key-point p[ẑi] with

$$v_i \leftarrow v_i + p[\hat{z}_i] - \xi(z_i). \qquad (10)$$

The iteration continues until the Mean Square Error (MSE) of the distances between every pair p[ẑi] and ξ(zi) is less than an experimentally set threshold, or until a maximum number of iterations is reached (Fig. 16).
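Returning to eqn. (9), the mapping from sharpness estimates to stroke time offsets is a one-liner; the bounds below are placeholder values for the user-specified ∆tmin and ∆tmax:

```python
import numpy as np

def sharpness_to_dt(lmbda, dt_min=0.05, dt_max=0.5):
    """Map sharpness estimates lambda_i in [0, 1] to stroke time offsets
    (eqn. 9); dt_min and dt_max are the user-specified bounds."""
    lmbda = np.clip(np.asarray(lmbda, dtype=float), 0.0, 1.0)
    return dt_min + (dt_max - dt_min) * lmbda
```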
This iterative method usually converges to a good reconstruction of the input within a few iterations (usually < 5). Interestingly, even though the dynamic information of the input is discarded, the reconstructed velocity profile is often similar to the original (in number of peaks and shape), which can be explained by the extensively studied relationships between geometry and dynamics of movement trajectories (Viviani & Terzuolo, 1982; Lacquaniti et al., 1983; Viviani & Schneider, 1991; Flash & Handzel, 2007).

C RMDN MODEL DETAILS

In order to increase the expressive generative capabilities of our networks, we train them to model parametric probability distributions. Specifically, we use Recurrent Mixture Density Networks that output the parameters of a bivariate Gaussian Mixture Model.

C.1 BIVARIATE RECURRENT MIXTURE DENSITY NETWORK

If a target variable zt can be expressed as a bivariate GMM, then for K Gaussians we can use a network architecture with output dimensions of 6K. This output vector would then consist of (µ̂t ∈ IR2K, σ̂t ∈ IR2K, ρ̂t ∈ IRK, π̂t ∈ IRK), which we use to calculate the parameters of the GMM via (Graves, 2013)

$$\begin{aligned}
\mu_t^k &= \hat{\mu}_t^k &&: \text{means for the $k$'th Gaussian},\ \mu_t^k \in \mathbb{R}^2 \\
\sigma_t^k &= \exp(\hat{\sigma}_t^k) &&: \text{standard deviations for the $k$'th Gaussian},\ \sigma_t^k \in \mathbb{R}^2 \\
\rho_t^k &= \tanh(\hat{\rho}_t^k) &&: \text{correlations for the $k$'th Gaussian},\ \rho_t^k \in (-1, 1) \\
\pi_t^k &= \operatorname{softmax}(\hat{\pi}_t^k) &&: \text{mixture weight for the $k$'th Gaussian},\ \textstyle\sum_k^K \pi_t^k = 1
\end{aligned} \qquad (11)$$

We can then formulate the probability distribution function Pt at timestep t as

$$P_t = \sum_k^K \pi_t^k\, \mathcal{N}(z_t \mid \mu_t^k, \sigma_t^k, \rho_t^k), \quad\text{where} \qquad (12)$$

$$\mathcal{N}(x \mid \mu, \sigma, \rho) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left[-\frac{Z}{2(1-\rho^2)}\right], \quad\text{and} \qquad (13)$$

$$Z = \frac{(x_1-\mu_1)^2}{\sigma_1^2} + \frac{(x_2-\mu_2)^2}{\sigma_2^2} - \frac{2\rho(x_1-\mu_1)(x_2-\mu_2)}{\sigma_1\sigma_2} \qquad (14)$$

C.2 TRAINING OBJECTIVE

If we let θ denote the parameters of a network, then given a training set S of input-target pairs (x ∈ X, ŷ ∈ Ŷ), our training objective is to find the set of parameters θML which has the maximum likelihood (ML). This is the θ that maximises the probability of the training set S and is formulated as (Graves, 2008)

$$\theta_{ML} = \arg\max_\theta \Pr(S \mid \theta) \qquad (15)$$
$$= \arg\max_\theta \prod_{(x,\hat{y}) \in S} \Pr(\hat{y} \mid x, \theta). \qquad (16)$$

Since the logarithm is a monotonic function, a common method for maximising this likelihood is minimising its negative logarithm, also known as the Negative Log Likelihood (NLL), Hamiltonian or surprisal (Lin & Tegmark, 2016). We can then define our cost function J as

$$J = -\ln \prod_{(x,\hat{y}) \in S} \Pr(\hat{y} \mid x, \theta) \qquad (17)$$
$$= -\sum_{(x,\hat{y}) \in S} \ln \Pr(\hat{y} \mid x, \theta). \qquad (18)$$

For a bivariate RMDN, the objective function can be formulated by substituting eqn. (12) in place of Pr(ŷ | x, θ) in eqn. (18).

C.3 V2V MODEL

Input. At each timestep i, the input to the V2V model is xi ∈ IR3, where the first two elements are given by ∆vi (the relative position displacement for the i'th stroke, i.e. between the i'th virtual target and the next), and the last element is ui ∈ {0, 1} (the pen-up state during the same stroke). Given input xi and its current internal state (ci, hi), the network learns to predict xi+1 by learning the parameters for the Probability Density Function (PDF) Pr(xi+1 | xi, ci, hi). With a slight abuse of notation, this can be expressed more intuitively as Pr(xi+1 | xi, xi−1, ..., xi−n), where n is the maximum sequence length.

Output. We express the predicted probability of ∆vi as a bivariate GMM as described in Section C.1, and ui as a Bernoulli distribution. Thus for K Gaussians the network has output dimensions of (6K + 1).
In addition to the parameters in eqn. (11), the output contains êi, which we use to calculate the pen state probability via (Graves, 2013)

$$e_i = \frac{1}{1 + \exp(\hat{e}_i)}, \qquad e_i \in (0, 1) \qquad (19)$$

Architecture. We use Long Short-Term Memory (Hochreiter & Schmidhuber, 1997) networks with input, output and forget gates (Gers et al., 2000), and we use Dropout regularization as described by Pham et al. (2014). We employ both a grid search and a random search (Bergstra & Bengio, 2012) on various hyperparameters in the ranges: sequence length {64, 128}, number of hidden recurrent layers {1, 2, 3}, dimensions per hidden layer {64, 128, 256, 400, 512, 900, 1024}, number of Gaussians {5, 10, 20}, dropout keep probability {50%, 70%, 80%, 90%, 95%} and peepholes {with, without}. For comparison we also tried a deterministic architecture whereby, instead of outputting a probability distribution, the network outputs a direct prediction for xi+1. As expected, the network was unable to learn this function, and all sequences of virtual targets synthesized with this method simply travel in a repeating zig-zag line.

Training. We use a form of Truncated Backpropagation Through Time (BPTT) (Sutskever, 2013) whereby we segment long sequences into overlapping segments of maximum length n. In this case long-term dependencies greater than length n are lost; however, with enough overlap the network can effectively learn a sliding window of length n timesteps. We shuffle our training data and reset the internal state after each sequence. We empirically found an overlap factor of 50% to perform well, though further studies are needed to confirm the sensitivity of this figure. We use dynamic unrolling of the RNN, whereby the number of timesteps to unroll is not fixed at compile time in the architecture of the network, but determined dynamically during training, allowing variable length sequences. We also experimented with repeating sequences that were shorter than the maximum sequence length n in order to pad them to length n. We found that for our case they produced desirable results, with some side effects which we discuss in later sections. We split our dataset into training: 70%, validation: 20% and test: 10%, and use the Adam optimizer (Kingma & Ba, 2014) with the recommended hyperparameters. To prevent exploding gradients we clip gradients by their global L2 norm as described in Pascanu et al. (2013). We tried thresholds of both 5 and 10, and found 5 to provide more stability. We formulate the loss function J to minimise the Negative Log Likelihood as described in Section C.2, using the probability density functions described in eqn. (12) and eqn. (19).
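Complementing the density construction of Section C.1, sampling a displacement from the predicted mixture (as needed when generating sequences) can be sketched as:

```python
import numpy as np

def sample_gmm(mu, sigma, rho, pi, rng=None):
    """Draw one sample from a bivariate GMM parameterised as in eqn. (11)."""
    rng = rng or np.random.default_rng()
    k = rng.choice(len(pi), p=pi)        # pick a mixture component
    c = rho[k] * sigma[k, 0] * sigma[k, 1]
    cov = np.array([[sigma[k, 0] ** 2, c],
                    [c, sigma[k, 1] ** 2]])
    return rng.multivariate_normal(mu[k], cov)
```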
C.4 V2D MODEL

Input. The input to this network at each timestep i is identical to that of the V2V-model: xi ∈ IR3, where the first two elements are ∆vi (the normalised relative position displacement for the i'th stroke), and ui ∈ {0, 1} (the pen state during the same stroke). Given input xi and its current internal state (ci, hi), the network learns to predict the dynamic parameters (∆t0i, θi) for the current stroke i, by learning the parameters for Pr(∆t0i, θi | xi, ci, hi). Again with an abuse of notation, this can be expressed more intuitively as Pr(∆t0i, θi | xi, xi−1, ..., xi−n), where n is the maximum sequence length.

Output. We express the predicted probability of the dynamic parameters (∆t0i, θi) as a bivariate GMM as described in Section C.1.

Architecture. We explored very similar architectures and hyperparameters to the V2V-model, but found that we achieved much better results with a shorter maximum sequence length. We trained a number of models with a variety of sequence lengths {3, ..., 8, 13, 16, 21, 32}.

Training. We use the same procedure for training as the V2V-model.

C.5 A2D MODEL

Input. The input to this network, xi ∈ IR5, at each timestep i is slightly different from that of the V2V and V2D models. As in the V2V and V2D models, the first two elements are ∆vi (the normalised relative position displacement for the i'th stroke), and the third element is ui ∈ {0, 1} (the pen state during the same stroke). However, in this case the final two elements are the dynamic parameters for the previous stroke (∆t0i−1, θi−1), normalized to zero mean and unit standard deviation. Given input xi and its current internal state (ci, hi), the network learns to predict the dynamic parameters (∆t0i, θi) for the current stroke i, by learning the parameters for Pr(∆t0i, θi | xi, ci, hi). Again with an abuse of notation, this can be expressed more intuitively as Pr(∆t0i, θi | xi, xi−1, ..., xi−n), where n is the maximum sequence length.

Output. The output of this network is identical to that of the V2D model.

Architecture. We explored very similar architectures and hyperparameters to the V2D model.

Training. We use the same procedure for training as the V2V-model.

C.6 MODEL SELECTION

We evaluated and batch rendered the outputs of many different architectures and models at different training epochs, and settled on models which were amongst those with the lowest validation error but also produced visibly more desirable results. Once we had picked the models, the results displayed were not cherry-picked. The preprocessed IAM dataset contains 12087 samples (8460 in the training set) with maximum sequence length 305, minimum 6, median 103 and mean 103.9. For the V2V/V2D/A2D models trained on the IAM database we settled on an architecture of 3 recurrent layers, each with size 512, a maximum sequence length of 128, 20 Gaussians, dropout keep probability of 80% and no peepholes. For the augmented one-shot learning models we used similar architectures, but found that 2 recurrent layers, each with size 256, were able to generalise better and produce more interesting results that captured the prime inputs without overfitting. For V2V we used L2 normalisation on the ∆vi input. We also tried a number of different methods for normalising and representing ∆vi at the input to the models: we first tried normalising the components individually to have zero mean and unit standard deviation; we also tried normalising uniformly on the L2 norm, again to have zero mean and unit standard deviation; finally, we tried normalised polar coordinates, both absolute and relative.
1. What is the main contribution of the paper in terms of sequence generation? 2. How does the proposed approach differ from existing methods in the field? 3. What are the strengths and weaknesses of the proposed method? 4. How does the reviewer assess the relevance of the chosen technologies and literature in the field? 5. What are the concerns regarding evaluation and comparison with other methods? 6. Is there any confusion or lack of clarity in the paper's content?
Review
Review The paper presents a method for sequence generation with a known method applied to features extracted from another existing method. The paper is heavily oriented towards the chosen technologies and lacks literature on sequence generation. In principle, rich literature on motion prediction for various applications could be relevant here. Recent models exist for sequence prediction (from primed inputs) for various applications, e.g. for skeleton data. These models learn complex motion w/o any pre-processing. Evaluation is a big concern. There is no quantitative evaluation. There is no comparison with other methods. I still wonder whether the intermediate representation (developed by Plamondon et al.) is useful in this context of a fully trained sequence generation model and whether the model could pick up the necessary transformations itself. This should be evaluated. Details: There are several typos and word omissions, which can be found by carefully rereading the paper. At the beginning of Section 3, it is still unclear what the application is. Prediction of dynamic parameters? What for? Section 3 should give a better motivation for the work. Concerning the following paragraph: "While such methods are superior for handwriting analysis and biometric purposes, we opt for a less precise method (Berio & Leymarie, 2015) that is less sensitive to sampling quality and is aimed at generating virtual target sequences that remain perceptually similar to the original trace." This method has not been explained. A paper should be self-contained. The authors mention that the "V2V-model is conditioned on (...)", but not enough details are given. Generally speaking, more effort could be made to make the paper more self-contained.
ICLR
Title Sequence generation with a physiologically plausible model of handwriting and Recurrent Mixture Density Networks

Abstract The purpose of this study is to explore the feasibility and potential benefits of using a physiologically plausible model of handwriting as a feature representation for sequence generation with recurrent mixture density networks. We build on recent results in handwriting prediction developed by Graves (2013), and we focus on generating sequences that possess the statistical and dynamic qualities of handwriting and calligraphic art forms. Rather than model raw sequence data, we first preprocess and reconstruct the input training data with a concise representation given by a motor plan (in the form of a coarse sequence of ‘ballistic’ targets) and corresponding dynamic parameters (which define the velocity and curvature of the pen-tip trajectory). This representation provides a number of advantages, such as enabling the system to learn from very few examples by introducing artificial variability in the training data, and mixing of visual and dynamic qualities learned from different datasets.

1 INTRODUCTION Recent results (Graves, 2013) have demonstrated that, given a sufficiently large training data-set, Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) Recurrent Mixture Density Networks (RMDNs) (Schuster, 1999) are capable of learning and generating convincing synthetic handwriting sequences. In this study we explore a similar network architecture combined with an intermediate feature representation, given by the parameters of a physiologically plausible model of handwriting: the Sigma Lognormal model (Plamondon, 1995; Plamondon et al., 2014). In the work by Graves (2013) and subsequent derivations, the RMDN operates on raw sequences of points recorded with a digitizing device. In our approach we preprocess the training data using an intermediate representation that describes a form of “motor program” coupled with a sequence of dynamic parameters that describe the evolution of the pen tip. By doing so, we use a representation that is more concise (i.e. lower in dimensionality), meaningful (i.e. every data point is a high level segment descriptor of the trajectory), and resolution independent. This project stems from the observation that human handwriting results from the orchestration of a large number of motor and neural subsystems, and is ultimately produced with the execution of complex and skillful motions. As such we seek a representation that abstracts the complex task of trajectory formation from the neural network, which is then rather focused on a higher level task of movement planning. Note that for the scope of this study, we do not implement text-to-handwriting synthesis (Graves, 2013), but rather focus on the task of generating sequences that possess the statistical and dynamic qualities of handwriting, which can be expanded to calligraphy, asemic handwriting, drawings and graffiti (Berio & Leymarie, 2015; Berio et al., 2016). In particular, we focus on two distinct tasks: (1) learning and generating motor plans and (2) given a motor plan, predicting the corresponding dynamic parameters that determine the visual and dynamic qualities of the pen trace.
We then go on to show that this modular workflow can be exploited in ways such as: mixing of dynamic qualities between datasets (a form of handwriting "style transfer") as well as learning from small datasets (a form of "one-shot learning"). The remainder of this paper is organised as follows: in Section 2, after briefly summarising the background context, we briefly describe the Sigma Lognormal model and RMDNs; in Section 3 we present the data preprocessing step and the RMDN models that build up our system; in Section 4 we propose various applications of the system, including learning handwriting representations from small datasets and mixing styles.

2 BACKGROUND

Our study is grounded on a number of notions and principles that have been observed in the general study of human movement as well as in the handwriting synthesis/analysis field (known as Graphonomics (Kao et al., 1986)). The speed profile of aiming movements is typically characterised by a "bell shape" that is variably skewed depending on the rapidity of the movement (Lestienne, 1979; Nagasaki, 1989; Plamondon et al., 2013). Complex movements can be described by the superimposition of a discrete number of "ballistic" units of motion, which in turn can each be represented by the classic bell-shaped velocity profile and are often referred to as strokes. A number of methods synthesise handwriting through the temporal superimposition of strokes, the velocity profile of which is modelled with a variety of functions, including sinusoidal functions (Morasso & Mussa Ivaldi, 1982; Maarse, 1987; Rosenbaum et al., 1995), Beta functions (Lee & Cho, 1998; Bezine et al., 2004), and lognormals (Plamondon et al., 2009). In this study we rely on a family of models known as the Kinematic Theory of Rapid Human Movements, which has been developed by Plamondon et al. in an extensive body of work since the 1990s (Plamondon, 1995; Plamondon et al., 2014). Plamondon et al. (2003) show that if we consider that a movement is the result of the parallel and hierarchical interaction of a large number of coupled linear systems, the impulse response of such a system to a centrally generated command asymptotically converges to a lognormal function. This assumption is attractive from a modelling perspective because it abstracts the high complexity of the neuromuscular system in charge of generating movements with a relatively simple mathematical model, which further provides state-of-the-art reconstruction of human velocity data (Rohrer & Hogan, 2006; Plamondon et al., 2013). A number of methods have used neurally inspired approaches for the task of handwriting trajectory formation (Schomaker, 1992; Bullock et al., 1993; Wada & Kawato, 1993). Similarly to our proposed method, Ltaief et al. (2012) train a neural network on a preprocessed dataset where the raw input data is reconstructed in the form of handwriting model parameters. Nair & Hinton (2005) use a sequence of neural networks to learn the motion of two orthogonal mass-spring systems from images of handwritten digits for classification purposes. With a similar motivation to ours, Plamondon & Privitera (1996) use a Self Organising Map (SOM) to learn a sequence of ballistic targets, which describes a coarse motor plan of handwriting trajectories.
Our method builds in particular on the work of Graves (2013), who describes a system that uses recurrent mixture density networks (RMDNs) (Bishop, 1994) extended with an LSTM architecture (Hochreiter & Schmidhuber, 1997) to generate synthetic handwriting in a variety of styles.

2.1 SIGMA LOGNORMAL MODEL

On the basis of Plamondon's Kinematic Theory (Plamondon, 1995), the Sigma Lognormal (ΣΛ) model (Plamondon & Djioua, 2006) describes complex handwriting trajectories via the vectorial superimposition of a discrete number of strokes. With the assumption that curved handwriting movements are done by rotating the wrist, the curvilinear evolution of strokes is described with a circular arc shape. Each stroke is characterised by a variably asymmetric "bell shape" speed profile, which is described with a (3-parameter) lognormal function. The planar evolution of a trajectory is then described by a sequence of virtual targets {v_i}_{i=1}^{m}, which define "imaginary" (i.e. not necessarily located along the generated trajectory) loci at which each consecutive stroke is aimed. The virtual targets provide a low-level description of the motor plan for the handwriting trajectory. A smooth trajectory is then generated by integrating the velocity of each stroke over time. The trajectory smoothness can be defined by adjusting the activation-time offset of a given stroke with respect to the previous stroke, which is denoted with ∆t0i; a smaller time offset (i.e. a greater overlap between lognormal components) will result in a smoother trajectory (Fig. 1, c). The curvature of the trajectory can be varied by adjusting the central angle of each circular arc, which is denoted with θi. Equations and further details for the ΣΛ model can be found in Appendix A. A sequence of virtual targets provides a very sparse spatial description or "motor plan" for the trajectory evolution. The remaining stroke parameters, ∆t0i and θi, define the temporal, dynamic and geometric features of the trajectory, and we refer to those as dynamic parameters.

2.2 RECURRENT MIXTURE DENSITY NETWORKS

Mixture Density Networks (MDNs) were introduced by Bishop (1994) in order to model and predict the parameters of a Gaussian Mixture Model (GMM), i.e. a set of means, covariances and mixture weights. Schuster (1999) showed that MDNs could be used to model temporal data with RNNs. The author used Recurrent Mixture Density Networks (RMDNs) to model the statistical properties of speech, and they were found to be more successful than traditional GMMs. Graves (2013) used LSTM RMDNs to model and synthesise online handwriting, providing the basis for extensions to the method, also used in Ha et al. (2016); Zhang et al. (2016). Note that in the case of a sequential model, the RMDN outputs a unique set of GMM parameters for each timestep t, allowing the probability distribution to change with time as the input sequence develops. Further details can be found in Appendix C.1.

3 METHOD

We operate on discrete and temporally ordered sequences of planar coordinates. Similarly to Graves (2013), most of our results come from experiments made on the IAM online handwriting database (Marti & Bunke, 2002). However, we have made preliminary experiments with other datasets, such as the Graffiti Analysis Database (Lab, 2009) as well as limited samples collected in our laboratory from a user with a digitiser tablet. As a first step, we preprocess the raw data and reconstruct it in the form of ΣΛ model parameters (Section 3.1).
We then train and evaluate a number of RMDN models for two distinct tasks:

1. Virtual target prediction. We use the V2V-model for this task. Given a sequence of virtual targets, this model predicts the next virtual target.
2. Dynamic parameter prediction. For this task we trained and compared two model architectures. Given a sequence of virtual targets, the task of these models is to predict the corresponding dynamic parameters. The V2D-model is conditioned only on the previous virtual targets, whereas the A2D-model is conditioned on both the previous virtual targets and dynamic parameters.

We then exploit the modularity of this system to conduct various experiments, details of which can be found in Section 4.

3.1 PREPROCESSING: RECONSTRUCTING ΣΛ PARAMETERS

A number of methods have been developed by Plamondon et al. in order to reconstruct ΣΛ-model parameters from digitised pen input data (O'Reilly & Plamondon, 2008; Plamondon et al., 2014; Fischer et al., 2014). These methods provide the ideal reconstruction of model parameters, given a high-resolution digitised pen trace. While such methods are superior for handwriting analysis and biometric purposes, we opt for a less precise method (Berio & Leymarie, 2015) that is less sensitive to sampling quality and is aimed at generating virtual target sequences that remain perceptually similar to the original trace. We purposely choose to ignore the original dynamics of the input, and base the method on geometric input data only. This is done in order to work with training sequences that are independent of sampling rate, and with a view to future developments in which we intend to extract handwriting traces from bitmaps, inferring causal/dynamic information from a static input as humans are capable of (Edelman & Flash, 1987; Freedberg & Gallese, 2007). Our method operates on a uniformly sampled input contour, which is then segmented in correspondence with perceptually salient key points: loci of curvature extrema modulated by neighbouring contour segments (Brault & Plamondon, 1993; Berio & Leymarie, 2015), which gives an initial estimate of each virtual target vi. We then (i) fit a circular arc to each contour segment in order to estimate the θi parameters, (ii) estimate the ∆t0i parameters by analysing the contour curvature in the region of each key point, and (iii) iteratively adjust the virtual target positions to minimise the error between the original trajectory and the one generated by the corresponding ΣΛ parameters. For further details on the ΣΛ parameter reconstruction method, the reader is referred to Appendix B.

3.2 DATA AUGMENTATION

We can exploit the ΣΛ parameterisation to generate many variations over a single trajectory, which are visually consistent with the original, and with a variability that is similar to the one that would be seen in multiple instances of handwriting made by the same writer (Fig. 3) (Djioua & Plamondon, 2008a; Fischer et al., 2014; Berio & Leymarie, 2015). Given a dataset of n training samples, we randomly perturb the virtual target positions and dynamic parameters of each sample n_p times, which results in a new augmented dataset of size n + n × n_p, where legibility and trajectory smoothness are maintained across samples. This would not be possible on the raw online dataset, as perturbations for each data-point would eventually result in a noisy trajectory.
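To make this step concrete, the following is a minimal NumPy sketch of such a perturbation-based augmentation; the field names and noise scales are illustrative assumptions on our part rather than values prescribed by the method:

```python
import numpy as np

def augment(sample, n_p, pos_std=0.03, dyn_std=0.05, rng=np.random):
    """Produce n_p perturbed variations of one preprocessed sample.

    `sample` is assumed to be a dict with 'vt' ((m, 2) virtual targets) and
    'dt0', 'theta' ((m-1,) dynamic parameters); noise scales are illustrative."""
    extent = np.ptp(sample['vt'], axis=0).max()   # perturb relative to trace size
    variations = []
    for _ in range(n_p):
        variations.append({
            'vt': sample['vt'] + rng.normal(0.0, pos_std * extent, sample['vt'].shape),
            'dt0': sample['dt0'] * (1.0 + rng.normal(0.0, dyn_std, sample['dt0'].shape)),
            'theta': sample['theta'] + rng.normal(0.0, dyn_std, sample['theta'].shape),
        })
    return variations

# A dataset of n samples becomes one of size n + n * n_p:
# augmented = dataset + [v for s in dataset for v in augment(s, n_p=10)]
```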
3.3 PREDICTING VIRTUAL TARGETS WITH THE V2V-MODEL

The V2V-model is conditioned on a history of virtual targets and, given a new virtual target, it predicts the next virtual target (hence the name V2V). Note that each virtual target includes the corresponding pen state — up (not touching the paper) or down (touching the paper). Repeatedly feeding the predicted virtual target back into the model at every timestep allows the model to synthesise sequences of arbitrary length. The implementation of this model is very similar to the handwriting prediction demonstrated by Graves (2013), although instead of operating directly on the digitised pen positions, we operate on the much coarser virtual target sequences which are extracted during the preprocessing step. The details of this model can be found in Appendix C.3.

3.4 PREDICTING DYNAMIC PARAMETERS WITH THE V2D AND A2D MODELS

The goal of these models is to predict the corresponding dynamic parameters (∆t0i, θi) for a given sequence of virtual targets. We train and compare two model architectures for this task. The V2D-model is conditioned on the history of virtual targets, and given a new virtual target, this model predicts the corresponding dynamic parameters (∆t0i, θi) for the current stroke (hence the name V2D). Running this model incrementally for every stroke of a given virtual target sequence allows us to predict dynamic parameters for each stroke. The implementation of this model is very similar to the V2V-model, and details can be found in Appendix C.4. At each timestep, the V2D model outputs and maintains internal memory of a probability distribution for the predicted dynamic parameters. However, the network has no knowledge of the parameters that are sampled and used. Hence, dynamic parameters might not be consistent across timesteps. This problem can be overcome by feeding the sampled dynamic parameters back into the model at the next timestep. From a human motor planning perspective this makes sense: for a given drawing style, when we decide the curvature and smoothness of a stroke, we take into consideration the choices made in previously executed strokes. The A2D model predicts the corresponding dynamic parameters (∆t0i, θi) for the current stroke conditioned on a history of both virtual targets and dynamic parameters (i.e. all ΣΛ parameters - hence the name A2D). We use this model in a similar way to the V2D model, whereby we run it incrementally for every stroke of a given virtual target sequence. However, internally, at every timestep the predicted dynamic parameters are fed back into the model at the next timestep, along with the virtual target from the given sequence. The details of this implementation can be found in Appendix C.5.
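Before moving to the experiments, the following sketch illustrates the autoregressive sampling loop shared by these models; the `model.reset`/`model.step` interface and the sampling details are hypothetical stand-ins for the RMDN described in Appendix C:

```python
import numpy as np

def sample_sequence(model, prime, n_steps, rng=np.random):
    """Autoregressive sampling loop shared by the V2V/V2D/A2D models.

    `model.step(x, state)` is a hypothetical interface returning GMM
    parameters (pi, mu, cov) and a pen-down probability e for the next
    timestep, plus the updated recurrent state."""
    state = model.reset()
    for x in prime:                       # condition the network on the prime
        _, state = model.step(x, state)
    seq, x = [], prime[-1]
    for _ in range(n_steps):
        (pi, mu, cov, e), state = model.step(x, state)
        k = rng.choice(len(pi), p=pi)     # pick a mixture component
        dv = rng.multivariate_normal(mu[k], cov[k])
        pen = float(rng.random() < e)     # Bernoulli pen state
        x = np.append(dv, pen)            # feed the sample back in
        seq.append(x)
    return np.array(seq)
```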
4 EXPERIMENTS AND RESULTS

Predicting Virtual Targets. In a first experiment we use the V2V model, trained on the preprocessed IAM dataset, to predict sequences of virtual targets. We prime the network by first feeding it a sequence from the test dataset. This conditions the network to predict sequences that are similar to the prime. We can see from the results (Fig. 4) that the network is indeed able to produce sequences that capture the statistical qualities of the priming sequence, such as overall incline, proportions, and oscillation frequency. On the other hand, we observe that amongst the generated sequences there are often patterns which do not represent recognisable letters or words. This can be explained by the high variability of samples contained in the IAM dataset, and by the fact that our representation is very concise, with each data-point carrying high significance. As a result, the slightest variation in a prediction is likely to cause a large error in the next. To overcome this problem, we train a new model with a dataset augmented with 10× variations as described in Section 3.2. Due to our limited computing resources (we are thus not able to thoroughly test the large network architectures that would be necessary to train on the whole augmented dataset), we test this method on 1/10th of the dataset, which results in a new dataset with the same size as the original, but containing fewer handwriting specimens with a number of subtle variations per specimen. With this approach, the network predictions maintain statistical similarity with the priming sequences, and patterns emerge that are more evocative of letters of the alphabet or whole words, with fewer unrecognisable patterns (Fig. 4). To validate this result, we also test the model's performance by training it on 1/10th of the dataset without data augmentation, and the results are clearly inferior to the previous two models. This suggests that the data augmentation step is highly beneficial to the performance of the network.

Predicting Dynamic Parameters. We first evaluate the performance of both the V2D and A2D models on virtual targets extracted from the test set. Remarkably, although the networks have not been trained on these sequences, both models predict dynamic parameters that result in trajectories that are readable, and often similar to the target sample. We settle on the A2D model trained on a 3× augmented dataset, which we qualitatively assess to produce the best results (Fig. 5). We then proceed with applying the same A2D model to virtual targets generated by the V2V models primed on the test set. We observe that the predictions on sequences generated with the augmented dataset are highly evocative of handwriting and are clearly different depending on the priming sequence (Fig. 6, c), while the predictions made with the non-augmented dataset are more likely to resemble random scribbles than human-readable handwriting (Fig. 6, b). This further confirms the utility of the data augmentation step.

User defined virtual targets. The dynamic parameter prediction models can also be used in combination with user-defined virtual target sequences (Fig. 7). Such a method can be used to quickly and interactively generate handwriting trajectories in a given style, by a simple point-and-click procedure. The style (in terms of curvature and dynamics) of the generated trajectory is determined by the data used to train the A2D model, and by priming the A2D model with different samples, we can apply different styles to the user-defined virtual targets.

One shot learning. In a subsequent experiment, we apply the data augmentation method described in Section 3.2 to enable both virtual target and dynamic prediction models to learn from a small dataset of calligraphic samples recorded by a user with a digitiser tablet. We observe that with a low number of augmentations (50×) the models generate quasi-random outputs, and seem to learn only the left-to-right trend of the input. With higher augmentation (700×), the system generates outputs that are consistent to the human eye with the input data (Fig. 8).
We also train our models using only a single sample (augmented 7000×) and again observe that the model is able to reproduce novel sequences that are similar to the input sample (Fig. 9). Naturally, the output is a form of recombination of the input, but this is sufficient to synthesise novel outputs that are qualitatively similar to the input. It should be noted that we are judging the performance of the one-shot learned models qualitatively, and we may not be testing the full limits of how well the models are able to generalise. On the other hand, these results, as well as the "style transfer" capabilities presented in the following section, suggest a certain degree of generalisation.

Style Transfer. Here, with a slight abuse of terminology, we use the term "style" to refer to the dynamic and geometric features (such as pen-tip acceleration and curvature) that determine the visual qualities of a handwriting trajectory. Given a sequence of virtual targets generated with the V2V model trained on one dataset, we can also predict the corresponding dynamic parameters with the A2D model trained on another. The result is an output that is similar to one dataset in lettering structure, but possesses the fine dynamic and geometric features of the other. If we visually inspect Fig. 10, we can see that both the sequence of virtual targets reconstructed by the dataset preprocessing method, and the trajectory generated over the same sequence of virtual targets with dynamic parameters learned from a different dataset, are readable. This emphasises the importance of using perceptually salient points along the input for estimating key-points in the dataset preprocessing step (Section 3.1). Furthermore, we can perform the same type of operation within a single dataset, by priming the A2D model with the dynamic parameters of a particular training example, while feeding it with the virtual targets of another. To test this we train both (V2V, A2D) models on a corpus containing 5 samples of the same sentence written in different styles and then augmented 1400× (Fig. 11). We envision the utility of such a system in combination with virtual targets interactively specified by a user.

5 CONCLUSIONS AND FUTURE WORK

We have presented a system that is able to learn the parameters for a physiologically plausible model of handwriting from an online dataset. We hypothesise that such a movement-centric approach is advantageous as a feature representation for a number of reasons. Using such a representation provides a performance that is similar to the handwriting prediction demonstrated by Graves (2013) and Ha et al. (2016), with a number of additional benefits. These include the ability to: (i) capture both the geometry and dynamics of a hand drawn/written trace with a single representation, (ii) express the variability of different types of movement concisely at the feature level, (iii) demonstrate greater flexibility for procedural manipulations of the output, (iv) mix "styles" (applying curvature and dynamic properties from one example to the motor plan of another), (v) learn a generative model from a small number of samples (n < 5), and (vi) generate resolution-independent outputs. The reported work provides a solid basis for a number of different future research avenues. As a first extension, we plan to implement the label/text input alignment method described in Graves' original work, which should allow us to synthesise readable handwritten text and also to provide a more thorough comparison of the two methods.
Our method strongly relies on an accurate reconstruction of the input in the preprocessing step. Improvements should especially target the parts of the latter method that depend on user-tuned parameters, such as the identification of salient points along the input (which requires a final peak-detection pass), and the measurement of the sharpness of the input in correspondence with salient points.

ACKNOWLEDGMENTS

The system takes as a starting point the original work developed by Graves (2013). We use Tensorflow, the open-source software library for numerical computation and deep learning (Abadi et al., 2015), and a rapid implementation was possible thanks to a public-domain implementation developed by Ha (2015).

A SIGMA LOGNORMAL MODEL

The Sigma Lognormal model (Plamondon & Djioua, 2006) describes complex handwriting trajectories via the vectorial superimposition of lognormal strokes. The corresponding speed profile Λ_i(t) assumes a variably asymmetric "bell shape", described with a 3-parameter lognormal function

Λ_i(t) = 1 / (σ_i √(2π) (t − t_{0i})) · exp( −(ln(t − t_{0i}) − µ_i)² / (2σ_i²) ),   (1)

where t_{0i} defines the activation time of a stroke, and the parameters µ_i and σ_i determine the shape of the lognormal function. µ_i is referred to as log-time delay and is biologically interpreted as the rapidity of the neuromuscular system to react to an impulse generated by the central nervous system (Plamondon et al., 2003); σ_i is referred to as log-response time and determines the spread and asymmetry of the lognormal. The curvilinear evolution of strokes is described with a circular arc shape, which results in

φ_i(t) = θ_i + θ_i [ 1 + erf( (ln(t − t_{0i}) − µ_i) / (σ_i √2) ) ],   (2)

where θ_i is the central angle of the circular arc that defines the shape of the i-th stroke. The planar evolution of a trajectory is defined by a sequence of virtual targets {v_i}_{i=1}^{m}, where a trajectory with m virtual targets will be characterised by m − 1 circular-arc strokes. A ΣΛ trajectory, parameterised by the virtual target positions, is given by

ξ(t) = v_1 + ∫_0^t Σ_{i=1}^{m−1} Λ_i(τ) Φ_i(τ) (v_{i+1} − v_i) dτ,   (3)

with

Φ_i(t) = h(θ_i) [ cos φ_i(t), −sin φ_i(t) ; sin φ_i(t), cos φ_i(t) ],  and  h(θ_i) = θ_i / sin θ_i if |sin θ_i| > 0, 1 otherwise,   (4)

which scales the extent of the stroke based on the ratio between the perimeter and the chord length of the circular arc.

Intermediate parameterisation. In order to facilitate the precise specification of timing and profile shape of each stroke, we recur to an intermediate parameterisation that takes advantage of a few known properties of the lognormal (Djioua & Plamondon, 2008b) in order to define each stroke with (i) a time offset ∆t_i with respect to the previous stroke, (ii) a stroke duration T_i, and (iii) a shape parameter α_i, which defines the skewness of the lognormal. The corresponding ΣΛ parameters {t_{0i}, µ_i, σ_i} can then be computed with:

σ_i = ln(1 + α_i),   (5)

µ_i = −ln( (e^{3σ_i} − e^{−3σ_i}) / T_i ),   (6)

and

t_{0i} = t_{1i} − e^{µ_i − 3σ_i},  t_{1i} = t_{1(i−1)} + ∆t_i,  t_{1(0)} = 0,   (7)

where t_{1i} is the onset time of the lognormal stroke profile. As α approaches 0, the shape of the lognormal converges to a Gaussian, with mean t_1 + e^{µ−σ²} (the mode of the lognormal) and standard deviation T/6.
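A minimal NumPy sketch of the synthesis equations (1)-(4) may help make the model concrete; the function names and the time-stepping scheme are our own choices, and a production implementation would integrate more carefully:

```python
import numpy as np
from scipy.special import erf

def lognormal_speed(t, t0, mu, sigma):
    """Eq. (1): lognormal speed profile of one stroke (zero for t <= t0)."""
    out = np.zeros(len(t))
    m = t > t0
    dt = t[m] - t0
    out[m] = np.exp(-(np.log(dt) - mu) ** 2 / (2 * sigma ** 2)) \
             / (sigma * np.sqrt(2 * np.pi) * dt)
    return out

def stroke_angle(t, t0, mu, sigma, theta):
    """Eq. (2): running direction of one circular-arc stroke."""
    phi = np.full(len(t), theta, dtype=float)
    m = t > t0
    phi[m] = theta + theta * (1 + erf((np.log(t[m] - t0) - mu) / (sigma * np.sqrt(2))))
    return phi

def sigma_lognormal(vt, t0, mu, sigma, theta, t):
    """Eqs. (3)-(4): integrate the stroke velocities into a planar trajectory.
    vt: (m, 2) virtual targets; the other arguments are per-stroke parameter
    arrays of length m - 1; t is a dense, increasing time grid."""
    xi = np.tile(vt[0].astype(float), (len(t), 1))
    dt = np.gradient(t)
    for i in range(len(vt) - 1):
        d = vt[i + 1] - vt[i]
        h = theta[i] / np.sin(theta[i]) if abs(np.sin(theta[i])) > 1e-9 else 1.0
        lam = lognormal_speed(t, t0[i], mu[i], sigma[i])
        phi = stroke_angle(t, t0[i], mu[i], sigma[i], theta[i])
        vx = h * (np.cos(phi) * d[0] - np.sin(phi) * d[1]) * lam  # Phi_i (v_{i+1} - v_i)
        vy = h * (np.sin(phi) * d[0] + np.cos(phi) * d[1]) * lam
        xi[:, 0] += np.cumsum(vx * dt)
        xi[:, 1] += np.cumsum(vy * dt)
    return xi
```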
B RECONSTRUCTING ΣΛ PARAMETERS FROM AN ONLINE DATASET

The ΣΛ parameter reconstruction method operates on an input contour uniformly sampled at a fixed distance, which is defined depending on the extent of the input; we denote the kth sampled point along the input with p[k]. The input contour is then segmented in correspondence with perceptually salient key points, which correspond with loci of curvature extrema modulated by neighbouring contour segments (Brault & Plamondon, 1993; Berio & Leymarie, 2015). The proposed approach shares strong similarities with previous work done (i) for compressing online handwriting data with a circular-arc based segmentation (Li et al., 1998) and (ii) for generating synthetic data for handwriting recognisers (Varga et al., 2005). The parameter reconstruction algorithm can be summarised with the following steps:

• Find m key-points in the input contour.
• Fit a circular arc to each contour segment defined between two consecutive key-points (defining individual strokes), and obtain an estimate of each curvature parameter θi.
• For each stroke, compute the corresponding ∆ti parameter by analysing the curvature signal in the region of the corresponding key-point.
• Define an initial sequence of virtual targets with m positions corresponding with each input key-point.
• Repeat the following until convergence or until a maximum number of iterations is reached (Berio & Leymarie, 2015):
  – Integrate the ΣΛ trajectory with the current parameter estimate.
  – Identify m key-points in the generated trajectory.
  – Move the virtual target positions to minimise the distance between the key-points of the generated trajectory and the key-points on the input contour.

The details for each step are highlighted in the following paragraphs.

Estimating input key-points. Finding significant curvature extrema (which can be counted as convex and concave features for a closed/solid shape) is an active area of research, as relying on discrete curvature measurements remains challenging. We currently rely on a method described by Feldman & Singh (2005), and supported experimentally by De Winter & Wagemans (2008): first we measure the turning angle at each position of the input p[k], and then compute a smooth version of the signal by convolving it with a Hanning window. We assume that the turning angles have been generated by a random process with a Von Mises distribution with mean at 0 degrees, which corresponds with giving maximum probability to a straight line. We then measure the surprisal (i.e. the negative logarithm of the probability) for each sample as defined by Feldman & Singh (2005), which, normalised to the [0, 1] range, simplifies to

1 − cos(θ[k]),   (8)

where θ[k] is the (smoothed) turning angle. The first and last sample indices of the surprisal signal, together with its local maxima, result in m key-point indices {ẑi}. The corresponding key-points along the input contour are then given by {p[ẑi]}.

Estimating stroke curvature parameters. For each section of the input contour defined between two consecutive key-points, we estimate the corresponding stroke curvature parameter θi by first computing a least-squares fit of a circle to the contour section. We then compute the internal angle of the arc supported between the two key-points, which is equal to 2θi, i.e. two times the corresponding curvature parameter θi.
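A compact sketch of these two estimation steps, assuming a uniformly resampled contour p; the window size and the circle-fitting details are illustrative assumptions:

```python
import numpy as np

def keypoints(p, win=11):
    """Salient key-points via the turning-angle surprisal of eq. (8).
    p: (n, 2) uniformly resampled contour; win is an assumed default."""
    v = np.diff(p, axis=0)
    ang = np.unwrap(np.arctan2(v[:, 1], v[:, 0]))
    turn = np.diff(ang)                                  # turning angle at p[1:-1]
    k = np.hanning(win); k = k / k.sum()
    s = 1.0 - np.cos(np.convolve(turn, k, mode='same'))  # normalised surprisal
    peaks = np.flatnonzero((s[1:-1] >= s[:-2]) & (s[1:-1] > s[2:])) + 1
    return np.r_[0, peaks + 1, len(p) - 1], s            # indices into p

def arc_theta(seg):
    """Curvature parameter theta_i: least-squares circle fit to one contour
    segment; the internal angle of the supported arc equals 2 * theta_i."""
    x, y = seg[:, 0], seg[:, 1]
    A = np.c_[2 * x, 2 * y, np.ones(len(seg))]
    cx, cy, c = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)[0]
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    chord = np.linalg.norm(seg[-1] - seg[0])
    return np.arcsin(np.clip(chord / (2 * r), -1.0, 1.0))
```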
Estimating stroke time-overlap parameters. This step is based on the observation that smaller values of ∆t0i, i.e. a greater time overlap between strokes, result in smoother trajectories. On the contrary, a sufficiently large value of ∆t0i will result in a sharp corner in proximity of the corresponding virtual target. We exploit this notion, and compute an estimate of the ∆t0i parameters by examining the sharpness of the input contour in the region of each key-point. To do so, we examine the previously computed turning-angle surprisal signal, in which we can observe that sharp corners in the contour correspond with sharper peaks, while smoother corners correspond with smooth peaks with a larger spread. By treating the surprisal signal as a probability density function, we can then use statistical methods to measure the shape of each peak with a mixture of parametric distributions, and examine the shape of each mixture component in order to get an estimate of the corresponding sharpness along the input contour. To do so, we employ a variant of Expectation Maximisation (EM) (Dempster et al., 1977) in which we treat the distance along the contour as a random variable weighted by the corresponding signal amplitude normalised to the [0, 1] range. Once the EM algorithm has converged, we treat each mixture component as a radial basis function (RBF) centred at the corresponding mean, and use linear regression as in Radial Basis Function Networks (Stulp & Sigaud, 2015) to fit the mixture parameters to the original signal (Calinon, 2016). Finally, we generate an estimate of sharpness λi (bounded in the [0, 1] range) for each key-point using a logarithmic function of the mixture parameters and weights. The corresponding ∆t0i parameters are then given by

∆t_i = ∆t_min + (∆t_max − ∆t_min) λ_i,   (9)

where ∆t_min and ∆t_max are user-specified parameters that determine the range of the ∆t0i estimates. Note that we currently utilise an empirically defined function for this task. In future steps, however, we intend to learn the mapping between sharpness and mixture component parameters from synthetic samples generated with the ΣΛ model (for which ∆t0i, and consequently λi, are known).

Iteratively estimating virtual target positions. The loci along the input contour corresponding with the estimated key-points provide an initial estimate for a sequence of virtual targets, where each virtual target position is given by vi = p[ẑi]. Due to the trajectory-smoothing effect produced by the time overlaps, the initial estimate will result in a generated trajectory that is likely to have a reduced scale with respect to the input we wish to reconstruct (Varga et al., 2005). In order to produce a more accurate reconstruction, we use an iterative method that shifts each virtual target towards a position that will minimise the error between the generated trajectory and the reconstructed input. To do so, we compute an estimate of m output key-points {ξ(zi)} in the generated trajectory, where z2, ..., zm are the time occurrences at which the influence of one stroke exceeds the previous. These correspond with salient points along the trajectory (extrema of curvature) and can be easily computed by finding the time occurrence at which two consecutive lognormals intersect. Similarly to the input key-point case, ξ(z1) and ξ(zm) respectively denote the first and last points of the generated trajectory. We then iteratively adjust the virtual target positions in order to move each generated key-point ξ(zi) towards the corresponding input key-point p[ẑi] with:

v_i ← v_i + p[ẑ_i] − ξ(z_i).   (10)

The iteration continues until the Mean Square Error (MSE) of the distances between every pair p[ẑi] and ξ(zi) is less than an experimentally set threshold, or until a maximum number of iterations is reached (Fig. 16).
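A compact sketch of this adjustment loop; `generate` and `find_keypoints` are hypothetical stand-ins for the ΣΛ integration and key-point detection steps described above:

```python
import numpy as np

def fit_virtual_targets(p_key, generate, find_keypoints, max_iter=20, tol=1e-3):
    """Iteratively shift the virtual targets so that the key-points of the
    generated trajectory match the input key-points, as in eq. (10).

    `generate` (virtual targets -> trajectory) and `find_keypoints`
    (trajectory -> m salient points) are stand-ins for the steps above."""
    vt = p_key.astype(float).copy()        # initial estimate: input key-points
    for _ in range(max_iter):
        err = p_key - find_keypoints(generate(vt))
        if np.mean(np.sum(err ** 2, axis=1)) < tol:    # MSE threshold
            break
        vt += err                          # v_i <- v_i + (p[z_i] - xi(z_i))
    return vt
```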
This method usually converges to a good reconstruction of the input within a few iterations (usually < 5). Interestingly, even though the dynamic information of the input is discarded, the reconstructed velocity profile is often similar to the original (in number of peaks and shape), which can be explained by the extensively studied relationships between the geometry and dynamics of movement trajectories (Viviani & Terzuolo, 1982; Lacquaniti et al., 1983; Viviani & Schneider, 1991; Flash & Handzel, 2007).

C RMDN MODEL DETAILS

In order to increase the expressive generative capabilities of our networks, we train them to model parametric probability distributions. Specifically, we use Recurrent Mixture Density Networks that output the parameters of a bivariate Gaussian Mixture Model.

C.1 BIVARIATE RECURRENT MIXTURE DENSITY NETWORK

If a target variable z_t can be expressed as a bivariate GMM, then for K Gaussians we can use a network architecture with output dimensions of 6K. This output vector would then consist of (µ̂_t ∈ ℝ^{2K}, σ̂_t ∈ ℝ^{2K}, ρ̂_t ∈ ℝ^{K}, π̂_t ∈ ℝ^{K}), which we use to calculate the parameters of the GMM via (Graves, 2013)

µ_t^k = µ̂_t^k : means for the k'th Gaussian, µ_t^k ∈ ℝ²
σ_t^k = exp(σ̂_t^k) : standard deviations for the k'th Gaussian, σ_t^k ∈ ℝ²
ρ_t^k = tanh(ρ̂_t^k) : correlations for the k'th Gaussian, ρ_t^k ∈ (−1, 1)
π_t^k = softmax(π̂_t^k) : mixture weight for the k'th Gaussian, Σ_k π_t^k = 1   (11)

We can then formulate the probability distribution function P_t at timestep t as

P_t = Σ_{k=1}^{K} π_t^k N(z_t | µ_t^k, σ_t^k, ρ_t^k),   (12)

where

N(x | µ, σ, ρ) = 1 / (2π σ_1 σ_2 √(1 − ρ²)) · exp[ −Z / (2(1 − ρ²)) ],   (13)

and

Z = (x_1 − µ_1)²/σ_1² + (x_2 − µ_2)²/σ_2² − 2ρ(x_1 − µ_1)(x_2 − µ_2)/(σ_1 σ_2).   (14)

C.2 TRAINING OBJECTIVE

If we let θ denote the parameters of a network, then, given a training set S of input–target pairs (x ∈ X, ŷ ∈ Ŷ), our training objective is to find the set of parameters θ_ML which has the maximum likelihood (ML). This is the θ that maximises the probability of the training set S, and is formulated as (Graves, 2008)

θ_ML = argmax_θ Pr(S | θ)   (15)
     = argmax_θ Π_{(x,ŷ)∈S} Pr(ŷ | x, θ).   (16)

Since the logarithm is a monotonic function, a common method for maximising this likelihood is minimising its negative logarithm, also known as the Negative Log Likelihood (NLL), Hamiltonian or surprisal (Lin & Tegmark, 2016). We can then define our cost function J as

J = −ln Π_{(x,ŷ)∈S} Pr(ŷ | x, θ)   (17)
  = −Σ_{(x,ŷ)∈S} ln Pr(ŷ | x, θ).   (18)

For a bivariate RMDN, the objective function can be formulated by substituting eqn. (12) in place of Pr(ŷ | x, θ) in eqn. (18).
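As an illustration of eqns. (11)-(14) and (18), the following is a minimal PyTorch-style sketch of turning raw network outputs into GMM parameters and evaluating the NLL; the (T, 6K) output layout is an assumption of ours:

```python
import math
import torch

def gmm_params(raw, K):
    """Split a (T, 6K) output vector into the GMM parameters of eq. (11).
    The ordering (means, log-sigmas, correlations, weights) is our assumption."""
    mu, log_sigma, rho_hat, pi_hat = torch.split(raw, [2 * K, 2 * K, K, K], dim=-1)
    return (mu.view(-1, K, 2), torch.exp(log_sigma).view(-1, K, 2),
            torch.tanh(rho_hat), torch.softmax(pi_hat, dim=-1))

def gmm_nll(z, mu, sigma, rho, pi, eps=1e-8):
    """Negative log likelihood of targets z (T, 2) under eqs. (12)-(14), (18)."""
    zn = (z.unsqueeze(1) - mu) / sigma                  # (T, K, 2) residuals
    Z = zn[..., 0] ** 2 + zn[..., 1] ** 2 - 2 * rho * zn[..., 0] * zn[..., 1]
    det = 1 - rho ** 2
    log_n = -Z / (2 * det) - torch.log(
        2 * math.pi * sigma[..., 0] * sigma[..., 1] * torch.sqrt(det) + eps)
    return -torch.logsumexp(torch.log(pi + eps) + log_n, dim=-1).sum()
```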
C.3 V2V MODEL

Input At each timestep i, the input to the V2V model is x_i ∈ ℝ³, where the first two elements are given by ∆v_i (the relative position displacement for the i'th stroke, i.e. between the i'th virtual target and the next), and the last element is u_i ∈ {0, 1} (the pen-up state during the same stroke). Given input x_i and its current internal state (c_i, h_i), the network learns to predict x_{i+1} by learning the parameters for the Probability Density Function (PDF) Pr(x_{i+1} | x_i, c_i, h_i). With a slight abuse of notation, this can be expressed more intuitively as Pr(x_{i+1} | x_i, x_{i−1}, ..., x_{i−n}), where n is the maximum sequence length.

Output We express the predicted probability of ∆v_i as a bivariate GMM as described in Section C.1, and u_i as a Bernoulli distribution. Thus, for K Gaussians the network has output dimensions of (6K + 1) which, in addition to eqn. (11), contains ê_i, which we use to calculate the pen state probability via (Graves, 2013)

e_i = 1 / (1 + exp(ê_i)), e_i ∈ (0, 1)   (19)

Architecture We use Long Short-Term Memory (Hochreiter & Schmidhuber, 1997) networks with input, output and forget gates (Gers et al., 2000), and we use Dropout regularisation as described by Pham et al. (2014). We employ both a grid search and a random search (Bergstra & Bengio, 2012) on various hyperparameters in the ranges: sequence length {64, 128}, number of hidden recurrent layers {1, 2, 3}, dimensions per hidden layer {64, 128, 256, 400, 512, 900, 1024}, number of Gaussians {5, 10, 20}, dropout keep probability {50%, 70%, 80%, 90%, 95%} and peepholes {with, without}. For comparison we also tried a deterministic architecture whereby, instead of outputting a probability distribution, the network outputs a direct prediction for x_{i+1}. As expected, the network was unable to learn this function, and all sequences of virtual targets synthesised with this method simply travel in a repeating zig-zag line.

Training We use a form of Truncated Backpropagation Through Time (BPTT) (Sutskever, 2013), whereby we segment long sequences into overlapping segments of maximum length n. In this case, long-term dependencies greater than length n are lost; however, with enough overlap the network can effectively learn a sliding window of length n timesteps. We shuffle our training data and reset the internal state after each sequence. We empirically found an overlap factor of 50% to perform well, though further studies are needed to confirm the sensitivity of this figure. We use dynamic unrolling of the RNN, whereby the number of timesteps to unroll to is not set at compile time in the architecture of the network, but unrolled dynamically while training, allowing variable-length sequences. We also experimented with repeating sequences which were shorter than the maximum sequence length n, to complete them to length n. We found that for our case they produced desirable results, with some side-effects which we discuss in later sections. We split our dataset into training: 70%, validation: 20% and test: 10%, and use the Adam optimizer (Kingma & Ba, 2014) with the recommended hyperparameters. To prevent exploding gradients we clip gradients by their global L2 norm, as described in (Pascanu et al., 2013). We tried thresholds of both 5 and 10, and found 5 to provide more stability. We formulate the loss function J to minimise the Negative Log Likelihood as described in Section C.2, using the probability density functions described in eqn. (12) and eqn. (19).
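A minimal sketch of the overlapping segmentation used in the truncated-BPTT scheme above; how the tail of a sequence is kept is an assumption of ours:

```python
def overlapping_segments(seq, n, overlap=0.5):
    """Split one long sequence into overlapping windows of length n for
    truncated BPTT (a 50% overlap worked well for us)."""
    step = max(1, int(n * (1 - overlap)))
    segs = [seq[i:i + n] for i in range(0, max(1, len(seq) - n + 1), step)]
    if len(seq) > n and (len(seq) - n) % step != 0:
        segs.append(seq[-n:])              # keep the trailing window
    return segs
```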
C.4 V2D MODEL

Input The input to this network at each timestep i is identical to that of the V2V-model: x_i ∈ ℝ³, where the first two elements are ∆v_i (the normalised relative position displacement for the i'th stroke) and u_i ∈ {0, 1} (the pen state during the same stroke). Given input x_i and its current internal state (c_i, h_i), the network learns to predict the dynamic parameters (∆t0i, θi) for the current stroke i, by learning the parameters for Pr(∆t0i, θi | x_i, c_i, h_i). Again with an abuse of notation, this can be expressed more intuitively as Pr(∆t0i, θi | x_i, x_{i−1}, ..., x_{i−n}), where n is the maximum sequence length.

Output We express the predicted probability of the dynamic parameters (∆t0i, θi) as a bivariate GMM as described in Section C.1.

Architecture We explored very similar architectures and hyperparameters as the V2V-model, but found that we achieved much better results with a shorter maximum sequence length. We trained a number of models with a variety of sequence lengths {3, ..., 8, 13, 16, 21, 32}.

Training We use the same procedure for training as the V2V-model.

C.5 A2D MODEL

Input The input to this network, x_i ∈ ℝ⁵ at each timestep i, is slightly different to the V2V and V2D models. Similar to the V2V and V2D models, the first two elements are ∆v_i (the normalised relative position displacement for the i'th stroke), and the third element is u_i ∈ {0, 1} (the pen state during the same stroke). However, in this case the final two elements are the dynamic parameters for the previous stroke (∆t0(i−1), θ(i−1)), normalised to zero mean and unit standard deviation. Given input x_i and its current internal state (c_i, h_i), the network learns to predict the dynamic parameters (∆t0i, θi) for the current stroke i, by learning the parameters for Pr(∆t0i, θi | x_i, c_i, h_i). Again with an abuse of notation, this can be expressed more intuitively as Pr(∆t0i, θi | x_i, x_{i−1}, ..., x_{i−n}), where n is the maximum sequence length.

Output The output of this network is identical to that of the V2D model.

Architecture We explored very similar architectures and hyperparameters as the V2D model.

Training We use the same procedure for training as the V2V-model.

C.6 MODEL SELECTION

We evaluated and batch-rendered the outputs of many different architectures and models at different training epochs, and settled on models which were amongst those with the lowest validation error, but also produced visibly more desirable results. Once we had picked the models, the results displayed were not cherry-picked. The preprocessed IAM dataset contains 12087 samples (8460 in the training set) with maximum sequence length 305, minimum 6, median 103 and mean 103.9. For the V2V/V2D/A2D models trained on the IAM database we settled on an architecture of 3 recurrent layers, each with size 512, a maximum sequence length of 128, 20 Gaussians, dropout keep probability of 80% and no peepholes. For the augmented one-shot learning models we used similar architectures, but found that 2 recurrent layers, each with size 256, were able to generalise better and produce more interesting results that captured the prime inputs without overfitting. For V2V we used L2 normalisation on the ∆v_i input. We also tried a number of different methods for normalising and representing ∆v_i on the input to the models. We first tried normalising the components individually to have zero mean and unit standard deviation. We also tried normalising uniformly on the L2 norm, again to have zero mean and unit standard deviation. Finally, we tried normalised polar coordinates, both absolute and relative.
1. What is the focus of the paper, and how does it differ from other works in the field?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to prior works?
3. How effective is the paper in providing motivation and evidence for its contributions?
4. Are there any concerns or limitations regarding the approach, especially in terms of its application or scalability?
5. How could the paper be improved, such as providing stronger theoretical foundations or experimental results?
Review
Review This paper takes a model based on that of Graves and retrofits it with a representation derived from the work of Plamondon. Part of the goal of deep learning has been to avoid the use of hand-crafted features and have the network learn from raw feature representations, so this paper goes somewhat against the grain. The paper relies on some qualitative examples as demonstration of the system, and doesn't seem to provide a strong motivation for there being any progress here. The paper does not provide true text-conditional handwriting synthesis as shown in Graves' original work. Be more consistent about your bibliography (e.g. variants of Plamondon's own name, use of "et al." in the bibliography, etc.).
ICLR
Title Generative model based on minimizing exact empirical Wasserstein distance

Abstract Generative Adversarial Networks (GANs) are a very powerful framework for generative modeling. However, they are often hard to train, and learning of GANs often becomes unstable. Wasserstein GAN (WGAN) is a promising framework for dealing with the instability problem, as it has a good convergence property. One drawback of the WGAN is that it evaluates the Wasserstein distance in the dual domain, which requires some approximation, so that it may fail to optimize the true Wasserstein distance. In this paper, we propose evaluating the exact empirical optimal transport cost efficiently in the primal domain and performing gradient descent with respect to its derivative to train the generator network. Experiments on the MNIST dataset show that our method converges significantly more stably, and achieves the lowest Wasserstein distance among the WGAN variants, at the cost of some sharpness of the generated images. Experiments on the 8-Gaussian toy dataset show that better gradients for the generator are obtained with our method. In addition, the proposed method enables more flexible generative modeling than WGAN.

1 INTRODUCTION

Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are a powerful framework for generative modeling, formulated as a minimax game between two networks: a generator network generates fake-data from some noise source, and a discriminator network discriminates between fake-data and real-data. GANs can generate much more realistic images than other generative models like variational autoencoders (Kingma & Welling, 2014) or autoregressive models (van den Oord et al., 2016), and have been widely used in high-resolution image generation (Karras et al., 2018), image inpainting (Yu et al., 2018), and image-to-image translation (Isola et al., 2017), to mention a few. However, GANs are often hard to train, and various ways to stabilize training have been proposed by many recent works. Nonetheless, consistently stable training of GANs remains an open problem. GANs employ the Jensen-Shannon (JS) divergence to measure the distance between the distributions of real-data and fake-data (Goodfellow et al., 2014). Arjovsky et al. (2017) provided an analysis of various distances and divergence measures between two probability distributions in view of their use as loss functions of GANs, and proposed Wasserstein GAN (WGAN), which has better theoretical properties than the original GANs. WGAN requires that the discriminator (called the critic in Arjovsky et al. (2017)) lie within the space of 1-Lipschitz functions to evaluate the Wasserstein distance via the Kantorovich-Rubinstein dual formulation. Arjovsky et al. (2017) further proposed implementing the critic with a deep neural network and applied weight clipping in order to ensure that the critic satisfies the Lipschitz condition. However, weight clipping limits the critic's function space and can cause gradients in the critic to explode or vanish if the clipping parameters are not carefully chosen (Arjovsky et al., 2017; Gulrajani et al., 2017). WGAN-GP (Gulrajani et al., 2017) and Spectral Normalization (SN) (Miyato et al., 2018) apply regularization and normalization, respectively, on the critic, trying to make the critic 1-Lipschitz, but they fail to optimize the true Wasserstein distance. In recent work, Liu et al. (2018) proposed a new WGAN variant to evaluate the exact empirical Wasserstein distance.
They evaluate the empirical Wasserstein distance between the empirical distributions of real-data and fake-data in the discrete case of the Kantorovich-Rubinstein dual formulation, which can be solved efficiently because the dual problem becomes a finite-dimensional linear-programming problem. The generator network is trained using the critic network learnt to approximate the solution of the dual problem. However, the problem of approximation error by the critic network remains. In this paper, we propose a new generative model without the critic, which learns by directly evaluating the gradient of the exact empirical optimal transport cost in the primal domain. The proposed method corresponds to stochastic gradient descent on the optimal transport cost.

2 WASSERSTEIN GAN

Arjovsky et al. (2017) argued that JS divergences are potentially not continuous with respect to the generator's parameters, leading to GAN training difficulty. They proposed instead using the Wasserstein-1 distance W_1(q, p), which is defined as the minimum cost of transporting mass in order to transform the distribution q into the distribution p. Under mild assumptions, W_1(q, p) is continuous everywhere and differentiable almost everywhere. The WGAN objective function is constructed using the Kantorovich-Rubinstein duality (Villani, 2009, Chapter 5) as

W_1(P_r, P_g) = max_{D∈𝒟} { E_{x∼P_r}[D(x)] − E_{y∼P_g}[D(y)] },   (1)

to obtain

min_G max_{D∈𝒟} { E_{x∼P_r}[D(x)] − E_{y∼P_g}[D(y)] },   (2)

where 𝒟 is the set of all 1-Lipschitz functions, P_r is the real-data distribution, and P_g is the generator distribution implicitly defined by y = G(z), z ∼ p(z). Minimization of this objective function with respect to G with optimal D is equivalent to minimizing W_1(P_r, P_g). Arjovsky et al. (2017) further proposed implementing the critic D in terms of a deep neural network with weight clipping. Weight clipping keeps the weight parameters of the network lying in a compact space, thereby ensuring the desired Lipschitz condition. For a fixed network architecture, however, weight clipping may significantly limit the function space to a quite small fraction of all possible 1-Lipschitz functions representable by networks with the prescribed architecture.
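For concreteness, weight clipping amounts to a one-line projection after each critic update; a minimal PyTorch-style sketch (c = 0.01 is the value suggested by Arjovsky et al. (2017)):

```python
import torch

def clip_critic_weights(critic, c=0.01):
    """WGAN weight clipping: after each critic update, project every
    parameter onto [-c, c] so the critic stays in a compact parameter set."""
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-c, c)
```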
3 RELATED WORKS

Gulrajani et al. (2017) proposed adding a gradient penalty (GP) to the WGAN objective function in place of the 1-Lipschitz condition in the Kantorovich-Rubinstein dual formulation, in order to explicitly encourage the critic to have gradients with magnitude equal to 1. Since enforcing the constraint of unit-norm gradient everywhere is intractable, they proposed enforcing the constraint only along straight line segments, each connecting a real-data point and a fake-data point. The resulting learning scheme, called WGAN-GP, was shown to perform well experimentally. It was pointed out, however (Miyato et al., 2018), that WGAN-GP is susceptible to destabilization due to gradual changes of the support of the generator distribution as learning progresses. Furthermore, the critic can easily violate the Lipschitz condition in practice, so that there is no guarantee that WGAN-GP optimizes the true Wasserstein distance. SN, proposed by Miyato et al. (2018), is based on the observation that the Lipschitz norm of a critic represented by a multilayer neural network is bounded from above by the product, across all layers, of the Lipschitz norms of the activation functions and the spectral norms of the weight matrices, and normalizes each of the weight matrices with its spectral norm to ensure that the resulting critic satisfies the desired Lipschitz condition. It is well known that, for any m × n matrix W = (w_ij), the max norm ‖W‖_max = max_{i,j} |w_ij| and the spectral norm σ(W) satisfy the inequality ‖W‖_max ≤ σ(W) ≤ √(mn) ‖W‖_max. This implies that the bound of the Lipschitz constant provided via weight clipping can be loose compared with that via SN. In other words, SN is expected to provide a much tighter bound for the Lipschitz condition than weight clipping, and accordingly, the function space for the critic under SN is larger than that under weight clipping. The function space under SN is, however, still a subset of the set of all functions satisfying the Lipschitz condition, and consequently, the resulting estimate for the Wasserstein distance is a lower bound of the true Wasserstein distance. Furthermore, one cannot tell within the framework of SN how good the estimate is. Liu et al. (2018) proposed a new formulation to evaluate the Wasserstein distance, which is equivalent to the discrete case of the Kantorovich-Rubinstein dual formulation under a mild assumption and is more tractable due to obviating the need for the Lipschitz condition. This problem is solved in a two-step fashion, and thus the method proposed in Liu et al. (2018) is called WGAN-TS. First, one estimates the Wasserstein distance on the basis of finite real- and fake-data points. The empirical Wasserstein distance is evaluated exactly via solving the linear-programming version of the Kantorovich-Rubinstein dual to obtain the optimizer. Second, one approximates the optimizer obtained in the first step via regression, using a deep neural network to parameterize the critic, and obtains its gradient. WGAN-TS can evaluate the Wasserstein distance more accurately than WGAN, WGAN-GP and WGAN-SN (WGAN with SN), but there are not only approximation errors from using finite samples to evaluate the empirical Wasserstein distance but also those from the deep regression in the second step, resulting in not being able to minimize the Wasserstein distance directly.
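As an aside, the norm inequality quoted above is easy to verify numerically; a small self-contained check (an illustration only, not part of any of the methods discussed):

```python
import numpy as np

# Spot-check ||W||_max <= sigma(W) <= sqrt(m n) * ||W||_max on random matrices.
rng = np.random.default_rng(0)
for _ in range(5):
    m, n = rng.integers(2, 64, size=2)
    W = rng.normal(size=(m, n))
    w_max = np.abs(W).max()
    s = np.linalg.norm(W, ord=2)          # spectral norm = largest singular value
    assert w_max <= s + 1e-12 and s <= np.sqrt(m * n) * w_max + 1e-12
```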
4 PROPOSED METHOD

The proposed method in this paper is based on the fact that the optimal transport cost between two probability distributions can be evaluated efficiently when the distributions are uniform over finite sets of the same cardinality. Our proposal is to evaluate empirical optimal transport costs on the basis of equal-size sample datasets of real- and fake-data points. The optimal transport cost between the real-data distribution P_r and the generator distribution P_g is defined as

C(P_r, P_g) = inf_{γ∈Π(P_r,P_g)} E_{(x,y)∼γ}[c(x, y)],   (3)

where c(x, y) is the cost of transporting one unit mass from x to y, assumed differentiable with respect to its arguments almost everywhere, and where Π(P_r, P_g) denotes the set of all couplings between P_r and P_g, that is, all joint probability distributions that have marginals P_r and P_g. Let D = {x_j | x_j ∼ P_r(x)} be a dataset consisting of independent and identically distributed (iid) real-data points, and F = {y_i | y_i ∼ P_g(y)} be a dataset consisting of iid fake-data points sampled from the generator. Let P_D and P_F be the empirical distributions defined by the datasets D and F, respectively. We further assume in the following that |D| = |F| = N holds. The empirical optimal transport cost Ĉ(D, F) = C(P_D, P_F) between the two datasets D and F is formulated as a linear-programming problem, as

Ĉ(D, F) = C(P_D, P_F) = (1/N) min_M Σ_{i=1}^{N} Σ_{j=1}^{N} M_{i,j} c(x_j, y_i)   (4)
s.t. Σ_{j=1}^{N} M_{i,j} = 1, ∀i ∈ {1, ..., N},   (5)
Σ_{i=1}^{N} M_{i,j} = 1, ∀j ∈ {1, ..., N},   (6)
M_{i,j} ≥ 0, ∀i ∈ {1, ..., N}, ∀j ∈ {1, ..., N}.   (7)

It is known (Villani, 2003) that the linear-programming problem (4)–(7) admits solutions which are permutation matrices. One can then replace the constraints M_{i,j} ≥ 0 in (7) with M_{i,j} ∈ {0, 1} without affecting the optimality. The resulting optimization problem is what is called the linear sum assignment problem, which can be solved more efficiently than the original linear-programming problem. To the best of the authors' knowledge, the most efficient algorithm to date for solving a linear sum assignment problem has time complexity of O(N^{2.5} log(NC)), where C = max_{i,j} c(x_j, y_i) when one scales up the costs {c(x_j, y_i) | x_j ∈ D, y_i ∈ F} to integers (Burkard et al., 2012, Chapter 4). This is a problem of finding the optimal transport plan, where M_{i,j} = 1 corresponds to transporting fake-data point y_i ∈ F to real-data point x_j ∈ D, and where the objective is to minimize the average transport cost N^{−1} Σ_{i=1}^{N} Σ_{j=1}^{N} M_{i,j} c(x_j, y_i). Figure 1 shows a two-dimensional example of this problem and its solution. One requires evaluations not only of the optimal transport cost C(P_r, P_g) but also of its derivative in order to perform learning of the generator with backpropagation. Let θ denote the parameters of the generator, and let ∂_θC denote the derivative of the optimal transport cost C with respect to θ. Conditional on z, the generator output G(z) is a function of θ. Hence, in order to estimate ∂_θC, in our framework one has to evaluate ∂_θĈ. In general, it is difficult to differentiate (4) with respect to the generator output y_i, as the optimal transport plan M* can be highly dependent on y_i. Under the assumption |D| = |F| = N which we adopt here, however, the feasible set for M is the set of all permutation matrices and is a finite set. It then follows that, as a generic property, the optimal transport plan M* is unchanged under small enough perturbations of F (see Figure 1). We take advantage of this fact and regard M* as independent of y_i. Now that differentiation of (4) becomes tractable, we use (4) as the loss function of the generator and update the generator with the direct gradient of the empirical optimal transport cost, as

∂_θĈ = N^{−1} Σ_{i,j=1}^{N} M*_{i,j} ∂_{y_i} c(x_j, y_i) ∂_θ G(z_i).
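A minimal sketch of this procedure (essentially one generator step of Algorithm 1 below): the assignment problem (4)-(7) is solved with SciPy's linear sum assignment solver, and the resulting plan is treated as a constant while the matched costs are differentiated with respect to the generator output; the function name and batching details are our own:

```python
import torch
from scipy.optimize import linear_sum_assignment

def ot_loss(x_real, y_fake, p=1):
    """Empirical optimal transport cost (4)-(7) between equal-size batches.

    The optimal plan is a permutation, found exactly by linear sum
    assignment; it carries no gradient, so gradients flow only through
    the matched costs c(x_j, y_i)."""
    cost = torch.cdist(x_real.flatten(1), y_fake.flatten(1), p=p) ** p
    with torch.no_grad():                 # M* is treated as a constant
        rows, cols = linear_sum_assignment(cost.cpu().numpy())
    return cost[rows, cols].mean()        # (1/N) * sum of matched costs

# One generator step of Algorithm 1 (sketch):
#   z = torch.randn(N, 100); y = G(z)
#   loss = ot_loss(x_batch, y); loss.backward(); optimizer.step()
```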
Although the framework described so far is applicable to any optimal transport cost, several desirable properties can be stated if one specializes to the Wasserstein distance. Assume, for a given p ≥ 1, that the real-data distribution P_r and the generator distribution P_g have finite moments of order p. The Wasserstein-p distance between P_r and P_g is defined in terms of the optimal transport cost with c(x, y) = ‖x − y‖^p as

W_p(P_r, P_g) = C(P_r, P_g)^{1/p}.   (8)

Due to the law of large numbers, the empirical distributions P_D and P_F converge weakly to P_r and P_g, respectively, as N → ∞. It is also known (Villani, 2009, Theorem 6.9) that the Wasserstein-p distance W_p metrizes the space of probability measures with finite moments of order p. Consequently, the empirical Wasserstein distance Ŵ_p(D, F) is a consistent estimator of the true Wasserstein distance W_p(P_r, P_g). Furthermore, with the upper bound on the error of the estimator

|Ŵ_p(D, F) − W_p(P_r, P_g)| ≤ W_p(P_D, P_r) + W_p(P_F, P_g),   (9)

which is derived on the basis of the triangle inequality, as well as with the upper bounds available for the expectations of W_p(P_D, P_r) and W_p(P_F, P_g) under mild conditions (Weed & Bach, 2017), one can see that Ŵ_p(D, F) is an asymptotically unbiased estimator of W_p(P_r, P_g). Note that our method can directly evaluate the empirical Wasserstein distance without recourse to the Kantorovich-Rubinstein dual. Hence, our method does not use a critic and is therefore no longer a GAN. It is also applicable to any optimal transport cost. We summarize the proposed method in Algorithm 1.

Algorithm 1 The proposed method.
1: Input: Real-data samples X_real, batch size N, Adam parameters α, β1, β2
2: Output: G_θ
3: Initialize θ.
4: while θ has not converged do
5:   Sample {x_i}_{i∈{1,...,N}} ∼ X_real from real-data.
6:   Sample {z_j}_{j∈{1,...,N}} ∼ p(z) from random noises.
7:   Let y_j = G_θ(z_j), ∀j ∈ {1, ..., N}.
8:   Solve (4)–(7) to obtain M*.
9:   g_θ ← ∂_θĈ = N^{−1} Σ_{i,j=1}^{N} M*_{i,j} ∂_{y_i} c(x_j, y_i) ∂_θ G_θ(z_i)
10:  θ ← Adam(g_θ, θ, α, β1, β2)
11: end while

5 EXPERIMENTS

5.1 RESULTS ON MNIST WITH CONVOLUTIONAL NEURAL NETWORK

We first show experimental results on the MNIST dataset of handwritten digits. In this experiment, we resized the images to resolution 64 × 64 so that we could use the convolutional neural networks described in Appendix A.1 as the critic and the generator. In all methods, the batch size was set to 64 and the prior noise distribution was the 100-dimensional standard normal distribution. The maximum number of iterations in training of the generator was set to 30,000. The Wasserstein-1 distance with c(x, y) = ‖x − y‖_1 was used. More detailed settings are described in Appendix B.1. Although several performance metrics have been proposed and are commonly used to evaluate variants of WGAN, we have decided to use the empirical Wasserstein distance (EWD) to compare the performance of all methods. This is because all the methods adopt objective functions that are based on the Wasserstein distance, and because EWD is a consistent and asymptotically unbiased estimator of the Wasserstein distance and can efficiently be evaluated, as discussed in Section 4. Table 1 shows EWD evaluated with 256 samples and computation time per generator update for each method. For reference, a performance comparison with the Fréchet Inception Distance (Heusel et al., 2017) and the Inception Score (Salimans et al., 2016), which are commonly used as performance measures to evaluate GANs using feature-space embedding with an inception model, is shown in Appendix C. The proposed method achieved a remarkably small EWD and computational cost compared with the variants of WGAN. Our method required the lowest computational cost in this experimental setting mainly because it does not use the critic. Although we think that the batch size used in the experiment of the proposed method was appropriate, since the proposed method achieved lower EWD, if a larger batch size were required in training, it would take much longer to solve the linear sum assignment problem (4)–(7). We further investigated the behaviors of the compared methods in more detail, on the basis of EWD. WGAN-SN failed to learn. The loss function of the critic showed divergent movement toward −∞, and the behaviors of EWD in different trials were different even though the behaviors of the critic loss were the same (Figure 2 (a) and (b)).
WGAN training never failed in 5 trials, and EWD improved stably without sudden deterioration. Although training with WGAN-GP proceeded favorably in the initial stages, at certain points the gradient penalty term started to increase, causing EWD to deteriorate (Figure 2(c)). This happened in all 5 trials. Since the gradient penalty is a weaker restriction than weight clipping, the critic may be more prone to extreme behaviors. We examined WGAN-TS both with and without weight scaling. Whereas WGAN-TS with weight scaling did not fail in training but achieved a higher EWD than WGAN, WGAN-TS without weight scaling achieved a lower EWD than WGAN at the cost of training stability (Figure 3). The proposed method trained stably and never failed in 5 trials.

As mentioned in Section 3, the critic in WGAN-TS simply regresses the optimizer of the empirical version of the Kantorovich-Rubinstein dual. Thus, there is no guarantee that the critic will satisfy the 1-Lipschitz condition. Liu et al. (2018) pointed out that this is indeed practically problematic with WGAN-TS, and proposed weight scaling to ensure that the critic satisfies the desired condition. We have empirically found, however, that weight scaling exhibits the following trade-off (Figure 3). Without weight scaling, training of WGAN-TS suddenly deteriorated in some trials because the critic ceased to satisfy the Lipschitz condition. With weight scaling, on the other hand, the regression error of the critic with respect to the solution increased and the EWD became worse. The proposed method directly solves the empirical version of the optimal transport problem in the primal domain, so it is free from this trade-off.

Figure 4 shows fake-data images generated by the generators trained with WGAN, WGAN-GP, WGAN-TS, and the proposed method. Although the digits in the images generated by the proposed method are the easiest to identify, these images are less sharp. Among the images generated by the other methods, one can notice several that have almost the same appearance as real-data images, whereas with the proposed method such fake-data images are not seen, and images that appear to be averages of real-data images from the same class often appear. This might imply that merely minimizing the Wasserstein distance between the real-data distribution and the generator distribution in the raw-image space does not necessarily produce realistic images.

5.2 GRADIENT OPTIMALITY OF GENERATOR

We next observed how the generator distribution is updated, in order to compare the proposed method with the variants of WGAN in terms of the gradients provided. Figure 5 shows typical behavior of the generator distribution trained with the proposed method on the 8-Gaussian toy dataset. The 8-Gaussian toy dataset and the experimental settings are described in Appendix B.2. One can observe that, as training progresses, the generator distribution comes closer to the real-data distribution. Figure 6 compares the behaviors of the proposed method, WGAN-GP, and WGAN-TS. We excluded WGAN and WGAN-SN from this comparison: WGAN tended to yield generator distributions that concentrated around a single Gaussian component, and hence training did not progress well; WGAN-SN could not correctly evaluate the Wasserstein distance, as in the experiment on the MNIST dataset.
One can observe in Figure 6 that the directions of sample updates are diverse in the proposed method, especially in the later stages of training, and that the sample update directions tend to be aligned with the optimal gradient directions. These behaviors help the generator learn the real-data distribution efficiently. In WGAN-GP and WGAN-TS, on the other hand, the sample update directions exhibit less diversity and less alignment with the optimal gradient directions, which would make the generator distribution difficult to spread and would slow training. Such behaviors can be ascribed to the poor quality of the critic: they arise when the generator learns on the basis of unreliable gradient information provided by a critic that has not yet learned to evaluate the Wasserstein distance accurately. Increasing the number $n_c$ of critic iterations per generator iteration in order to train the critic better would increase the total computational cost of training. In fact, $n_c = 5$ is recommended in practice and has commonly been used in WGAN (Arjovsky et al., 2017) and its variants, because the improvement in the critic's learning is thought to be small relative to the increase in computational cost. In reality, however, 5 iterations may not be sufficient for the critic to learn, and this might be a principal reason why the critic provides poor gradient information to the generator in the variants of WGAN.

6 CONCLUSION

We have proposed a new generative model that learns by directly minimizing the exact empirical Wasserstein distance between the real-data distribution and the generator distribution. Because the proposed method solves the optimal transport problem in the primal domain instead of the dual domain, it does not suffer from the restrictions on the transport cost or the 1-Lipschitz constraint imposed on WGAN, so one can construct more flexible generative models. The proposed method provides the generator with better gradient information for minimizing the Wasserstein distance (Section 5.2) and achieved a smaller empirical Wasserstein distance with lower computational cost (Section 5.1) than all compared variants of WGAN. In future work, we would like to investigate the behavior of the proposed method when the transport cost is defined in a feature space embedded by an appropriate inception model.

ACKNOWLEDGMENTS

Support of anonymous funding agencies is acknowledged.

A NETWORK ARCHITECTURES

A.1 CONVOLUTIONAL NEURAL NETWORKS

We show in Table 2 the network architecture used in the experiment on the MNIST dataset in Section 5.1. The generator network receives as input a 100-dimensional noise vector generated from the standard normal distribution. The noise vector is passed through a fully-connected layer and reshaped into 4 × 4 feature maps. These are then passed through four transposed convolution layers with 5 × 5 kernels, stride 2, and no biases (since performance was empirically almost the same with or without biases, we took the simpler option of omitting them), where the resolution of the feature maps is doubled and their number halved, except in the last layer. The critic network is essentially the reverse of the generator network: a convolution layer is used in place of each transposed convolution layer. After the last convolution layer, the feature maps are flattened into a vector and passed through a fully-connected layer.
We employed batch normalization (Ioffe & Szegedy, 2015) in all intermediate layers of both the generator and the critic. The rectified linear unit (ReLU) was used as the activation function in all but the last layers. As the activation function of the last layer, the hyperbolic tangent and the identity function were used for the generator and the critic, respectively.

A.2 FULLY-CONNECTED NEURAL NETWORKS

We show in Table 3 the network architecture used in the experiment on the 8-Gaussian toy dataset in Section 5.2. The generator network receives a 100-dimensional noise vector, as in the experiment on the MNIST dataset. The noise vector is passed through four fully-connected layers with biases and mapped to a two-dimensional space. The critic network is likewise the reverse of the generator network.

B DETAILED EXPERIMENTAL SETTINGS

B.1 MNIST DATASET

The MNIST dataset of handwritten digits used in the experiment in Section 5.1 contains 60,000 two-dimensional images of handwritten digits with resolution 28 × 28. We used the default parameter settings chosen by the proposers of the respective methods. We used RMSProp (Hinton et al., 2012) with learning rate 5e−5 for the critic and the generator in WGAN. The weight clipping parameter $c$ was set to 0.01. We used Adam (Kingma & Ba, 2015) with learning rate 1e−4, $\beta_1 = 0.5$, $\beta_2 = 0.999$ in the other methods. $\lambda_{gp}$ in WGAN-GP was set to 10. In the methods with a critic, the number $n_c$ of critic iterations per generator iteration was set to 5.

B.2 8-GAUSSIAN TOY DATASET

The 8-Gaussian toy dataset used in the experiment in Section 5.2 is a two-dimensional synthetic dataset whose real-data distribution is a Gaussian mixture with 8 unit-variance components whose centers are equidistant from the origin. The centers of the 8 Gaussian components are $(\pm 10, 0)$, $(0, \pm 10)$, and $(\pm 10/\sqrt{2}, \pm 10/\sqrt{2})$. 30,000 samples were generated in advance before training and used as the real-data samples. In all methods, the batch size was set to 64 and the maximum number of iterations in training the generator was set to 1,000. WGAN and WGAN-SN could not learn well on this dataset, even though we tried several parameter sets. We used Adam with learning rate 1e−3, $\beta_1 = 0.5$, $\beta_2 = 0.999$ for WGAN-GP, WGAN-TS, and the proposed method. $\lambda_{gp}$ in WGAN-GP was set to 10. In the methods with a critic, the number $n_c$ of critic iterations was set to 5.

B.3 EXECUTION ENVIRONMENT

All the numerical experiments in this paper were executed on a computer with an Intel Core i7-6850K CPU (3.60 GHz, 6 cores), 32 GB RAM, and four GeForce GTX 1080 graphics cards. Linear sum assignment problems were solved using the Hungarian algorithm, which has time complexity $O(N^3)$. The code used in the experiments was written in TensorFlow 1.10.1 on Python 3.6.0, with eager execution enabled.

C EVALUATION WITH FID AND IS ON MNIST DATASET

We show the results of evaluating the experimented methods with FID and IS in Table 4. Both FID and IS are commonly used to evaluate GANs. FID measures the distance between the set of real-data points and the set of fake-data points; the smaller the distance, the better the fake-data points are judged to be.
Assuming that the feature vector obtained from a fake- or real-data point through the inception model follows a multivariate Gaussian distribution, FID is defined by the following equation:

$$\mathrm{FID}^2 = \|\mu_1 - \mu_2\|_2^2 + \mathrm{tr}\left(\Sigma_1 + \Sigma_2 - 2(\Sigma_1 \Sigma_2)^{1/2}\right), \tag{10}$$

where $(\mu_i, \Sigma_i)$ are the mean vector and the covariance matrix for dataset $i$, evaluated in the feature space of the inception model. It is nothing but the square of the Wasserstein-2 distance between two multivariate Gaussian distributions with parameters $(\mu_1, \Sigma_1)$ and $(\mu_2, \Sigma_2)$, respectively.

IS is a metric that evaluates only the set of fake-data points. Let $x_i$ be a data point, let $y$ be the label of $x_i$ in the classification task for which the inception model was trained, and let $p(y|x_i)$ be the probability of label $y$ obtained by inputting $x_i$ to the inception model. Letting $X$ be the set of all data points used for calculating the score, the marginal probability of label $y$ is $p(y) = \frac{1}{|X|}\sum_{x_i \in X} p(y|x_i)$. IS is defined by the following equation:

$$\mathrm{IS} = \exp\left(\frac{1}{|X|}\sum_{x_i \in X} \mathrm{KL}\big(p(y|x_i)\,\|\,p(y)\big)\right), \tag{11}$$

where KL is the Kullback–Leibler divergence. IS is designed to be high when the data points are easy for the inception model to identify and the labels assigned to the data points are diverse.

In WGAN-GP, WGAN-SN, and WGAN-TS*, we observed that training suddenly deteriorated in some trials. We therefore used early stopping on the basis of EWD, and the results of these methods shown in Table 4 are with early stopping. The proposed method scored the worst FID and the best IS among all the methods compared. Certainly, the fake data generated by the proposed method are not sharp and do not closely resemble real-data points, but they are easy to identify and exhibit diversity as digit images. If one wishes to obtain better FID results with the proposed method, the transport cost should be defined in the feature space corresponding to FID.
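For concreteness, here is a minimal NumPy/SciPy sketch of both metrics (our own illustrative code, not the evaluation scripts behind Table 4); in practice feat1, feat2, and pyx would come from a pretrained inception model.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid_squared(feat1, feat2):
    """Right-hand side of Eq. (10): each feature set is modeled as a
    multivariate Gaussian in the inception feature space."""
    mu1, mu2 = feat1.mean(axis=0), feat2.mean(axis=0)
    s1 = np.cov(feat1, rowvar=False)
    s2 = np.cov(feat2, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):  # sqrtm can return tiny imaginary parts
        covmean = covmean.real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2 * covmean))

def inception_score(pyx, eps=1e-12):
    """Eq. (11), given an (n, K) matrix of class probabilities p(y|x)."""
    py = pyx.mean(axis=0, keepdims=True)       # marginal p(y)
    kl = (pyx * (np.log(pyx + eps) - np.log(py + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))
```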
1. What is the main contribution of the paper, and how does it relate to previous research? 2. How effective is the proposed method in improving the quality of generated samples compared to other approaches? 3. Are there any limitations or potential biases in the evaluation metrics used in the paper? 4. What are some potential directions for future research related to this topic? 5. Are there any minor errors or typos in the paper that could be corrected?
Review
Review The paper proposes to use the exact empirical Wasserstein distance to supervise the training of a generative model. To this end, the authors formulate the optimal transport cost as a linear-programming problem. The quantitative results (empirical Wasserstein distance) show the superiority of the proposed method. My concerns come from both theoretical and experimental aspects: The linear-programming problem, Eq. (4)-Eq. (7), has been studied in the existing literature. The contribution amounts to combining this existing method with a standard neural-network-parametrized generator, so I am not quite sure whether this contribution is sufficient for an ICLR submission. In such a case, further experimental or theoretical study of the convergence of Algorithm 1 seems important to me. As to the experiments, firstly, EWD seems somewhat biased as an evaluation metric, since EWD is literally the quantity used to supervise the training of the proposed method. Studies with other quantitative metrics would help justify the improvement. Also, given that the paper brings the WGAN family into the comparison, large-scale image datasets should be included, since WGANs have already demonstrated their success there. Lastly, there is a missing parenthesis in step 8 of Algorithm 1, and an overlong URL in the references.
ICLR
1. What is the main contribution of the paper regarding generative models? 2. What are the limitations of the proposed approach, particularly in scaling and sample requirements? 3. Have there been prior works that have tried and studied this approach extensively? If so, what are their findings? 4. How does the reviewer assess the novelty and practicality of the paper's content? 5. Are there any specific results or design choices that could change the reviewer's perspective on the paper's value?
Review
Review The authors propose to estimate and minimize the empirical Wasserstein distance between batches of samples of real and fake data, then calculate a (sub)gradient of it with respect to the generator's parameters and use it to train generative models. This is an approach that has been tried [1, 2] (even with the addition of entropy regularization) and studied [1-5] extensively. It doesn't scale, and for extremely well-understood reasons [2, 3]. The bias of the empirical Wasserstein estimate requires an exponential number of samples as the number of dimensions increases to reach a certain amount of error [2-6]. Indeed, it requires an exponential number of samples to even differentiate between two batches of the same Gaussian [4]. On top of these arguments, the results do not suggest any new finding or that these theoretical limitations would not be relevant in practice. If the authors have results and design choices making this method work in a high-dimensional problem such as LSUN, I will revise my review.
[1]: https://arxiv.org/abs/1706.00292
[2]: https://arxiv.org/abs/1708.02511
[3]: https://arxiv.org/abs/1712.07822
[4]: https://arxiv.org/abs/1703.00573
[5]: http://www.gatsby.ucl.ac.uk/~gretton/papers/SriFukGreSchetal12.pdf
[6]: https://www.sciencedirect.com/science/article/pii/0377042794900337
ICLR
Title Generative model based on minimizing exact empirical Wasserstein distance Abstract Generative Adversarial Networks (GANs) are a very powerful framework for generative modeling. However, they are often hard to train, and learning of GANs often becomes unstable. Wasserstein GAN (WGAN) is a promising framework to deal with the instability problem as it has a good convergence property. One drawback of the WGAN is that it evaluates the Wasserstein distance in the dual domain, which requires some approximation, so that it may fail to optimize the true Wasserstein distance. In this paper, we propose evaluating the exact empirical optimal transport cost efficiently in the primal domain and performing gradient descent with respect to its derivative to train the generator network. Experiments on the MNIST dataset show that our method is significantly stable to converge, and achieves the lowest Wasserstein distance among the WGAN variants at the cost of some sharpness of generated images. Experiments on the 8-Gaussian toy dataset show that better gradients for the generator are obtained in our method. In addition, the proposed method enables more flexible generative modeling than WGAN. 1 INTRODUCTION Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are a powerful framework of generative modeling which is formulated as a minimax game between two networks: A generator network generates fake-data from some noise source and a discriminator network discriminates between fake-data and real-data. GANs can generate much more realistic images than other generative models like variational autoencoder (Kingma & Welling, 2014) or autoregressive models (van den Oord et al., 2016), and have been widely used in high-resolution image generation (Karras et al., 2018), image inpainting (Yu et al., 2018), image-to-image translation (Isola et al., 2017), to mention a few. However, GANs are often hard to train, and various ways to stabilize training have been proposed by many recent works. Nonetheless, consistently stable training of GANs remains an open problem. GANs employ the Jensen-Shannon (JS) divergence to measure the distance between the distributions of real-data and fake-data (Goodfellow et al., 2014). Arjovsky et al. (2017) provided an analysis of various distances and divergence measures between two probability distributions in view of their use as loss functions of GANs, and proposed Wasserstein GAN (WGAN) which has better theoretical properties than the original GANs. WGAN requires that the discriminator (called the critic in Arjovsky et al. (2017)) must lie within the space of 1-Lipschitz functions to evaluate the Wasserstein distance via the Kantorovich-Rubinstein dual formulation. Arjovsky et al. (2017) further proposed implementing the critic with a deep neural network and applied weight clipping in order to ensure that the critic satisfies the Lipschitz condition. However, weight clipping limits the critic’s function space and can cause gradients in the critic to explode or vanish if the clipping parameters are not carefully chosen (Arjovsky et al., 2017; Gulrajani et al., 2017). WGAN-GP (Gulrajani et al., 2017) and Spectral Normalization (SN) (Miyato et al., 2018) apply regularization and normalization, respectively, on the critic trying to make the critic 1-Lipschitz, but they fail to optimize the true Wasserstein distance. In the latest work, Liu et al. (2018) proposed a new WGAN variant to evaluate the exact empirical Wasserstein distance. 
They evaluate the empirical Wasserstein distance between the empirical distributions of real-data and fake-data in the discrete case of the Kantorovich-Rubinstein dual for- mulation, which can be solved efficiently because the dual problem becomes a finite-dimensional linear-programming problem. The generator network is trained using the critic network learnt to approximate the solution of the dual problem. However, the problem of approximation error by the critic network remains. In this paper, we propose a new generative model without the critic, which learns by directly evaluating gradient of the exact empirical optimal transport cost in the primal domain. The proposed method corresponds to stochastic gradient descent of the optimal transport cost. 2 WASSERSTEIN GAN Arjovsky et al. (2017) argued that JS divergences are potentially not continuous with respect to the generator’s parameters, leading to GANs training difficulty. They proposed instead using the Wasserstein-1 distance W1(q, p), which is defined as the minimum cost of transporting mass in order to transform the distribution q into the distribution p. Under mild assumptions, W1(q, p) is continuous everywhere and differentiable almost everywhere. The WGAN objective function is constructed using the Kantorovich-Rubinstein duality (Villani, 2009, Chapter 5) as W1(Pr,Pg) = max D∈D { Ex∼Pr [D(x)]− Ey∼Pg [D(y)] } , (1) to obtain min G max D∈D { Ex∼Pr [D(x)]− Ey∼Pg [D(y)] } , (2) where D is the set of all 1-Lipschitz functions, where Pr is the real-data distribution, and where Pg is the generator distribution implicitly defined by y = G(z), z ∼ p(z). Minimization of this objective function with respect to G with optimal D is equivalent to minimizing W1(Pr,Pg). Arjovsky et al. (2017) further proposed implementing the critic D in terms of a deep neural network with weight clipping. Weight clipping keeps the weight parameter of the network lying in a compact space, thereby ensuring the desired Lipschitz condition. For a fixed network architecture, however, weight clipping may significantly limit the function space to a quite small fraction of all possible 1-Lipschitz functions representable by networks with the prescribed architecture. 3 RELATED WORKS Gulrajani et al. (2017) proposed introduction of gradient penalty (GP) to the WGAN objective function in place of the 1-Lipschitz condition in the Kantorovich-Rubinstein dual formulation, in order to explicitly encourage the critic to have gradients with magnitude equal to 1. Since enforcing the constraint of unit-norm gradient everywhere is intractable, they proposed enforcing the constraint only along straight line segments, each connecting a real-data point and a fake-data point. The resulting learning scheme, which is called the WGAN-GP, was shown to perform well experimentally. It was pointed out, however (Miyato et al., 2018), that WGAN-GP is susceptible to destabilization due to gradual changes of the support of the generator distribution as learning progresses. Furthermore, the critic can easily violate the Lipschitz condition in practice, so that there is no guarantee that WGAN-GP optimizes the true Wasserstein distance. SN, proposed by Miyato et al. 
(2018), is based on the observation that the Lipschitz norm of a critic represented by a multilayer neural network is bounded from above by the product, across all layers, of the Lipschitz norms of the activation functions and the spectral norms of the weight matrices, and normalizes each of the weight matrices with its spectral norm to ensure the resulting critic to satisfy the desired Lipschitz condition. It is well known that, for any m× n matrix W = (wij), the max norm ‖W‖max = max{|wij |} and the spectral norm σ(W ) satisfy the inequality ‖W‖max ≤ σ(W ) ≤ √mn‖W‖max. This implies that the bound of the Lipschitz constant provided via weight clipping can be loose compared with that via SN. In other words, SN is expected to provide a much tighter bound for the Lipschitz condition than weight clipping, and accordingly, the function space for the critic under SN is larger than that under weight clipping. The function space under SN is, however, still a subset of the set of all functions satisfying the Lipschitz condition, and consequently, the resulting estimate for the Wasserstein distance is a lower bound of the true Wasserstein distance. Furthermore, one cannot tell within the framework of SN how good the estimate is. Liu et al. (2018) proposed a new formulation to evaluate the Wasserstein distance, which is equivalent to the discrete case of the Kantorovich-Rubinstein dual formulation under a mild assumption and is more tractable due to obviating the need for the Lipschitz condition. This problem is solved in a two-step fashion, and thus the method proposed in Liu et al. (2018) is called the WGAN-TS. First, one estimates the Wasserstein distance on the basis of finite real- and fake-data points. The empirical Wasserstein distance is evaluated exactly via solving the linear-programming version of the Kantorovich-Rubinstein dual to obtain the optimizer. Second, one approximates the optimizer obtained in the first step via regression using a deep neural network to parameterize the critic and obtains its gradient. WGAN-TS can evaluate the Wasserstein distance more accurately than WGAN, WGAN-GP and WGAN-SN (WGAN with SN), but there are not only approximation errors from using finite samples to evaluate the empirical Wasserstein distance but also those from deep regression in the second step, resulting in not being able to minimize the Wasserstein distance directly. 4 PROPOSED METHOD The proposed method in this paper is based on the fact that the optimal transport cost between two probability distributions can be evaluated efficiently when the distributions are uniform over finite sets of the same cardinality. Our proposal is to evaluate empirical optimal transport costs on the basis of equal-size sample datasets of real- and fake-data points. The optimal transport cost between the real-data distribution Pr and the generator distribution Pg is defined as C(Pr,Pg) = inf γ∈Π(Pr,Pg) E(x,y)∼γ [c(x, y)], (3) where c(x, y) is the cost of transporting one unit mass from x to y, assumed differentiable with respect to its arguments almost everywhere, and where Π(Pr,Pg) denotes the set of all couplings between Pr and Pg , that is, all joint probability distributions that have marginals Pr and Pg . Let D = {xj |xj ∼ Pr(x)} be a dataset consisting of independent and identically-distributed (iid) real-data points, and F = {yi|yi ∼ Pg(y)} be a dataset consisting of iid fake-data points sampled from the generator. Let PD and PF be the empirical distributions defined by the datasets D and F , respectively. 
We further assume in the following that |D| = |F | = N holds. The empirical optimal transport cost Ĉ(D,F ) = C(PD,PF ) between the two datasets D and F is formulated as a linear-programming problem, as Ĉ(D,F ) = C(PD,PF ) = 1 N min M N ∑ i=1 N ∑ j=1 Mi,jc(xj , yi) (4) s.t. N ∑ j=1 Mi,j = 1, ∀i ∈ {1, . . . , N}, (5) N ∑ i=1 Mi,j = 1, ∀j ∈ {1, . . . , N}, (6) Mi,j ≥ 0, ∀i ∈ {1, . . . , N}, ∀j ∈ {1, . . . , N}. (7) It is known (Villani, 2003) that the linear-programming problem (4)–(7) admits solutions which are permutation matrices. One can then replace the constraints Mi,j ≥ 0 in (7) with Mi,j ∈ {0, 1} without affecting the optimality. The resulting optimization problem is what is called the linear sum assignment problem, which can be solved more efficiently than the original linear-programming problem. As far as the authors’ knowledge, the most efficient algorithm to date for solving a linear sum assignment problem has time complexity of O(N2.5 log(NC)), where C = maxi,j c(xj , yi) when one scales up the costs {c(xj , yi)|xj ∈ D, yi ∈ F} to integers (Burkard et al., 2012, Chapter 4). This is a problem to find the optimal transport plan, where Mi,j = 1 is corresponding to transporting fake-data point yi ∈ F to real-data point xj ∈ D, and where the objective is to minimize the average transport cost N−1 ∑N i=1 ∑N j=1 Mi,jc(xj , yi). Figure 1 shows a two-dimensional example of this problem and its solution. One requires evaluations not only of the optimal transport cost C(Pr,Pg) but also of its derivative in order to perform learning of the generator with backpropagation. Let θ denote the parameter of the generator, and let ∂θC denote the derivative of the optimal transport cost C with respect to θ. Conditional on z, the generator output G(z) is a function of θ. Hence, in order to estimate ∂θC, in our framework one has to evaluate ∂θĈ. In general, it is difficult to differentiate (4) with respect to generator output yi, as the optimal transport plan M ∗ can be highly dependent on yi. Under the assumption |D| = |F | = N which we adopt here, however, the feasible set for M is the set of all permutation matrices and is a finite set. It then follows that, as a generic property, the optimal transport plan M∗ is unchanged under small enough perturbations of F (see Figure 1). We take advantage of this fact and regard M∗ as independent of yi. Now that differentiation of (4) becomes tractable, we use (4) as the loss function of the generator and update the generator with the direct gradient of the empirical optimal transport cost, as ∂θĈ = N −1 ∑N i,j=1 M ∗ i,j∂yic(xj , yi)∂θG(zi). Although the framework described so far is applicable to any optimal transport cost, several desirable properties can be stated if one specializes in the Wasserstein distance. Assume, for a given p ≥ 1, that the real-data distribution Pr and the generator distribution Pg have finite moments of order p. The Wasserstein-p distance between Pr and Pg is defined in terms of the optimal transport cost with c(x, y) = ‖x− y‖p as Wp(Pr,Pg) = C(Pr,Pg) 1/p. (8) Due to the law of large numbers, the empirical distributions PD and PF converge weakly to Pr and Pg , respectively, as N → ∞. It is also known (Villani, 2009, Theorem 6.9) that the Wasserstein-p distance Wp metrizes the space of probability measures with finite moments of order p. Consequently, the empirical Wasserstein distance Ŵp(D,F ) is a consistent estimator of the true Wasserstein distance Wp(Pr,Pg). 
Furthermore, with the upper bound on the error of the estimator
$$\big|\hat{W}_p(D, F) - W_p(\mathbb{P}_r, \mathbb{P}_g)\big| \le W_p(\mathbb{P}_D, \mathbb{P}_r) + W_p(\mathbb{P}_F, \mathbb{P}_g), \qquad (9)$$
which is derived on the basis of the triangle inequality, together with the upper bounds available for the expectations of Wp(PD, Pr) and Wp(PF, Pg) under mild conditions (Weed & Bach, 2017), one can see that Ŵp(D, F) is an asymptotically unbiased estimator of Wp(Pr, Pg). Note that our method directly evaluates the empirical Wasserstein distance without recourse to the Kantorovich-Rubinstein dual. Hence, our method does not use a critic and is therefore no longer a GAN. It is also applicable to any optimal transport cost. We summarize the proposed method in Algorithm 1.

Algorithm 1 The proposed method.
1: Input: Real-data samples Xreal, batch size N, Adam parameters α, β1, β2
2: Output: Gθ
3: Initialize θ.
4: while θ has not converged do
5:   Sample {xi}i∈{1,...,N} ∼ Xreal from the real data.
6:   Sample {zj}j∈{1,...,N} ∼ p(z) from random noise.
7:   Let yj = Gθ(zj), ∀j ∈ {1, . . . , N}.
8:   Solve (4)–(7) to obtain M*.
9:   gθ ← ∂θĈ = N^{-1} Σ_{i,j=1}^{N} M*_{i,j} ∂_{y_i} c(x_j, y_i) ∂_θ Gθ(z_i)
10:  θ ← Adam(gθ, θ, α, β1, β2)
11: end while

5 EXPERIMENTS

5.1 RESULTS ON MNIST WITH CONVOLUTIONAL NEURAL NETWORK

We first show experimental results on the MNIST dataset of handwritten digits. In this experiment, we resized the images to resolution 64 × 64 so that we could use the convolutional neural networks described in Appendix A.1 as the critic and the generator. In all methods, the batch size was set to 64 and the prior noise distribution was the 100-dimensional standard normal distribution. The maximum number of iterations in training the generator was set to 30,000. The Wasserstein-1 distance with c(x, y) = ‖x − y‖1 was used. More detailed settings are described in Appendix B.1.

Although several performance metrics have been proposed and are commonly used to evaluate variants of WGAN, we decided to use the empirical Wasserstein distance (EWD) to compare the performance of all methods. This is because all the methods adopt objective functions based on the Wasserstein distance, and because EWD is a consistent and asymptotically unbiased estimator of the Wasserstein distance that can be evaluated efficiently, as discussed in Section 4. Table 1 shows the EWD evaluated with 256 samples and the computation time per generator update for each method. For reference, a performance comparison with the Fréchet Inception Distance (Heusel et al., 2017) and the Inception Score (Salimans et al., 2016), which are commonly used as performance measures for GANs via feature-space embedding with an inception model, is shown in Appendix C. The proposed method achieved a remarkably small EWD and computational cost compared with the variants of WGAN. Our method required the lowest computational cost in this experimental setting mainly because it does not use a critic. Although we think the batch size used in the experiment was appropriate for the proposed method, since it achieved a lower EWD, if a larger batch size were required in training, solving the linear sum assignment problem (4)–(7) would take much longer.

We further investigated the behaviors of the compared methods in more detail on the basis of EWD. WGAN-SN failed to learn: the loss function of the critic showed divergent movement toward −∞, and the behaviors of EWD in different trials differed even though the behaviors of the critic loss were the same (Figure 2 (a) and (b)).
WGAN training never failed in 5 trials, and EWD improved stably without sudden deterioration. Although training with WGAN-GP proceeded favorably in the initial stages, at certain points the gradient penalty term started to increase, causing EWD to deteriorate (Figure 2 (c)). This happened in all 5 trials. Since the gradient penalty is a weaker restriction than weight clipping, the critic may be more likely to exhibit extreme behaviors. We examined WGAN-TS both with and without weight scaling. Whereas WGAN-TS with weight scaling did not fail in training but achieved higher EWD than WGAN, WGAN-TS without weight scaling achieved lower EWD than WGAN at the cost of training stability (Figure 3). The proposed method trained stably and never failed in 5 trials. As mentioned in Section 3, the critic in WGAN-TS simply regresses the optimizer of the empirical version of the Kantorovich-Rubinstein dual, so there is no guarantee that the critic will satisfy the 1-Lipschitz condition. Liu et al. (2018) pointed out that this is indeed practically problematic with WGAN-TS, and proposed weight scaling to ensure that the critic satisfies the desired condition. We have empirically found, however, that weight scaling exhibits the following trade-off (Figure 3). Without weight scaling, training of WGAN-TS suddenly deteriorated in some trials because the critic came to violate the Lipschitz condition. With weight scaling, on the other hand, the regression error of the critic with respect to the solution increased and the EWD became worse. The proposed method directly solves the empirical version of the optimal transport problem in the primal domain, so it is free from such a trade-off. Figure 4 shows fake-data images generated by the generators trained with WGAN, WGAN-GP, WGAN-TS, and the proposed method. Although one can identify the digits most easily in the images generated by the proposed method, these images are less sharp. Among the images generated by the other methods, one can notice several that have almost the same appearance as real-data images, whereas with the proposed method, such fake-data images are not seen and images that look like averages of real-data images belonging to the same class often appear. This might imply that merely minimizing the Wasserstein distance between the real-data distribution and the generator distribution in the raw-image space may not necessarily produce realistic images.

5.2 GRADIENT OPTIMALITY OF GENERATOR

We next observed how the generator distribution is updated, in order to compare the proposed method with variants of WGAN in terms of the gradients provided. Figure 5 shows the typical behavior of the generator distribution trained with the proposed method on the 8-Gaussian toy dataset. The 8-Gaussian toy dataset and the experimental settings are described in Appendix B.2. One can observe that, as training progresses, the generator distribution comes closer to the real-data distribution. Figure 6 compares the behaviors of the proposed method, WGAN-GP, and WGAN-TS. We excluded WGAN and WGAN-SN from this comparison: WGAN tended to yield generator distributions that concentrated around a single Gaussian component, and hence training did not progress well, while WGAN-SN could not correctly evaluate the Wasserstein distance, as in the experiment on the MNIST dataset.
One can observe in Figure 6 that the directions of sample updates are diverse in the proposed method, especially in later stages of training, and that the sample update directions tend to be aligned with the optimal gradient directions. These behaviors should help the generator learn the real-data distribution efficiently. In WGAN-GP and WGAN-TS, on the other hand, the sample update directions exhibit less diversity and less alignment with the optimal gradient directions, which would make the generator distribution difficult to spread and would slow training. One can ascribe such behaviors to poor quality of the critic: they would arise when the generator learns on the basis of unreliable gradient information provided by a critic that has not learned sufficiently to accurately evaluate the Wasserstein distance. If one increased the number nc of critic iterations per generator iteration in order to train the critic better, the total computational cost of training would increase. In fact, nc = 5 is recommended in practice and has commonly been used in WGAN (Arjovsky et al., 2017) and its variants, because the improvement in learning of the critic is thought to be small relative to the increase in computational cost. In reality, however, 5 iterations may not be sufficient for the critic to learn, and this might be a principal reason why the critic provides poor gradient information to the generator in the variants of WGAN.

6 CONCLUSION

We have proposed a new generative model that learns by directly minimizing the exact empirical Wasserstein distance between the real-data distribution and the generator distribution. Because it solves the optimal transport problem in the primal domain instead of the dual domain, the proposed method does not suffer from the constraints on the transport cost or the 1-Lipschitz condition imposed on WGAN, so one can construct more flexible generative models. The proposed method provides the generator with better gradient information for minimizing the Wasserstein distance (Section 5.2) and achieved a smaller empirical Wasserstein distance with lower computational cost (Section 5.1) than all compared variants of WGAN. In future work, we would like to investigate the behavior of the proposed method when the transport cost is defined in a feature space embedded by an appropriate inception model.

ACKNOWLEDGMENTS

Support of anonymous funding agencies is acknowledged.

A NETWORK ARCHITECTURES

A.1 CONVOLUTIONAL NEURAL NETWORKS

We show in Table 2 the network architecture used in the experiment on the MNIST dataset in Section 5.1. The generator network receives a 100-dimensional noise vector generated from the standard normal distribution as input. The noise vector is passed through a fully-connected layer and reshaped to 4 × 4 feature maps. These are then passed through four transposed convolution layers with 5 × 5 kernels, stride 2, and no biases (since performance was empirically almost the same with or without biases, we took the simpler option of not using them), where the resolution of the feature maps is doubled and their number is halved except in the last layer. The critic network is basically the reverse of the generator network; a convolution layer is used instead of a transposed convolution layer. After the last convolution layer, the feature maps are flattened into a vector and passed through a fully-connected layer. We employed batch normalization (Ioffe & Szegedy, 2015) in all intermediate layers of both the generator and the critic. Rectified linear unit (ReLU) was used as the activation function in all but the last layers. As the activation function in the last layer, the hyperbolic tangent function and the identity function were used for the generator and for the critic, respectively.
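To make the A.1 description concrete, below is a minimal PyTorch sketch of the generator (the paper's experiments used TensorFlow). The channel widths (512 → 256 → 128 → 64) are assumptions for illustration, since the exact values from Table 2 are not reproduced here.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Sketch of the A.1 generator: FC -> 4x4 feature maps -> four 5x5
    stride-2 transposed convolutions without biases, ending in tanh."""

    def __init__(self, z_dim=100, base=512, out_ch=1):
        super().__init__()
        self.base = base
        self.fc = nn.Linear(z_dim, 4 * 4 * base)
        self.bn0 = nn.BatchNorm2d(base)

        def up(cin, cout):
            # transposed conv (5x5 kernel, stride 2, no bias) + BN + ReLU;
            # padding/output_padding chosen so the resolution exactly doubles
            return nn.Sequential(
                nn.ConvTranspose2d(cin, cout, 5, stride=2, padding=2,
                                   output_padding=1, bias=False),
                nn.BatchNorm2d(cout),
                nn.ReLU(),
            )

        self.blocks = nn.Sequential(
            up(base, base // 2),       # 4x4  -> 8x8
            up(base // 2, base // 4),  # 8x8  -> 16x16
            up(base // 4, base // 8),  # 16x16 -> 32x32
        )
        # last layer: maps to the image channels, tanh activation, no BN
        self.last = nn.ConvTranspose2d(base // 8, out_ch, 5, stride=2,
                                       padding=2, output_padding=1, bias=False)

    def forward(self, z):
        h = torch.relu(self.bn0(self.fc(z).view(-1, self.base, 4, 4)))
        return torch.tanh(self.last(self.blocks(h)))

# imgs = Generator()(torch.randn(8, 100))   # -> (8, 1, 64, 64)
```

The critic would mirror this structure with ordinary convolutions and an identity output, as described above.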
A.2 FULLY-CONNECTED NEURAL NETWORKS

We show in Table 3 the network architecture used in the experiment on the 8-Gaussian toy dataset in Section 5.2. The generator network receives a 100-dimensional noise vector, as in the experiment on the MNIST dataset. The noise vector is passed through four fully-connected layers with biases and mapped to a two-dimensional space. The critic network is likewise the reverse of the generator network.

B DETAILED EXPERIMENTAL SETTING

B.1 MNIST DATASET

The MNIST dataset of handwritten digits used in the experiment in Section 5.1 contains 60,000 two-dimensional images of handwritten digits with resolution 28 × 28. We used the default parameter settings chosen by the proposers of the respective methods. We used RMSProp (Hinton et al., 2012) with learning rate 5e−5 for the critic and the generator in WGAN. The weight clipping parameter c was set to 0.01. We used Adam (Kingma & Ba, 2015) with learning rate 1e−4, β1 = 0.5, β2 = 0.999 in the other methods. λgp in WGAN-GP was set to 10. In the methods with a critic, the number nc of critic iterations per generator iteration was set to 5.

B.2 8-GAUSSIAN TOY DATASET

The 8-Gaussian toy dataset used in the experiment in Section 5.2 is a two-dimensional synthetic dataset, containing real data sampled from a Gaussian mixture distribution with 8 centers equally distant from the origin and unit variance as the real-data distribution. The centers of the 8 Gaussian components are (±10, 0), (0, ±10), and (±10/√2, ±10/√2). 30,000 samples were generated in advance of training and used as the real-data samples. In all methods, the batch size was set to 64 and the maximum number of iterations in training the generator was set to 1,000. WGAN and WGAN-SN could not learn well on this dataset, even though we tried several parameter sets. We used Adam with learning rate 1e−3, β1 = 0.5, β2 = 0.999 for WGAN-GP, WGAN-TS and the proposed method. λgp in WGAN-GP was set to 10. In the methods with a critic, the number nc of critic iterations was set to 5.

B.3 EXECUTION ENVIRONMENT

All the numerical experiments in this paper were executed on a computer with an Intel Core i7-6850K CPU (3.60 GHz, 6 cores) and 32 GB RAM, with four GeForce GTX 1080 graphics cards installed. Linear sum assignment problems were solved using the Hungarian algorithm, which has time complexity O(N^3). The code used in the experiments was written in tensorflow 1.10.1 on python 3.6.0, with eager execution enabled.

C EVALUATION WITH FID AND IS ON MNIST DATASET

We show the results of evaluating the compared methods with FID and IS in Table 4. Both FID and IS are commonly used to evaluate GANs. FID calculates a distance between the set of real-data points and that of fake-data points: the smaller the distance, the better the fake-data points are judged to be.
Assuming that the vector obtained from a fake- or real-data point through the inception model follows a multivariate Gaussian distribution, FID is defined by the following equation:
$$\mathrm{FID}^2 = \|\mu_1 - \mu_2\|_2^2 + \mathrm{tr}\!\left(\Sigma_1 + \Sigma_2 - 2(\Sigma_1 \Sigma_2)^{1/2}\right), \qquad (10)$$
where (µi, Σi) are the mean vector and the covariance matrix for dataset i, evaluated in the feature space embedded with the inception model. It is nothing but the square of the Wasserstein-2 distance between two multivariate Gaussian distributions with parameters (µ1, Σ1) and (µ2, Σ2), respectively. IS is a metric that evaluates only the set of fake-data points. Let xi be a data point, y be the label of xi in the data identification task for which the inception model was trained, and p(y|xi) be the probability of label y obtained by inputting xi to the inception model. Letting X be the set of all data points used for calculating the score, the marginal probability of label y is p(y) = (1/|X|) Σ_{xi∈X} p(y|xi). IS is defined by the following equation:
$$\mathrm{IS} = \exp\!\left(\frac{1}{|X|}\sum_{x_i \in X} \mathrm{KL}\big(p(y \mid x_i)\,\|\,p(y)\big)\right), \qquad (11)$$
where KL is the Kullback–Leibler divergence. IS is designed to be high when the data points are easy for the inception model to identify and when the variety of labels identified from the data points is abundant. In WGAN-GP, WGAN-SN and WGAN-TS*, we observed that training suddenly deteriorated in some trials. We thus used early stopping on the basis of EWD, and the results of these methods shown in Table 4 are with early stopping. The proposed method scored the worst in FID and the best in IS among all the methods compared. Certainly, the fake data generated by the proposed method are not sharp and do not resemble real-data points, but they appear to be easy for the inception model to identify and to have diversity as digit images. If one wishes to obtain better FID results with the proposed method, the transport cost should be defined in the feature space corresponding to FID.
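For reference, here is a NumPy/SciPy sketch of the two metrics as defined in Eqs. (10) and (11); computing the inception embeddings and label probabilities themselves is assumed to happen elsewhere.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid_squared(feats1, feats2):
    """Squared FID of Eq. (10); feats are (N, d) inception embeddings."""
    mu1, mu2 = feats1.mean(axis=0), feats2.mean(axis=0)
    s1 = np.cov(feats1, rowvar=False)
    s2 = np.cov(feats2, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):   # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    return np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2.0 * covmean)

def inception_score(p_y_given_x):
    """IS of Eq. (11); p_y_given_x is an (N, K) array of label probabilities."""
    p_y = p_y_given_x.mean(axis=0, keepdims=True)        # marginal p(y)
    kl = np.sum(p_y_given_x * (np.log(p_y_given_x + 1e-12)
                               - np.log(p_y + 1e-12)), axis=1)
    return float(np.exp(kl.mean()))
```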
1. What is the main contribution of the paper, and how does it differ from previous works on Wasserstein GANs? 2. How does the proposed method optimize the exact empirical Wasserstein distance, and what are the implications of this optimization? 3. What are the limitations of using optimal transport computed on batches rather than the whole dataset? 4. How does the paper's experimental validation fall short, and what additional experiments would be necessary to provide sufficient validation? 5. Are there any typos or errors in the paper that need to be addressed?
Review
Review The paper ‘Generative model based on minimizing exact empirical Wasserstein distance' proposes a variant of Wasserstein GAN based on a primal version of the Wasserstein loss rather than relying on the classical Kantorovich-Rubinstein duality, as first proposed by Arjovsky in the GAN context. Comparisons with other variants of Wasserstein GAN are provided on MNIST. I see little novelty in the paper. The derivation of the primal version of the problem is already given in Cuturi, M., & Doucet, A. (2014, January). Fast computation of Wasserstein barycenters. In ICML (pp. 685-693). Using optimal transport computed on batches rather than on the whole dataset is already used in (among others) Genevay, A., Peyré, G., & Cuturi, M. (2017). Learning generative models with sinkhorn divergences. AISTATS Damodaran, B. B., Kellenberger, B., Flamary, R., Tuia, D., & Courty, N. (2018). DeepJDOT: Deep Joint distribution optimal transport for unsupervised domain adaptation. ECCV Also, the claim that the exact empirical Wasserstein distance is optimized is not true. The gradients, evaluated on batches, are biased. Unfortunately, the Wasserstein distance does not enjoy U-statistics similar to those of MMD. This is very well described in the paper (Section 3): https://openreview.net/pdf?id=S1m6h21Cb Computing the gradients of Wasserstein on batches might be seen as a kind of regularization, but this remains to be proved and discussed. Finally, the experimental validation appears insufficient to me (as only MNIST or toy datasets are considered). Typos: Eq (1) and (2): when taken over the set of all Lipschitz-1 functions, the max should be a sup
ICLR
Title On Intriguing Layer-Wise Properties of Robust Overfitting in Adversarial Training Abstract Adversarial training has proven to be one of the most effective methods to defend against adversarial attacks. Nevertheless, robust overfitting is a common obstacle in adversarial training of deep networks. There is a common belief that the features learned by different network layers have different properties; however, existing works generally investigate robust overfitting by considering a DNN as a single unit, and hence the impact of different network layers on robust overfitting remains unclear. In this work, we divide a DNN into a series of layers and investigate the effect of different network layers on robust overfitting. We find that different layers exhibit distinct properties with respect to robust overfitting, and in particular, that robust overfitting is mostly related to the optimization of the latter parts of the network. Based on the observed effect, we propose a robust adversarial training (RAT) prototype: in a mini-batch, we optimize the front parts of the network as usual and adopt additional measures to regularize the optimization of the latter parts. Based on this prototype, we design two realizations of RAT, and extensive experiments demonstrate that RAT can eliminate robust overfitting and boost adversarial robustness over standard adversarial training.

1 INTRODUCTION

Deep neural networks (DNNs) have been widely applied in multiple fields, such as computer vision (He et al., 2016) and natural language processing (Devlin et al., 2018). Despite this success, recent studies show that DNNs are vulnerable to adversarial examples: well-constructed perturbations of input images that are imperceptible to human eyes can cause DNNs to produce completely different predictions (Szegedy et al., 2013). The security concern due to this weakness of DNNs has led to various works on improving DNN robustness against adversarial examples. Among existing defense techniques, Adversarial Training (AT) (Goodfellow et al., 2014; Madry et al., 2017), which optimizes DNNs with adversarially perturbed data instead of natural data, is the most effective approach (Athalye et al., 2018). However, it has been shown that networks trained with AT do not generalize well (Rice et al., 2020): after a certain point in AT, immediately after the first learning rate decay, the robust test accuracy continues to decrease with further training. Typical regularization practices to mitigate overfitting, such as l1 & l2 regularization, weight decay, and data augmentation, are reported to be ineffective compared with simple early stopping (Rice et al., 2020). Many studies have attempted to reduce the robust generalization gap in AT, and most have investigated robust overfitting by considering DNNs as a whole. However, DNNs trained on natural images exhibit a common phenomenon: features obtained in the first layers appear to be general and widely applicable, while features computed by the last layers depend on the particular dataset and task (Yosinski et al., 2014). Such behavior of DNNs sparks a question: Do different layers contribute differently to robust overfitting? Intuitively, robust overfitting acts as an unexpected optimization state in adversarial training, and its occurrence may be closely related to the entire network. Nevertheless, the unique effect of different network layers on robust overfitting is still unclear.
Without a detailed understanding of the layer-wise mechanism of robust overfitting, it is difficult to completely demystify the exact underlying cause of the phenomenon. In this paper, we provide the first layer-wise diagnosis of robust overfitting. Specifically, instead of considering the network as a whole, we treat the network as a composition of layers and systematically investigate the impact of the robust overfitting phenomenon on different layers. To do this, we first fix the parameters of the selected layers, leaving them unoptimized during AT, and then optimize the other layers' parameters normally. We discovered that robust overfitting is always mitigated when the latter layers are left unoptimized, while applying the same treatment to other layers is futile against robust overfitting, suggesting a strong connection between the optimization of the latter layers and the overfitting phenomenon. Based on the observed effect, we propose a robust adversarial training (RAT) prototype to relieve the issue of robust overfitting. Specifically, RAT works in each mini-batch: it optimizes the front layers as usual, and for the latter layers, it implements additional measures on those parameters to regularize their optimization. It is a general adversarial training prototype, where the front and latter network layers can be separated by some simple test experiments, and the implementation of additional measures to regularize layer optimization can be versatile. For instance, we design two representative realizations of RAT, RATLR and RATWP, which adopt different strategies to regularize the weight updates, namely enlarging the learning rate and perturbing the weights, respectively. Extensive experiments show that the proposed RAT prototype effectively eliminates robust overfitting. The contributions of this work are summarized as follows: • We provide the first diagnosis of robust overfitting on different network layers, and find that there is a strong connection between the optimization of the latter layers and the robust overfitting phenomenon. • Based on the observed properties of robust overfitting, we propose the RAT prototype, which adopts additional measures to regularize the optimization of the latter layers and is tailored to prevent robust overfitting. • We design two different realizations of RAT, with extensive experiments on a number of standard benchmarks verifying their effectiveness.

2 RELATED WORK

2.1 ADVERSARIAL TRAINING

Since the discovery of adversarial examples, many defensive methods have attempted to improve DNN robustness against such adversaries, such as adversarial training (Madry et al., 2017), defense distillation (Papernot et al., 2016), input denoising (Liao et al., 2018), and gradient regularization (Tramèr et al., 2018). So far, adversarial training (Madry et al., 2017) has proven to be the most effective method. Adversarial training comprises two optimization problems: an inner maximization and an outer minimization. The first constructs adversarial examples by maximizing the loss, and the second updates the weights by minimizing the loss on the adversarial data. Here, fw is the DNN classifier with weights w, and ℓ(·) is the loss function. d(·, ·) specifies the distance between the original input data xi and the adversarial data x′i, which is usually measured within an lp-norm ball such as the l2 or l∞-norm ball, and ϵ is the maximum perturbation allowed.
$$\ell_{\mathrm{AT}}(w) = \min_{w} \sum_i \max_{d(x_i, x_i') \le \epsilon} \ell(f_w(x_i'), y_i), \qquad (1)$$

2.2 ROBUST GENERALIZATION

An interesting characteristic of deep neural networks (DNNs) is their ability to generalize well in practice (Belkin et al., 2019). In the standard training setting, it is observed that test loss continues to decrease over long periods of training (Nakkiran et al., 2020), so the common practice is to train DNNs for as long as possible. However, this is no longer the case in adversarial training, which exhibits increasingly severe overfitting the longer the training process runs (Rice et al., 2020). This phenomenon has been referred to as "robust overfitting" and has shown strong resistance to standard regularization techniques such as l1 and l2 regularization and data augmentation methods (Rice et al., 2020). Schmidt et al. (2018) theorizes that robust generalization has a large sample complexity, requiring substantially larger datasets. Many subsequent works have empirically validated this claim, such as AT with semi-supervised learning (Carmon et al., 2019; Zhai et al., 2019), robust local features (Song et al., 2020), and data interpolation (Lee et al., 2020; Chen et al., 2021). Chen et al. (2020) proposes to combine smoothing the logits via self-training and smoothing the weights via stochastic weight averaging to mitigate robust overfitting. Wu et al. (2020) emphasizes the connection between the weight loss landscape and the robust generalization gap, and suggests injecting adversarial perturbations into both inputs and weights during AT to regularize the flatness of the weight loss landscape. The intriguing property of robust overfitting has motivated a great amount of study and investigation, but current works typically approach the phenomenon by considering a DNN as a whole. In contrast, our work treats a DNN as a series of layers and reveals a strong connection between robust overfitting and the optimization of the latter layers, providing a novel perspective for better understanding the phenomenon.

3 INTRIGUING PROPERTIES OF ROBUST OVERFITTING

In this section, we first investigate the layer-wise properties of robust overfitting by fixing model parameters in AT (Section 3.1). Based on our observations, we then propose a robust adversarial training (RAT) prototype to eliminate robust overfitting (Section 3.2). Finally, we design two different realizations of RAT to verify the effectiveness of the proposed method (Section 3.3).

3.1 LAYER-WISE ANALYSIS OF ROBUST OVERFITTING

Current works usually study the robust overfitting phenomenon by considering the network as a single unit. However, features computed by different layers exhibit different properties; for example, first-layer features are general while last-layer features are specific (Yosinski et al., 2014). We hypothesize that different network layers have different effects on robust overfitting. To empirically verify this hypothesis, we deliberately fix the parameters of selected network layers, leaving them unoptimized during AT, and observe the behavior of robust overfitting accordingly. Specifically, we consider the ResNet-18 architecture as a composition of 4 main layers, corresponding to its 4 residual blocks. We then train multiple PreAct ResNet-18 networks on CIFAR-10 for 200 epochs using AT, each time selecting a set of network layers to have their parameters fixed; a minimal sketch of this protocol is given below.
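As a concrete sketch of this layer-fixing protocol, the snippet below freezes the parameters of chosen residual blocks and implements the PGD inner maximization of Eq. (1). The torchvision ResNet-18 serves as a stand-in for the PreAct variant (the layer1–layer4 block structure is the same), and the default hyperparameter values follow the CIFAR-10 settings stated in Section 4.1.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Stand-in for PreAct ResNet-18: same layer1-layer4 block structure
model = resnet18(num_classes=10)

def fix_blocks(model, blocks=("layer4",)):
    """AT-fix-param-[...]: the chosen blocks keep their initial parameters
    and receive no gradient updates during adversarial training."""
    for name in blocks:
        for p in getattr(model, name).parameters():
            p.requires_grad_(False)

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """PGD inner maximization of Eq. (1) under the l_inf threat model."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

fix_blocks(model)  # e.g. AT-fix-param-[4]
optimizer = torch.optim.SGD((p for p in model.parameters() if p.requires_grad),
                            lr=0.1, momentum=0.9, weight_decay=5e-4)
```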
The robust test performance in figure 1(a) shows a consistent pattern: robust overfitting is mitigated whenever we fix the parameters of layer 4 during AT, while any setting that does not fix the parameters of layer 4 results in a more severe gap between the best accuracy and the accuracy at the last epoch. For example, for settings such as AT-fix-param-[4], AT-fix-param-[1,4], AT-fix-param-[2,4] and AT-fix-param-[3,4], robust overfitting is significantly reduced. On the other hand, for settings such as AT-fix-param-[1,2], AT-fix-param-[1,3] and AT-fix-param-[2,3], where we fix the parameters of various sets of layers but allow the optimization of layer 4, robust overfitting still widely exists. Even in the extreme case AT-fix-param-[1,2,3], where we fix the first three layers and only allow the optimization of the last layer 4, the gap between the best accuracy and the last accuracy is still obvious. This clearly indicates that the optimization of the latter layers has a strong correlation with the robust overfitting phenomenon. Note that this relationship can be observed across a variety of datasets, model architectures, and threat models (shown in Appendix A), indicating that it is a general property of adversarial training. In many of these settings, robust overfitting is mitigated at the cost of robust accuracy. For example, in AT-fix-param-[3,4], where we leave both layers 3 & 4 unoptimized, robust overfitting practically disappears, but the peak performance is much worse than that of standard AT. Carefully examining the training performance in these settings, shown in figure 1(b), we generally observe that the network's capacity to fit adversarial data is strong when we fix the parameters of the front layers, but gradually weakens as we fix the latter layers. For instance, AT-fix-param-[1] has the highest robust training accuracy, followed by AT-fix-param-[2], AT-fix-param-[3] and AT-fix-param-[4]; AT-fix-param-[1,2,3] has higher training accuracy than AT-fix-param-[2,3,4]. This suggests that fixing the latter layers' parameters regularizes the network better than fixing the front layers' parameters. In the subsequent sections, we introduce methods that specifically regularize the optimization of the latter layers, so as to mitigate robust overfitting without trade-offs in robustness. We compare the impact on robust overfitting of applying such methods to the front layers vs. the latter layers, further highlighting the importance of the latter layers in relation to robust overfitting.
(a) Robust Test Performance; (b) Robust Train Performance
Figure 1: The robust train/test performance of adversarial training with different sets of network layers fixed. AT-fix-param-[1,2] corresponds to fixing the parameters of layers 1 & 2 during AT.

3.2 A PROTOTYPE OF RAT

As witnessed in Section 3.1, the optimization of AT in the latter layers is highly correlated with the existence of robust overfitting. To address this, we propose to train the network on adversarial data with restrictions placed on the optimization of the latter layers, dubbed Robust Adversarial Training (RAT). RAT adopts additional measures to regularize the optimization of the latter layers and ensures that robust overfitting does not occur. The RAT prototype is given in Algorithm 1 and runs as follows. We start with a base adversarial training algorithm A. In Lines 1–3, the inner maximization pass maximizes the loss by creating adversarial examples, and the outer minimization pass then updates the weights by minimizing the loss on the adversarial data. Line 4 initiates a loop through all parts of the weight w, from the front layers to the latter layers. Lines 5–9 then manipulate different parts of the weight based on the layer conditions: if a part of the weight belongs to the front layers (Cfront), it is kept intact; otherwise, a weight update scheme S is applied to the parts of the weight corresponding to the latter layers (Clatter). The role of S is to regularize the latter layers' weights. Finally, the optimizer O updates the model fw in Line 11. Note that RAT is a general prototype in which the layer conditions Cfront, Clatter and the weight adjustment strategy S can be versatile. For example, based on the observations in Section 3.1, we treat the ResNet architecture as a composition of 4 main layers, corresponding to its 4 residual blocks, where Cfront indicates layers 1 & 2 and Clatter indicates layers 3 & 4. S can likewise represent various strategies that serve to regularize the optimization of the latter layers. In the section below, we propose two different strategies S in the implementations of RAT to demonstrate RAT's effectiveness.

3.3 TWO REALIZATIONS OF RAT

In this section, we propose two different methods to place restrictions on the optimization of selected parts of the network, and then investigate the robust overfitting behavior when applying each method to the front layers vs. the latter layers.
These methods showcase a clear relation between the optimization of the latter layers and the robust generalization gap.

Algorithm 1 RAT-prototype (in a mini-batch).
Require: base adversarial training algorithm A, optimizer O, network fw, model parameters w = {w1, w2, ..., wn}, training data D = {(xi, yi)}, mini-batch B, front and latter layer conditions Cfront and Clatter for fw, gradient adjustment strategy S
1: Sample a mini-batch B = {(xi, yi)} from D
2: B′ = A.inner_maximization(fw, B)
3: ∇w ← A.outer_minimization(fw, ℓB′)
4: for i = 1, ..., n do
5:   if Cfront(wi) then
6:     ∇wi ← ∇wi
7:   else if Clatter(wi) then
8:     ∇wi ← S(fw, B′, ∇wi)   # adjust gradient
9:   end if
10: end for
11: O.step(∇w)

RAT through enlarging the learning rate. In standard AT, the sudden increases in robust test performance appear closely related to the scheduled drops in the learning rate. We hypothesize that training AT without learning rate decay is sub-optimal for fitting, which can regularize the learning process of adversarial training. A comparison of the train/test performance between standard AT and AT without learning rate decay (AT-fix-lr-[1,2,3,4]) is shown in figure 2(b). The training performance of standard AT accelerates quickly right after the first learning rate drop, expanding the generalization gap with further training, whereas for AT without learning rate decay, training performance increases slowly and maintains a stable generalization gap. This suggests that AT optimized without learning rate decay has less capacity to fit adversarial data, and thus provides the regularization needed to relieve robust overfitting. As our previous analysis suggests that the optimization of the latter layers is more important in mitigating robust overfitting, we propose using a fixed learning rate of 0.1 for the latter parts of the network while applying the piecewise-decay learning rate to the front parts, in order to close the robust generalization gap. We refer to this approach as a realization of RAT, namely RATLR. Compared to standard AT, RATLR essentially enlarges the weight update step ∇wi along the latter parts of the gradients by a factor of 10 after the first learning rate decay and 100 after the second:
$$\nabla_{w_i} = \eta \nabla_{w_i}, \qquad (2)$$
where η is the amplification coefficient. To demonstrate the effectiveness of RATLR, we train multiple PreAct ResNet-18 networks on CIFAR-10 for 200 epochs using AT, each time selecting a set of network layers to have their learning rate fixed to 0.1 while maintaining the piecewise learning rate schedule for the other layers. Figure 2(a) validates our proposition: robust overfitting is relieved for all settings whose target layers include layer 4 (AT-fix-lr-[4], AT-fix-lr-[1,4], AT-fix-lr-[2,4], etc.), while any setting that fixes the learning rate of layers excluding layer 4 does not reduce robust overfitting. Furthermore, all settings that fix the learning rate for both layers 3 & 4, including AT-fix-lr-[3,4], AT-fix-lr-[1,3,4], AT-fix-lr-[2,3,4] and AT-fix-lr-[1,2,3,4], completely eliminate robust overfitting. These observations verify that regularizing the optimization of the latter layers, by optimizing those layers without learning rate decay, can prevent robust overfitting from occurring. An important observation is that RATLR (AT-fix-lr-[3,4]) can both overcome robust overfitting and achieve better robust test performance than the network using a fixed learning rate for all layers (AT-fix-lr-[1,2,3,4]). Examining the training performance of these two settings in figure 2(c), we find that RATLR exhibits a rapid rise in both robust and standard training performance immediately after the first learning rate decay, similar to standard AT. The training performance of RATLR benefits from the learning rate decay occurring at layers 1 & 2, making a notable improvement compared to AT-fix-lr-[1,2,3,4]. By training layers 3 & 4 without learning rate decay, we specifically restrict the optimization of only the latter parts of the network, which are heavily responsible for robust overfitting; this relieves robust overfitting without sacrificing too much performance. The experimental results provide another indication that the latter layers have stronger connections to robust overfitting than the front layers do, and that regularizing the optimization of the latter layers from the perspective of the learning rate can effectively solve robust overfitting. A minimal implementation sketch of RATLR follows.
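A minimal sketch of RATLR via per-layer parameter groups: the front group follows the piecewise-decay schedule while the latter group keeps a constant learning rate of 0.1. The torchvision ResNet-18 stands in for the PreAct variant; the placement of the stem (conv1/bn1) and classifier (fc) is our assumption, as are the decay epochs (100, 150) in the style of the standard 200-epoch schedule.

```python
import torch
from torchvision.models import resnet18

# Stand-in model: the paper uses a PreAct variant with the same blocks
model = resnet18(num_classes=10)

front = [p for n in ("conv1", "bn1", "layer1", "layer2")
         for p in getattr(model, n).parameters()]
latter = [p for n in ("layer3", "layer4", "fc")
          for p in getattr(model, n).parameters()]

optimizer = torch.optim.SGD(
    [{"params": front},    # group 0: piecewise-decayed learning rate
     {"params": latter}],  # group 1: constant learning rate (RAT_LR)
    lr=0.1, momentum=0.9, weight_decay=5e-4)

def adjust_lr(epoch):
    """Decay only the front group's lr; decay epochs are assumptions."""
    lr = 0.1 * (0.1 if epoch >= 100 else 1.0) * (0.1 if epoch >= 150 else 1.0)
    optimizer.param_groups[0]["lr"] = lr
    optimizer.param_groups[1]["lr"] = 0.1  # layers 3 & 4 never decay
```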
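[Figure 2(a): Robust test performance of all settings.]

RAT through adversarial weight perturbation. We continue to study the impact of different network layers on the robust overfitting phenomenon from the perspective of adversarial weight perturbation (AWP). Wu et al. (2020) proposes AWP as a method to explicitly flatten the weight loss landscape by introducing adversarial perturbations into both inputs and weights during AT:
$$\min_{w} \max_{v \in \mathcal{V}} \sum_i \max_{d(x_i, x_i') \le \epsilon} \ell(f_{w+v}(x_i'), y_i), \qquad (3)$$
where v is the adversarial weight perturbation generated by maximizing the classification loss:
$$v = \nabla_w \sum_i \ell_i. \qquad (4)$$
As AWP keeps injecting worst-case perturbations into the weights during training, it can also be viewed as a means to regularize the optimization of AT. In fact, training with AWP exhibits a negative robust generalization gap, where robust training accuracy falls short of robust testing accuracy by a large margin, as shown in figure 3(c). This indicates that AWP places significant restrictions on the optimization of AT, introducing large trade-offs in training performance. As our previous analysis suggests a strong correlation between robust overfitting and the optimization of the latter layers, we argue that AWP's capacity to mitigate robust overfitting is mostly due to the perturbations occurring at the latter layers' weights. As such, we propose to apply AWP specifically to the latter half of the network, and refer to this method as RATWP. In essence, RATWP computes the adversarial weight perturbation vi under the layer condition Clatter(wi), so that only the parts of the weight along the latter half of the network are perturbed:
$$\min_{w=[w_1,\dots,w_i,\dots,w_n]} \ \max_{v=[0,\dots,v_i,\dots,0] \in \mathcal{V}} \sum_i \max_{d(x_i, x_i') \le \epsilon} \ell(f_{w+v}(x_i'), y_i), \qquad (5)$$
$$v_i = \nabla_{w_i} \sum_i \ell_i. \qquad (6)$$

The following is a simplified single-ascent-step sketch of RATWP per Eqs. (5)–(6): the perturbation is computed and applied only to the latter blocks' weights, the outer gradient is taken at the perturbed point, and the perturbation is removed before the optimizer step. The relative scaling v = γ‖w‖g/‖g‖ follows the spirit of AWP's layer-wise normalization and, like the block names, is an assumption of this sketch; x_adv denotes the PGD-perturbed batch.

```python
import torch
import torch.nn.functional as F

def rat_wp_step(model, optimizer, x_adv, y, gamma=1e-2,
                latter=("layer3", "layer4")):
    """One RAT_WP update: adversarially perturb only the latter layers'
    weights, then update all weights with the gradient taken at w + v."""
    params = [p for name in latter for p in getattr(model, name).parameters()]

    # 1) weight perturbation per Eq. (6), scaled relative to ||w_i||
    loss = F.cross_entropy(model(x_adv), y)
    grads = torch.autograd.grad(loss, params)
    vs = []
    with torch.no_grad():
        for p, g in zip(params, grads):
            v = gamma * p.norm() * g / (g.norm() + 1e-12)
            p.add_(v)
            vs.append(v)

    # 2) outer-minimization gradient evaluated at the perturbed weights
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()

    # 3) restore the weights, then step with the perturbed gradient
    with torch.no_grad():
        for p, v in zip(params, vs):
            p.sub_(v)
    optimizer.step()
```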
To prove the effectiveness of RATWP, we train multiple PreAct ResNet-18 networks on CIFAR-10 for 200 epochs using AT, each time selecting a set of network layers to have their weights locally perturbed using AWP.

[Figure 3(a): Robust test performance of all settings.]

As seen from figure 3(a), only 3 settings overcome robust overfitting, namely AT-AWP-[3,4], AT-AWP-[1,3,4] and AT-AWP-[2,3,4]. These settings share one key similarity: in all of them, both layers 3 & 4 have their weights adversarially perturbed during AT. Simply applying AWP to any set of layers that excludes layers 3 & 4 is not sufficient to eliminate robust overfitting. This shows that AWP is effective in solving robust overfitting only when applied to both layer 3 and layer 4. Even when AWP is applied to the first 3 of the 4 layers (AT-AWP-[1,2,3]), robust overfitting still widely exists. In other words, it is essential for the adversarial weight perturbations to occur at the latter part of the network in order to mitigate robust overfitting. To examine this phenomenon in detail, we compare the training performance of AWP applied to the front layers (represented by AT-AWP-[1,2,3]) vs. AWP applied to the latter layers (represented by AT-AWP-[3,4]), shown in figure 3(b). AWP applied to the front layers yields much better training performance than AWP applied to the latter layers. Furthermore, AWP applied to the front layers reveals a positive robust generalization gap (training accuracy > testing accuracy) shortly after the first drop in learning rate, which continues to widen with further training. Conversely, AWP applied to the latter layers exhibits a negative robust generalization gap throughout most of the training, only converging to 0 after the second drop in learning rate. These differences demonstrate that worst-case perturbations, when injected into the latter layers' weights, have a more powerful effect in regularizing the optimization of AT. Consistent with our previous findings, AWP applied to the latter layers can be considered an approach to regularizing the optimization of AT in those layers, which successfully mitigates robust overfitting. This finding supports our analysis thus far, further demonstrating that regularizing the optimization of the latter layers is key to improving robust generalization.

4 EXPERIMENT

In this section, we conduct extensive experiments to verify the effectiveness of RATLR and RATWP. Details of the experimental settings and performance evaluation are introduced below.

4.1 EXPERIMENTAL SETUP

We conduct extensive experiments on the two realizations of RAT across three benchmark datasets (CIFAR10 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011) and CIFAR100 (Krizhevsky et al., 2009)) and two threat models (L∞ and L2). We use PreAct ResNet-18 (He et al., 2016) and Wide ResNet-34-10, following the same hyperparameter settings for AT as in Rice et al.
(2020): for the L∞ threat model, ϵ = 8/255, with step size 1/255 for SVHN and 2/255 for CIFAR-10 and CIFAR-100; for the L2 threat model, ϵ = 128/255, with step size 15/255 for all datasets. For training, all models are trained under a 10-step PGD (PGD-10) attack for 200 epochs using SGD with momentum 0.9, weight decay 5 × 10−4, and a piecewise learning rate schedule with an initial learning rate of 0.1. RAT models are decomposed into a series of 4 main layers, corresponding to the 4 residual blocks of the ResNet architecture. For RATLR, the learning rate of layers 3 & 4 is set to a fixed value of 0.1. For RATWP, which leverages AWP in layers 3 & 4, γ = 1 × 10−2. For testing, robust accuracy is evaluated under two different adversarial attacks: 20-step PGD (PGD-20) and Auto Attack (AA) (Croce & Hein, 2020b). Auto Attack is considered the most reliable robustness evaluation to date; it is an ensemble of complementary attacks consisting of three white-box attacks (APGD-CE (Croce & Hein, 2020b), APGD-DLR (Croce & Hein, 2020b), and FAB (Croce & Hein, 2020a)) and a black-box attack (Square Attack (Andriushchenko et al., 2020)).

4.2 PERFORMANCE EVALUATION

In this section, we present the experimental results of RATLR and RATWP across the three benchmark datasets.

CIFAR10 Results. The evaluation results on the CIFAR10 dataset are summarized in Table 1, where "Best" is the highest test robustness achieved during training, "Last" is the test robustness at the last-epoch checkpoint, and "Diff" denotes the robust accuracy gap between "Best" and "Last". We observe that RATWP generally achieves the best robust performance compared to RATLR and standard AT. Regardless, both RATLR and RATWP tighten the robustness gaps by a significant margin, indicating that they effectively suppress robust overfitting.

CIFAR100 Results. We also show the results on the CIFAR100 dataset in Table 2. We observe performance similar to CIFAR10, where both RATLR and RATWP are able to significantly reduce the robustness gaps. For robustness improvement, RATWP stands out as the leading method. The results further verify the effectiveness of the proposed approach.

SVHN Results. Finally, we summarize the results on the SVHN dataset in Table 3, where the robustness gap is also narrowed to a small margin by RATWP. The SVHN dataset is a special case where the RATLR strategy does not mitigate robust overfitting. Unlike CIFAR10 and CIFAR100, learning rate decay in SVHN's training has little connection to the sudden increases in robust test performance or the prevalence of robust overfitting, which makes RATLR ineffective. Other than this, the improvement in robust generalization gaps can be witnessed in all cases, demonstrating that the proposed approaches are generic and can be applied widely.

5 CONCLUSION

In this paper, we investigate the effects of different network layers on robust overfitting and identify that robust overfitting is mainly driven by the optimization occurring at the latter layers. Following this, we propose a robust adversarial training (RAT) prototype that specifically restrains the optimization of the latter layers during adversarial training. The approach prevents the model from overfitting in the latter parts of the network, which effectively eliminates robust overfitting of the network as a whole. We then demonstrate two implementations of RAT: one locally uses a fixed learning rate for the latter layers, and the other utilizes adversarial weight perturbation for the latter layers.
Extensive experiments show the effectiveness of both approaches, suggesting that RAT is generic and can be applied across different network architectures, threat models and benchmark datasets to solve robust overfitting.

A MORE EVIDENCE FOR THE LAYER-WISE PROPERTIES OF ROBUST OVERFITTING

In this section, we provide more empirical experiments to showcase the layer-wise properties of robust overfitting across different datasets, model architectures and threat models. Specifically, we use the two strategies mentioned in Section 3.3 to restrict the optimization of different network layers. We can always observe that there is no robust overfitting when we regularize the optimization of layers 3 and 4 (the latter layers), while robust overfitting is prevalent in the other settings. This evidence further highlights the strong relation between robust overfitting and the optimization of the latter layers.

A.1 EVIDENCE ACROSS DATASETS

We show that the layer-wise properties of robust overfitting are universal across datasets, using CIFAR-100 and SVHN. We adversarially train PreAct ResNet-18 under the l∞ threat model on the different datasets with the same settings as in Section 3.3. The results are shown in Figures 4 and 5. Note that for SVHN, the regularization strategy utilizing a fixed learning rate (RATLR) does not mitigate robust overfitting (Figure 4). Unlike CIFAR-10 and CIFAR-100, SVHN's training overfits well before the first learning rate decay. Moreover, learning rate decay in SVHN's training has no relation to the sudden increases in robust test performance or the appearance of robust overfitting. Hence, the SVHN dataset is a special case where RATLR does not apply. In all other cases, robust overfitting is effectively eliminated by regularizing the optimization of layers 3 and 4.

A.2 EVIDENCE ACROSS THREAT MODELS

We further demonstrate the generality of the layer-wise properties of robust overfitting by conducting experiments under the l2 threat model across datasets. The settings are the same as in Section 3.3. The results are shown in Figures 6 and 7. Under the l2 threat model, except for the SVHN dataset where the fixed-learning-rate strategy (RATLR) does not apply, robust overfitting is effectively eliminated by regularizing the optimization of layers 3 and 4.
Figure 6: Robust test performance of adversarial training using a fixed learning rate for different sets of network layers, across datasets ((a) CIFAR-10, (b) CIFAR-100, (c) SVHN) under the l2 threat model.

Figure 7: Robust test performance of adversarial training applying AWP to different sets of network layers, across datasets ((a) CIFAR-10, (b) CIFAR-100, (c) SVHN) under the l2 threat model.
1. What are the main contributions and findings of the paper regarding the layerwise properties of robustness and overfitting? 2. What are the strengths of the paper in terms of its writing quality, analysis, and performance improvement? 3. What are the weaknesses of the paper regarding comparisons with state-of-the-art methods? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. What is the purpose of the question asked by the reviewer in Section 3.1, and how does it relate to the paper's overall contributions?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This work studies the intriguing layer-wise properties of robust overfitting and reveals that adversarial overfitting typically happens in the deep layers of networks. Based on the observations, two regularizations are applied to the latter layers to tackle the overfitting. Extensive experiments with different datasets, attack methods, and model architectures verify the proposed method. Strengths And Weaknesses Pros: This paper is well-written and easy to follow. An in-depth analysis is performed to support the hypothesis. Significant performance improvement in many settings. Cons: It would be better if the authors could offer some comparison with SOTA methods. Question: In Section 3.1, for AT-fix-param-[XXX], does fixing mean the parameters are fixed at their random initialization? And do you still update the fc layer? Clarity, Quality, Novelty And Reproducibility Clarity: Good. Quality: Good. Novelty: Good. Reproducibility: Fair. (Releasing code can further improve reproducibility)
ICLR
Title On Intriguing Layer-Wise Properties of Robust Overfitting in Adversarial Training Abstract Adversarial training has proven to be one of the most effective methods to defend against adversarial attacks. Nevertheless, robust overfitting is a common obstacle in adversarial training of deep networks. There is a common belief that the features learned by different network layers have different properties, however, existing works generally investigate robust overfitting by considering a DNN as a single unit and hence the impact of different network layers on robust overfitting remains unclear. In this work, we divide a DNN into a series of layers and investigate the effect of different network layers on robust overfitting. We find that different layers exhibit distinct properties towards robust overfitting, and in particular, robust overfitting is mostly related to the optimization of latter parts of the network. Based upon the observed effect, we propose a robust adversarial training (RAT) prototype: in a minibatch, we optimize the front parts of the network as usual, and adopt additional measures to regularize the optimization of the latter parts. Based on the prototype, we designed two realizations of RAT, and extensive experiments demonstrate that RAT can eliminate robust overfitting and boost adversarial robustness over the standard adversarial training. 1 INTRODUCTION Deep neural networks (DNNs) have been widely applied in multiple fields, such as computer vision (He et al., 2016) and natural language processing (Devlin et al., 2018). Despite its achieved success, recent studies show that DNNs are vulnerable to adversarial examples. Well-constructed perturbations on the input images that are imperceptible to human’s eyes can make DNNs lead to a completely different prediction (Szegedy et al., 2013). The security concern due to this weakness of DNNs has led to various works in the study of improving DNNs robustness against adversarial examples. Across existing defense techniques thus far, Adversarial Training (AT) (Goodfellow et al., 2014; Madry et al., 2017), which optimizes DNNs with adversarially perturbed data instead of natural data, is the most effective approach (Athalye et al., 2018). However, it has been shown that networks trained by AT technique do not generalize well (Rice et al., 2020). After a certain point in AT, immediately after the first learning rate decay, the robust test accuracy continues to decrease with further training. Typical regularization practices to mitigate overfitting such as l1 & l2 regularization, weight decay, data augmentation, etc. are reported to be as inefficient compared to simple early stopping (Rice et al., 2020). Many studies have attempted to improve the robust generalization gap in AT, and most have generally investigated robust overfitting by considering DNNs as whole. However, DNNs trained on natural images exhibit a common phenomenon: features obtained in the first layers appear to be general and applicable widespread, while features computed by the last layers are dependent on a particular dataset and task (Yosinski et al., 2014). Such behavior of DNNs sparks a question: Do different layers contribute differently to robust overfitting? Intuitively, robust overfitting acts as an unexpected optimization state in adversarial training, and its occurrence may be closely related to the entire network. Nevertheless, the unique effect of different network layers on robust overfitting is still unclear. 
Without a detailed understanding of the layer-wise mechanism of robust overfitting, it is difficult to completely demystify the exact underlying cause of the robust overfitting phenomenon. In this paper, we provide the first layer-wise diagnosis of robust overfitting. Specifically, instead of considering the network as a whole, we treat the network as a composition of layers and systematically investigate the impact of different layers on the robust overfitting phenomenon. To do this, we first fix the parameters of the selected layers, leaving them unoptimized during AT, and then normally optimize the other layers' parameters. We discovered that robust overfitting is always mitigated when the latter layers are left unoptimized, while applying the same treatment to the other layers has little effect on robust overfitting, suggesting a strong connection between the optimization of the latter layers and the overfitting phenomenon. Based on the observed effect, we propose a robust adversarial training (RAT) prototype to relieve the issue of robust overfitting. Specifically, RAT works in each mini-batch: it optimizes the front layers as usual, and for the latter layers, it implements additional measures on these parameters to regularize their optimization. It is a general adversarial training prototype, where the front and latter network layers can be separated by some simple test experiments, and the implementation of the additional measures to regularize network layer optimization can be versatile. For instance, we design two representative realizations of RAT: RAT_LR and RAT_WP. They adopt different strategies to hinder the weight update, i.e., enlarging the learning rate and perturbing the weights, respectively. Extensive experiments show that the proposed RAT prototype effectively eliminates robust overfitting.

The contributions of this work are summarized as follows:
• We provide the first diagnosis of robust overfitting on different network layers, and find that there is a strong connection between the optimization of the latter layers and the robust overfitting phenomenon.
• Based on the observed properties of robust overfitting, we propose the RAT prototype, which adopts additional measures to regularize the optimization of the latter layers and is tailored to prevent robust overfitting.
• We design two different realizations of RAT, with extensive experiments on a number of standard benchmarks verifying their effectiveness.

2 RELATED WORK
2.1 ADVERSARIAL TRAINING
Since the discovery of adversarial examples, many defensive methods have attempted to improve DNN robustness against such adversaries, including adversarial training (Madry et al., 2017), defensive distillation (Papernot et al., 2016), input denoising (Liao et al., 2018), and gradient regularization (Tramèr et al., 2018). So far, adversarial training (Madry et al., 2017) has proven to be the most effective method. Adversarial training comprises two optimization problems: the inner maximization and the outer minimization. The first constructs adversarial examples by maximizing the loss, and the second updates the weights by minimizing the loss on the adversarial data. Here, f_w is the DNN classifier with weight w, and ℓ(·) is the loss function. d(·, ·) specifies the distance between the original input x_i and the adversarial input x'_i, which is usually measured within an l_p-norm ball such as the l_2 and l_∞ balls, and ϵ is the maximum perturbation allowed.
\ell_{AT}(w) = \min_w \sum_i \max_{d(x_i, x'_i) \le \epsilon} \ell(f_w(x'_i), y_i). \quad (1)

2.2 ROBUST GENERALIZATION
An interesting characteristic of deep neural networks (DNNs) is their ability to generalize well in practice (Belkin et al., 2019). In the standard training setting, it is observed that the test loss continues to decrease over long periods of training (Nakkiran et al., 2020); thus the common practice is to train DNNs for as long as possible. However, this is no longer the case in adversarial training, which exhibits overfitting behavior the longer the training proceeds (Rice et al., 2020). This phenomenon has been referred to as "robust overfitting" and has shown strong resistance to standard regularization techniques such as l1 and l2 regularization and data augmentation methods (Rice et al., 2020). Schmidt et al. (2018) theorize that robust generalization has a large sample complexity, requiring a substantially larger dataset. Many subsequent works have empirically validated this claim, such as AT with semi-supervised learning (Carmon et al., 2019; Zhai et al., 2019), robust local features (Song et al., 2020) and data interpolation (Lee et al., 2020; Chen et al., 2021). Chen et al. (2020) propose to combine smoothing the logits via self-training and smoothing the weights via stochastic weight averaging to mitigate robust overfitting. Wu et al. (2020) emphasize the connection between the weight loss landscape and the robust generalization gap, and suggest injecting adversarial perturbations into both inputs and weights during AT to regularize the flatness of the weight loss landscape. The intriguing property of robust overfitting has motivated a great amount of study and investigation, but current works typically approach the phenomenon by considering a DNN as a whole. In contrast, our work treats a DNN as a series of layers and reveals a strong connection between robust overfitting and the optimization of the latter layers, providing a novel perspective for better understanding the phenomenon.

3 INTRIGUING PROPERTIES OF ROBUST OVERFITTING
In this section, we first investigate the layer-wise properties of robust overfitting by fixing model parameters in AT (Section 3.1). Based on our observations, we then propose a robust adversarial training (RAT) prototype to eliminate robust overfitting (Section 3.2). Finally, we design two different realizations of RAT to verify the effectiveness of the proposed method (Section 3.3).

3.1 LAYER-WISE ANALYSIS OF ROBUST OVERFITTING
Current works usually study the robust overfitting phenomenon by considering the network as a single unit. However, features computed by different layers exhibit different properties; for instance, first-layer features are general while last-layer features are specific (Yosinski et al., 2014). We hypothesize that different network layers have different effects on robust overfitting. To empirically verify this hypothesis, we deliberately fix the parameters of selected network layers, leaving them unoptimized during AT, and observe the behavior of robust overfitting accordingly. Specifically, we consider the ResNet-18 architecture as a composition of 4 main layers, corresponding to its 4 residual blocks. We then train multiple PreAct ResNet-18 networks on CIFAR-10 for 200 epochs using AT, each time selecting a set of network layers to have their parameters fixed. The robust test performance in Figure 1(a) shows a consistent pattern.
Robust overfitting is mitigated whenever we fix the parameters of layer 4 during AT, while any setting that does not fix the parameters of layer 4 results in a more severe gap between the best accuracy and the accuracy at the last epoch. For example, in settings such as AT-fix-param-[4], AT-fix-param-[1,4], AT-fix-param-[2,4] and AT-fix-param-[3,4], robust overfitting is significantly reduced. On the other hand, in settings such as AT-fix-param-[1,2], AT-fix-param-[1,3] and AT-fix-param-[2,3], where we fix the parameters of various sets of layers but allow the optimization of layer 4, robust overfitting still widely exists. Even in an extreme case like AT-fix-param-[1,2,3], where we fix the first three layers and only allow the optimization of the last layer 4, the gap between the best accuracy and the last accuracy is still obvious. This clearly indicates that the optimization of the latter layers presents a strong correlation with the robust overfitting phenomenon. Note that this relationship can be observed across a variety of datasets, model architectures, and threat models (shown in Appendix A), indicating that it is a general property of adversarial training. In many of these settings, robust overfitting is mitigated at the cost of robust accuracy. For example, in AT-fix-param-[3,4], if we leave both layers 3 & 4 unoptimized, robust overfitting practically disappears, but the peak performance is much worse than that of standard AT. When carefully examining the training performance of these settings, shown in Figure 1(b), we generally observe that the network's capacity to fit adversarial data is strong when we fix the parameters of the front layers, but it gradually weakens as we fix the latter layers. For instance, AT-fix-param-[1] has the highest robust training accuracy, followed by AT-fix-param-[2], AT-fix-param-[3] and AT-fix-param-[4]; AT-fix-param-[1,2,3] has higher training accuracy than AT-fix-param-[2,3,4]. This suggests that fixing the latter layers' parameters regularizes the network better than fixing the front layers' parameters. In the subsequent sections, we will introduce methods that specifically regularize the optimization of the latter layers, so as to mitigate robust overfitting without trade-offs in robustness. We will compare the impact on robust overfitting when applying such methods to the front layers vs. the latter layers, further highlighting the importance of the latter layers in relation to robust overfitting.
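To make the setup concrete, the following is a minimal sketch of PGD-based adversarial training (the inner maximization / outer minimization of Eq. (1)) with selected residual blocks frozen, as in the AT-fix-param-[...] settings. It assumes a torchvision-style model exposing blocks layer1-layer4 and inputs scaled to [0, 1]; the helper names and hyperparameters are illustrative, not the authors' released code.

```python
# Sketch only: PGD adversarial training with selected residual blocks frozen.
# Assumes a torchvision-style ResNet (blocks model.layer1..model.layer4) and
# inputs in [0, 1]; names and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization of Eq. (1): l_inf PGD around the clean input x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def freeze_blocks(model, block_ids):
    """Fix the parameters of the selected residual blocks (AT-fix-param-[...])."""
    for i in block_ids:
        for p in getattr(model, f"layer{i}").parameters():
            p.requires_grad_(False)

def at_step(model, optimizer, x, y):
    """Outer minimization of Eq. (1); frozen blocks receive no update."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

For example, calling freeze_blocks(model, [3, 4]) before the training loop would reproduce the AT-fix-param-[3,4] setting.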
[Figure 1: The robust train/test performance of adversarial training with different sets of network layers fixed; panel (a) shows robust test performance and panel (b) robust train performance. AT-fix-param-[1,2] corresponds to fixing the parameters of layers 1 & 2 during AT.]

3.2 A PROTOTYPE OF RAT
As witnessed in Section 3.1, the optimization of AT in the latter layers is highly correlated with the existence of robust overfitting. To address this, we propose to train the network on adversarial data with some restrictions placed on the optimization of the latter layers, dubbed Robust Adversarial Training (RAT). RAT adopts additional measures to regularize the optimization of the latter layers and ensures that robust overfitting will not occur. The RAT prototype is given in Algorithm 1.

Algorithm 1 RAT-prototype (in a mini-batch).
Require: base adversarial training algorithm A, optimizer O, network f_w, model parameters w = {w_1, w_2, ..., w_n}, training data D = {(x_i, y_i)}, mini-batch B, front and latter layer conditions C_front and C_latter for f_w, gradient adjustment strategy S
1: Sample a mini-batch B = {(x_i, y_i)} from D
2: B' = A.inner_maximization(f_w, B)
3: ∇w ← A.outer_minimization(f_w, ℓ_B')
4: for i = 1, ..., n do
5:   if C_front(w_i) then
6:     ∇w_i ← ∇w_i
7:   else if C_latter(w_i) then
8:     ∇w_i ← S(f_w, B', ∇w_i)  # adjust gradient
9:   end if
10: end for
11: O.step(∇w)

The prototype runs as follows. We start with a base adversarial training algorithm A. In Lines 1-3, the inner maximization pass maximizes the loss by creating adversarial examples, and the outer minimization pass then updates the weights by minimizing the loss on the adversarial data. Line 4 initiates a loop through all parts of the weight w from the front layers to the latter layers. Lines 5-9 then manipulate different parts of the weight based on their layer conditions: if a part of the weight belongs to the front layers (C_front), it is kept intact; otherwise, a weight update scheme S is applied to the parts of the weight corresponding to the latter layers (C_latter). The role of S is to apply some regularization to the latter layers' weights. Finally, the optimizer O updates the model f_w in Line 11. Note that RAT is a general prototype where the layer conditions C_front and C_latter and the weight adjustment strategy S can be versatile. For example, based on the observations in Section 3.1, we treat the ResNet architecture as a composition of 4 main layers, corresponding to 4 residual blocks, where C_front indicates layers 1 & 2 and C_latter indicates layers 3 & 4. S can likewise represent various strategies that serve to regularize the optimization of the latter layers; a code-level sketch of the prototype follows, and the sections below propose two concrete strategies S to demonstrate RAT's effectiveness.
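Before turning to the two realizations, here is a minimal PyTorch-style sketch of one RAT mini-batch step mirroring Algorithm 1. It reuses the pgd_attack helper from the earlier sketch as the base algorithm A; the partition into front/latter parameters and the adjust_grad callback standing in for S are illustrative placeholders rather than the authors' implementation.

```python
# Sketch only: one RAT mini-batch step mirroring Algorithm 1.
import torch.nn.functional as F

def rat_step(model, optimizer, x, y, latter_params, adjust_grad):
    x_adv = pgd_attack(model, x, y)            # Line 2: inner maximization
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)    # Line 3: loss on adversarial data
    loss.backward()
    latter = {id(p) for p in latter_params}
    for p in model.parameters():               # Lines 4-10: walk all weight parts
        if p.grad is None:
            continue
        if id(p) in latter:                    # C_latter: apply strategy S
            p.grad = adjust_grad(p.grad)
        # C_front: gradients of the front layers are kept intact
    optimizer.step()                           # Line 11: optimizer update
    return loss.item()

# One possible strategy S, in the spirit of RAT_LR's amplification (Eq. 2):
def make_amplifier(eta):
    return lambda g: eta * g

# Example wiring (layers 3 & 4 as the latter part):
# latter = list(model.layer3.parameters()) + list(model.layer4.parameters())
# rat_step(model, optimizer, x, y, latter, make_amplifier(10.0))
```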
3.3 TWO REALIZATIONS OF RAT
In this section, we propose two different methods to put certain restrictions on the optimization of selected parts of the network, and then investigate the robust overfitting behavior upon applying each method to the front layers vs. the latter layers. Both methods showcase a clear relation between the optimization of the latter layers and the robust generalization gap.

RAT through enlarging the learning rate. In standard AT, the sudden increases in robust test performance appear to be closely related to the drops in the scheduled learning rate decay. We hypothesize that training AT without learning rate decays is sub-optimal for fitting the training data, and that this sub-optimality can regularize the learning process of adversarial training. A comparison of the train/test performance between standard AT and AT without learning rate decay (AT-fix-lr-[1,2,3,4]) is shown in Figure 2(b). The training performance of standard AT accelerates quickly right after the first learning rate drop, expanding the generalization gap with further training, whereas for AT without learning rate decay, the training performance increases slowly and maintains a stable generalization gap. This suggests that AT optimized without learning rate decay has less capacity to fit adversarial data, and thus provides the regularization needed to relieve robust overfitting. As our previous analysis suggests that the optimization of the latter layers is more important in mitigating robust overfitting, we propose using a fixed learning rate of 0.1 for optimizing the latter parts of the network while applying the piecewise-decay learning rate to the former parts, so as to close the robust generalization gap. We refer to this approach as a realization of RAT, namely RAT_LR. Compared to standard AT, RAT_LR essentially enlarges the weight update step ∇w_i along the latter parts of the gradients by a factor of 10 at the first learning rate decay and 100 at the second decay:

\nabla_{w_i} \leftarrow \eta \nabla_{w_i}, \quad (2)

where η is the amplification coefficient. To demonstrate the effectiveness of RAT_LR, we train multiple PreAct ResNet-18 networks on CIFAR-10 for 200 epochs using AT, each time selecting a set of network layers to have their learning rate fixed to 0.1 while maintaining the piecewise learning rate schedule for the other layers. Figure 2(a) validates our proposition. Robust overfitting is relieved in all settings whose target layers include layer 4 (AT-fix-lr-[4], AT-fix-lr-[1,4], AT-fix-lr-[2,4], etc.), while any setting that fixes the learning rate of layers excluding layer 4 does not reduce robust overfitting. Furthermore, all settings that fix the learning rate for both layers 3 & 4, including AT-fix-lr-[3,4], AT-fix-lr-[1,3,4], AT-fix-lr-[2,3,4] and AT-fix-lr-[1,2,3,4], completely eliminate robust overfitting. These observations verify that regularizing the optimization of the latter layers, by optimizing those layers without learning rate decays, can prevent robust overfitting from occurring. An important observation is that RAT_LR (AT-fix-lr-[3,4]) can both overcome robust overfitting and achieve better robust test performance than the network using a fixed learning rate for all layers (AT-fix-lr-[1,2,3,4]).
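In implementation terms, RAT_LR amounts to giving the front and latter blocks separate optimizer parameter groups with different learning-rate schedules. A minimal sketch follows; the block split, the placement of the final classifier in the latter group, and the decay milestones (epochs 100 and 150, factor 10 each) are assumptions for illustration, not the authors' exact configuration.

```python
# Sketch only: per-layer learning rates for RAT_LR.
# Stem BatchNorm and similar small modules are omitted for brevity.
import torch

def build_ratlr_optimizer(model, lr=0.1, momentum=0.9, weight_decay=5e-4):
    front = (list(model.conv1.parameters())
             + list(model.layer1.parameters())
             + list(model.layer2.parameters()))
    latter = (list(model.layer3.parameters())
              + list(model.layer4.parameters())
              + list(model.fc.parameters()))  # classifier placement: assumption
    # Group 0: piecewise-decayed schedule; group 1: fixed lr = 0.1 throughout.
    return torch.optim.SGD([{"params": front}, {"params": latter}],
                           lr=lr, momentum=momentum, weight_decay=weight_decay)

def adjust_lr(optimizer, epoch, base_lr=0.1, milestones=(100, 150)):
    """Apply the piecewise decay to the front group only."""
    factor = 10 ** sum(epoch >= m for m in milestones)
    optimizer.param_groups[0]["lr"] = base_lr / factor  # front: decayed
    optimizer.param_groups[1]["lr"] = base_lr           # latter: fixed
```

Equivalently, as Eq. (2) notes, one can keep a single decayed schedule and multiply the latter layers' gradients by η = 10 or 100 after the first and second decay, respectively.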
Examining the training performance of RAT_LR and AT-fix-lr-[1,2,3,4] in Figure 2(c), we find that RAT_LR exhibits a rapid rise in both robust and standard training performance immediately after the first learning rate decay, similar to standard AT. The training performance of RAT_LR is able to benefit from the learning rate decay occurring at layers 1 & 2, making a notable improvement over AT-fix-lr-[1,2,3,4]. By training layers 3 & 4 without learning rate decays, we specifically restrict the optimization of only the latter parts of the network, which are heavily responsible for robust overfitting; this relieves robust overfitting without sacrificing too much performance. The experimental results provide another indication that the latter layers have stronger connections to robust overfitting than the front layers do, and that regularizing the optimization of the latter layers from the perspective of the learning rate can effectively solve robust overfitting.

[Figure 2(a): Robust test performance of all AT-fix-lr settings.]

RAT through adversarial weight perturbation. We continue to study the impact of different network layers on the robust overfitting phenomenon from the perspective of adversarial weight perturbation (AWP). Wu et al. (2020) propose AWP as a method to explicitly flatten the weight loss landscape by introducing adversarial perturbations into both inputs and weights during AT:

\min_w \max_{v \in V} \sum_i \max_{d(x_i, x'_i) \le \epsilon} \ell(f_{w+v}(x'_i), y_i), \quad (3)

where v is the adversarial weight perturbation generated by maximizing the classification loss:

v = \nabla_w \sum_i \ell_i. \quad (4)

As AWP keeps injecting worst-case perturbations into the weights during training, it can also be viewed as a means to regularize the optimization of AT. In fact, the training of AWP exhibits a negative robust generalization gap, where the robust training accuracy falls short of the robust testing accuracy by a large margin, as shown in Figure 3(c). This indicates that AWP places significant restrictions on the optimization of AT, introducing large trade-offs in training performance. As our previous analysis suggests a strong correlation between robust overfitting and the optimization of the latter layers, we argue that AWP's capacity to mitigate robust overfitting is mostly due to the perturbations occurring on the latter layers' weights. As such, we propose to specifically apply AWP to the latter half of the network, and refer to this method as RAT_WP. In essence, RAT_WP computes the adversarial weight perturbation v_i only under the layer condition C_latter(w_i), so that only the parts of the weight along the latter half of the network are perturbed:

\min_{w=[w_1, \dots, w_i, \dots, w_n]} \max_{v=[0, \dots, v_i, \dots, 0] \in V} \sum_i \max_{d(x_i, x'_i) \le \epsilon} \ell(f_{w+v}(x'_i), y_i), \quad (5)

v_i = \nabla_{w_i} \sum_i \ell_i. \quad (6)
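A minimal sketch of the RAT_WP step is given below, reusing pgd_attack from the earlier sketch. A single ascent step with layer-wise norm scaling stands in for the full AWP inner loop of Wu et al. (2020); γ = 1e-2 matches the value used in Section 4.1, and everything else is an illustrative assumption.

```python
# Sketch only: AWP restricted to the latter layers (Eqs. 5-6).
import torch
import torch.nn.functional as F

def ratwp_step(model, optimizer, x, y, latter_params, gamma=1e-2):
    x_adv = pgd_attack(model, x, y)
    # 1) Ascend: perturb only the latter layers' weights (Eq. 6).
    loss = F.cross_entropy(model(x_adv), y)
    grads = torch.autograd.grad(loss, latter_params)
    perturbs = []
    with torch.no_grad():
        for p, g in zip(latter_params, grads):
            v = gamma * p.norm() * g / (g.norm() + 1e-12)  # layer-wise scaling
            p.add_(v)
            perturbs.append(v)
    # 2) Descend: gradients of the perturbed model on adversarial data.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    # 3) Restore the weights, then apply the update (outer min of Eq. 5).
    with torch.no_grad():
        for p, v in zip(latter_params, perturbs):
            p.sub_(v)
    optimizer.step()
```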
To prove the effectiveness of RAT_WP, we train multiple PreAct ResNet-18 networks on CIFAR-10 for 200 epochs using AT, each time selecting a set of network layers to have their weights locally perturbed using AWP.

[Figure 3(a): Robust test performance of all AT-AWP settings.]

As seen in Figure 3(a), there are only 3 settings that can overcome robust overfitting, namely AT-AWP-[3,4], AT-AWP-[1,3,4] and AT-AWP-[2,3,4]. These settings share one key similarity: both layers 3 & 4 have their weights adversarially perturbed during AT. Simply applying AWP to any set of layers that excludes layers 3 & 4 is not sufficient to eliminate robust overfitting. This shows that AWP is effective in solving robust overfitting only when applied to both layer 3 and layer 4. Even when AWP is applied to the first three of the four layers (AT-AWP-[1,2,3]), robust overfitting still widely exists. In other words, it is essential for the adversarial weight perturbations to occur at the latter part of the network in order to mitigate robust overfitting. To examine this phenomenon in detail, we compare the training performance of AWP applied to the front layers (represented by AT-AWP-[1,2,3]) vs. AWP applied to the latter layers (represented by AT-AWP-[3,4]), shown in Figure 3(b). AWP applied to the front layers has much better training performance than AWP applied to the latter layers. Furthermore, AWP applied to the front layers reveals a positive robust generalization gap (training accuracy > testing accuracy) shortly after the first drop in learning rate, which continues to widen with further training. Conversely, AWP applied to the latter layers exhibits a negative robust generalization gap throughout most of the training, only converging to 0 after the second drop in learning rate. These differences demonstrate that worst-case perturbations, when injected into the latter layers' weights, have a more powerful impact in regularizing the optimization of AT. Consistent with our previous findings, AWP applied to the latter layers can be considered an approach to regularize the optimization of AT in those layers, which successfully mitigates robust overfitting. This finding supports our analysis thus far, further demonstrating that regularizing the optimization of the latter layers is key to improving robust generalization.

4 EXPERIMENT
In this section, we conduct extensive experiments to verify the effectiveness of RAT_LR and RAT_WP. Details of the experiment settings and performance evaluation are introduced below.

4.1 EXPERIMENTAL SETUP
We conduct extensive experiments on the two realizations of RAT across three benchmark datasets (CIFAR-10 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011) and CIFAR-100 (Krizhevsky et al., 2009)) and two threat models (L∞ and L2). We use PreAct ResNet-18 (He et al., 2016) and Wide ResNet-34-10, following the same hyperparameter settings for AT as in Rice et al.
(2020): for the L∞ threat model, ϵ = 8/255, with step size 1/255 for SVHN and 2/255 for CIFAR-10 and CIFAR-100; for the L2 threat model, ϵ = 128/255, with step size 15/255 for all datasets. For training, all models are trained under a 10-step PGD (PGD-10) attack for 200 epochs using SGD with momentum 0.9, weight decay 5 × 10−4, and a piecewise learning rate schedule with an initial learning rate of 0.1. RAT models are decomposed into a series of 4 main layers, corresponding to the 4 residual blocks of the ResNet architecture. For RAT_LR, the learning rate for layers 3 & 4 is set to a fixed value of 0.1. For RAT_WP, which leverages AWP in layers 3 & 4, γ = 1 × 10−2. For testing, the robust accuracy is evaluated under two different adversarial attacks, 20-step PGD (PGD-20) and AutoAttack (AA) (Croce & Hein, 2020b). AutoAttack is considered the most reliable robustness evaluation to date; it is an ensemble of complementary attacks, consisting of three white-box attacks (APGD-CE (Croce & Hein, 2020b), APGD-DLR (Croce & Hein, 2020b), and FAB (Croce & Hein, 2020a)) and a black-box attack (Square Attack (Andriushchenko et al., 2020)).

4.2 PERFORMANCE EVALUATION
In this section, we present the experimental results of RAT_LR and RAT_WP across the three benchmark datasets.

CIFAR-10 results. The evaluation results on the CIFAR-10 dataset are summarized in Table 1, where "Best" is the highest test robustness achieved during training, "Last" is the test robustness at the last epoch checkpoint, and "Diff" denotes the robust accuracy gap between "Best" and "Last". We observe that RAT_WP generally achieves the best robust performance compared to RAT_LR and standard AT. Regardless, both RAT_LR and RAT_WP tighten the robustness gaps by a significant margin, indicating that they can effectively suppress robust overfitting.

CIFAR-100 results. We also show the results on the CIFAR-100 dataset in Table 2. We observe performance similar to CIFAR-10, where both RAT_LR and RAT_WP are able to significantly reduce the robustness gaps. For robustness improvement, RAT_WP stands out as the leading method. The results further verify the effectiveness of the proposed approach.

SVHN results. Finally, we summarize the results on the SVHN dataset in Table 3, where the robustness gaps are also narrowed down to a small margin by RAT_WP. The SVHN dataset is a special case where the RAT_LR strategy does not improve robust overfitting. Unlike CIFAR-10 and CIFAR-100, the learning rate decay in SVHN's training does not have much connection to the sudden increases in robust test performance or the prevalence of robust overfitting, and hence RAT_LR is ineffective here. Other than this, the improvement in robust generalization gaps can be witnessed in all cases, demonstrating that the proposed approaches are generic and can be applied widely.

5 CONCLUSION
In this paper, we investigate the effects of different network layers on robust overfitting and identify that robust overfitting is mainly driven by the optimization occurring at the latter layers. Following this, we propose a robust adversarial training (RAT) prototype to specifically hinder the optimization of the latter layers during adversarial training. The approach prevents the model from overfitting at the latter parts of the network, which effectively eliminates robust overfitting of the network as a whole. We further demonstrate two implementations of RAT: one locally uses a fixed learning rate for the latter layers, and the other utilizes adversarial weight perturbation for the latter layers.
Extensive experiments show the effectiveness of both approaches, suggesting that RAT is generic and can be applied across different network architectures, threat models and benchmark datasets to solve robust overfitting.

A MORE EVIDENCE FOR THE LAYER-WISE PROPERTIES OF ROBUST OVERFITTING
In this section, we provide more empirical experiments to showcase the layer-wise properties of robust overfitting across different datasets, model architectures and threat models. Specifically, we use the two strategies introduced in Section 3.3 to put restrictions on the optimization of different network layers. We consistently observe that there is no robust overfitting when we regularize the optimization of layers 3 and 4 (the latter layers), while robust overfitting is prevalent in the other settings. This evidence further highlights the strong relation between robust overfitting and the optimization of the latter layers.

A.1 EVIDENCE ACROSS DATASETS
We show that the layer-wise properties of robust overfitting are universal across datasets, using CIFAR-100 and SVHN. We adversarially train PreAct ResNet-18 under the l∞ threat model on the different datasets with the same settings as Section 3.3. The results are shown in Figures 4 and 5. Note that for SVHN, the regularization strategy utilizing a fixed learning rate (RAT_LR) does not improve robust overfitting (Figure 4). Unlike CIFAR-10 and CIFAR-100, SVHN's training overfits well before the first learning rate decay, and the learning rate decay in SVHN's training bears no relation to the sudden increases in robust test performance or the appearance of robust overfitting. Hence, SVHN is a special case where RAT_LR does not apply. In all other cases, robust overfitting is effectively eliminated by regularizing the optimization of layers 3 and 4.

A.2 EVIDENCE ACROSS THREAT MODELS
We further demonstrate the generality of the layer-wise properties of robust overfitting by conducting experiments under the l2 threat model across datasets. The settings are the same as Section 3.3. The results are shown in Figures 6 and 7. Under the l2 threat model, except for the SVHN dataset where the fixed-learning-rate strategy (RAT_LR) does not apply, robust overfitting is effectively eliminated by regularizing the optimization of layers 3 and 4.
[Figure 6: Robust test performance of adversarial training using a fixed learning rate for different sets of network layers, across datasets under the l2 threat model; panels (a) CIFAR-10, (b) CIFAR-100, (c) SVHN.]

[Figure 7: Robust test performance of adversarial training applying AWP to different sets of network layers, across datasets under the l2 threat model; panels (a) CIFAR-10, (b) CIFAR-100, (c) SVHN.]
1. What is the focus of the paper regarding adversarial training and deep neural networks?
2. What are the strengths and weaknesses of the proposed methods RAT-LR and RAT-AWP?
3. How does the reviewer assess the contribution and novelty of the techniques presented in the paper?
4. Are there any concerns regarding the empirical study presented in the paper?
5. How does the reviewer evaluate the effectiveness of AWP-based regularization in mitigating robust overfitting?
6. Do you have any suggestions for improving the paper's content or presentation?
Summary Of The Paper
In this work, the authors study the phenomenon of robust overfitting during adversarial training (AT) of deep neural networks, specifically the deviations that arise due to optimization at different layers of these networks. The paper demonstrates that if the deeper layers of the network are frozen or optimized with a lower learning rate during AT, robust overfitting is significantly reduced, albeit accompanied by a reduction in test robustness as well, in contrast to what is seen when the same is performed with the earlier layers. A similar phenomenon is seen with the Adversarial Weight Perturbation (AWP) regularizer applied to different layers, where it is seen that the latter layers contribute more to robust overfitting.

Strengths And Weaknesses
Strengths:
- The paper is well-written, and presents a sequence of empirical evaluations in a clear, concise manner to help understand the effect of the optimization of different layers on robust overfitting.
- The proposed methods RAT-LR and RAT-AWP are fairly simple: RAT-LR applies a fixed learning rate to the deeper layers, while RAT-AWP in a similar vein applies AWP solely to the deeper layers, to impose additional regularization on the latter part of the network alone.

Weaknesses:
- The contributions and novelty of the techniques presented are fairly limited, given that the two primary methods proposed are to either keep the learning rate fixed or apply AWP for the latter layers, instead of applying the same to the entire network as a whole. Furthermore, the deeper layers/blocks are known to have many more parameters (especially for ResNet based models), thus it is not entirely surprising that the latter layers contribute more to overfitting. This aspect could have been analyzed in much more detail in the paper, to study the effect of depth vs. parameter count on robust overfitting.
- The analysis presented is also fairly focused on the very specific case of step-decay of the learning rate, when in practice several recent works on robust defenses utilize other learning rate schedules such as cyclic, cosine, annealed cosine, etc. Furthermore, the effect of the magnitude of the learning rate drop, the spacing of the drop points, the effect on deeper layers, only the FC layer, etc. could have been studied. Since the paper does not present theoretical viewpoints of the problem (which is certainly fine on its own), the extent of the empirical study could have been significantly broadened to better understand robust overfitting in slightly more generalized settings.
- This is further reinforced given the fact that AWP based regularization on its own is seen to drastically reduce robust overfitting, and appears to be the best performing method as seen from Figures 3(a) and 3(c). Thus the contributions of the paper become slightly unclear in this setting, over and above AWP based training alone.
- Given the excellent performance of AWP-AT and its effect in mitigating robust overfitting, the baseline AWP method needs to be included in the main empirical evaluations as presented in Tables 1-2, since it is well known that it far outperforms the standard AT baseline presented. From the original AWP paper, WideResNet-34-10 models trained on CIFAR-10 with AWP-AT achieve 54.04% and with AWP-TRADES 56.17% AutoAttack accuracy, while the proposed RAT-AWP-AT achieves 54.46%. Thus, improvements in robust performance, or noteworthy contributions over the AWP method, are not clearly seen.
- Further, the clean accuracy of the models needs to be reported alongside the robust accuracies in the same tables, due to the well-known robustness-accuracy tradeoff. It is otherwise difficult to judge whether a given method with slightly higher robust accuracy is inherently better, if it is accompanied by a large decrease in clean accuracy.

Clarity, Quality, Novelty And Reproducibility
The paper is presented overall in a clear, concise manner. The contributions and novelty are however fairly limited, given that the primary methods proposed are limited to training with a fixed learning rate or the application of AWP for the latter layers alone.
ICLR
Title On Intriguing Layer-Wise Properties of Robust Overfitting in Adversarial Training Abstract Adversarial training has proven to be one of the most effective methods to defend against adversarial attacks. Nevertheless, robust overfitting is a common obstacle in adversarial training of deep networks. There is a common belief that the features learned by different network layers have different properties, however, existing works generally investigate robust overfitting by considering a DNN as a single unit and hence the impact of different network layers on robust overfitting remains unclear. In this work, we divide a DNN into a series of layers and investigate the effect of different network layers on robust overfitting. We find that different layers exhibit distinct properties towards robust overfitting, and in particular, robust overfitting is mostly related to the optimization of latter parts of the network. Based upon the observed effect, we propose a robust adversarial training (RAT) prototype: in a minibatch, we optimize the front parts of the network as usual, and adopt additional measures to regularize the optimization of the latter parts. Based on the prototype, we designed two realizations of RAT, and extensive experiments demonstrate that RAT can eliminate robust overfitting and boost adversarial robustness over the standard adversarial training. 1 INTRODUCTION Deep neural networks (DNNs) have been widely applied in multiple fields, such as computer vision (He et al., 2016) and natural language processing (Devlin et al., 2018). Despite its achieved success, recent studies show that DNNs are vulnerable to adversarial examples. Well-constructed perturbations on the input images that are imperceptible to human’s eyes can make DNNs lead to a completely different prediction (Szegedy et al., 2013). The security concern due to this weakness of DNNs has led to various works in the study of improving DNNs robustness against adversarial examples. Across existing defense techniques thus far, Adversarial Training (AT) (Goodfellow et al., 2014; Madry et al., 2017), which optimizes DNNs with adversarially perturbed data instead of natural data, is the most effective approach (Athalye et al., 2018). However, it has been shown that networks trained by AT technique do not generalize well (Rice et al., 2020). After a certain point in AT, immediately after the first learning rate decay, the robust test accuracy continues to decrease with further training. Typical regularization practices to mitigate overfitting such as l1 & l2 regularization, weight decay, data augmentation, etc. are reported to be as inefficient compared to simple early stopping (Rice et al., 2020). Many studies have attempted to improve the robust generalization gap in AT, and most have generally investigated robust overfitting by considering DNNs as whole. However, DNNs trained on natural images exhibit a common phenomenon: features obtained in the first layers appear to be general and applicable widespread, while features computed by the last layers are dependent on a particular dataset and task (Yosinski et al., 2014). Such behavior of DNNs sparks a question: Do different layers contribute differently to robust overfitting? Intuitively, robust overfitting acts as an unexpected optimization state in adversarial training, and its occurrence may be closely related to the entire network. Nevertheless, the unique effect of different network layers on robust overfitting is still unclear. 
Without a detailed understanding of the layer-wise mechanism of robust overfitting, it is difficult to completely demystify the exact underlying cause of the robust overfitting phenomenon. In this paper, we provide the first layer-wise diagnosis of robust overfitting. Specifically, instead of considering the network as a whole, we treat the network as a composition of layers and sys- tematically investigate the impact of robust overfitting phenomenon on different layers. To do this, we first fix the parameters for the selected layers, leaving them unoptimized during AT, and then normally optimize other layer parameters. We discovered that robust overfitting is always mitigated in the case where the latter layers are left unoptimized, and applying the same effect to other layers is futile for robust overfitting, suggesting a strong connection between the optimization of the latter layers and the overfitting phenomenon. Based upon the observed effect, we propose a robust adversarial training (RAT) prototype to relieve the issue of robust overfitting. Specifically, RAT works in each mini-batch: it optimizes the front layers as usual, and for the latter layers, it implements additional measures on these parameters to regularize their optimization. It is a general adversarial training prototype, where the front and latter network layers can be separated by some simple test experiments, and the implementation of additional measures to regularize network layer optimization can be versatile. For instance, we designed two representative methods for the realizations of RAT: RATLR and RATWP. They adopt different strategies to hinder weight update, e.g., enlarging the learning rate and weight perturbation, respectively. Extensive experiments show that the proposed RAT prototype effectively eliminates robust overfitting. The contributions of this work are summarized as follows: • We provide the first diagnosis of robust overfitting on different network layers, and find that there is a strong connection between the optimization of the latter layers and the robust overfitting phenomenon. • Based on the observed properties of robust overfitting, we propose the RAT prototype, which adopts additional measures to regularize the optimization of the latter layers and is tailored to prevent robust overfitting. • We design two different realizations of RAT, with extensive experiments on a number of standard benchmarks, verifying its effectiveness. 2 RELATED WORK 2.1 ADVERSARIAL TRAINING Since the discovery of adversarial examples, there have been many defensive methods attempted to improve the DNN’s robustness against such adversaries, such as adversarial training (Madry et al., 2017), defense distillation (Papernot et al., 2016), input denoising (Liao et al., 2018), gradient regularization (Tramèr et al., 2018). So far, adversarial training (Madry et al., 2017) has proven to be the most effective method. Adversarial training comprises two optimization problems: the inner maximization and outer minimization. The first one constructs adversarial examples by maximizing the loss and the second updates the weight by minimizing the loss on adversarial data. Here, fw is the DNN classifier with weight w, and ℓ(·) is the loss function. d(., .) specify the distance between original input data xi and adversarial data x′i, which is usually an lp-norm ball such as the l2 and l∞-norm balls and ϵ is the maximum perturbation allowed. 
ℓAT(w) = min w ∑ i max d(xi,x′i)≤ϵ ℓ(fw(x ′ i), yi), (1) 2.2 ROBUST GENERALIZATION An interesting characteristic of deep neutral networks (DNNs) is their ability to generalize well in practice (Belkin et al., 2019). For the standard training setting, it is observed that test loss continues to decrease for long periods of training (Nakkiran et al., 2020), thus the common practice is to train DNNs for as long as possible. However, this is no longer the case in adversarial training, which exhibits overfitting behavior the longer the training process (Rice et al., 2020). This phenomenon has been referred to as ”robust overfitting” and has shown strong resistance to standard regularization techniques such as l1, l2 regularization and data augmentation methods. (Rice et al., 2020) Schmidt et al. (2018) theorizes that robust generalization have a large sample complexity, which requires substantially larger dataset. Many subsequent works have empirically validated such claim, such as AT with semi-supervised learning (Carmon et al., 2019; Zhai et al., 2019), robust local feature (Song et al., 2020) and data interpolation (Lee et al., 2020; Chen et al., 2021). (Chen et al., 2020) proposes to combine smoothing the logits via self-training and smoothing the weight via stochastic weight averaging to mitigate robust overfitting. Wu et al. (2020) emphasizes the connection of weight loss landscape and robust generalization gap, and suggests injecting the adversarial perturbations into both inputs and weights during AT to regularize the flatness of weight loss landscape. The intriguing property of robust overfitting has motivated great amount of study and investigation, but current works typically approach the phenomenon considering a DNN as a whole. In contrast, our work treats a DNN as a series of layers and reveals a strong connection between robust overfitting and the optimization of the latter layers, providing a novel perspective into better understanding the phenomenon. 3 INTRIGUING PROPERTIES OF ROBUST OVERFITTING In this section, we first investigate the layer-wise properties of robust overfitting by fixing model parameters in AT (Section 3.1). Based on our observations, we further propose a robust adversarial training (RAT) prototype to eliminate robust overfitting (Section 3.2). Finally, we design two different realizations for RAT to verify the effectiveness of the proposed method (Section 3.3). 3.1 LAYER-WISE ANALYSIS OF ROBUST OVERFITTING Current works usually study the robust overfitting phenomenon considering the network as a single unit. However, features computed by different layers exhibit different properties, such as first-layer features are general and last-layer features are specific (Yosinski et al., 2014). We hypothesize that different network layers have different effects on robust overfitting. To empirically verify the above hypothesis, we deliberately fix the parameters of the selected network layers, leaving them unoptimized during AT and observe the behavior of robust overfitting accordingly. Specifically, we considered ResNet-18 architecture as a composition of 4 main layers, corresponding to 4 Residual blocks. We then train multiple PreAct ResNet-18 networks on CIFAR-10 for 200 epochs using AT, each time selecting a set of network layers to have their parameter fixed. The robust test performance in figure 1(a) shows a consistent pattern. 
Robust overfitting is mitigated whenever we fix the parameters for layer 4 during AT, while any settings that do not fix the parameters for layer 4 result in a more severe gap between the best accuracy and the accuracy at the last epoch. For example, for settings such as AT-fix-param-[4], AT-fix-param-[1,4], AT-fix-param-[2,4] and AT-fix-param-[3,4], robust overfitting is significantly reduced. On the other hand, for settings such AT-fix-param-[1,2], AT-fix-param-[1,3] and AT-fix-param-[2,3], when we fix the parameters of various set of layers but allow for the optimization of layer 4, robust overfitting still widely exists. For extreme case like AT-fix-param-[1,2,3], where we fix the first three front layers and only allow for the optimization of that last layer 4, the gap between the best accuracy and the last accuracy is still obvious. This clearly indicates that the optimization of the latter layers present a strong correlation to the robust overfitting phenomenon. Note that this relationship can be observed across a variety of datasets, model architectures, and threat models (shown in Appendix A), indicating that it is a general property in adversarial training. In many of these settings, robust overfitting is mitigated at the cost of robust accuracy. For example in AT-fix-param-[3,4], if we leave both layer 3 & 4 unoptimized, robust overfitting will practically disappear, but the peak performance is much worse compared to standard AT. When carefully examining the training performance in these settings shown in figure 1(b), we generally observe that the network capacity to fit adversarial data is strong when we fix the parameters for the front layers, but it gradually gets weaker as we try to fix the latter layers. For instance, AT-fix-param-[1] has the highest train robust accuracy, then comes AT-fix-param[2], AT-fix-param[3] and AT-fix-param[4]; AT-fix-param[1,2,3] has higher training accuracy than AT-fix-param[2,3,4]. This suggests fixing the latter layers’ parameters can regularize the network better compared to fixing the front layers’s parameters. In the subsequent sections, we will introduce methods that specifically regularize the optimization of the latter layers, so as to mitigate robust overfitting without tradeoffs in robustness. We will compare the impact on robust overfitting when applied such methods on the front layers vs the latter layers, further highlighting the importance of the latter layers in relation to robust overfitting. 
0 50 100 150 2000.1 0.2 0.3 0.4 0.5 AT_standard AT_fix_param_[1] AT_fix_param_[2] AT_fix_param_[3] AT_fix_param_[4] 0 50 100 150 2000.1 0.2 0.3 0.4 0.5 AT_standard AT_fix_param_[1, 2] AT_fix_param_[1, 3] AT_fix_param_[1, 4] 0 50 100 150 2000.1 0.2 0.3 0.4 0.5 AT_standard AT_fix_param_[2, 3] AT_fix_param_[2, 4] AT_fix_param_[3, 4] 0 50 100 150 2000.1 0.2 0.3 0.4 0.5 AT_standard AT_fix_param_[1, 2, 3] AT_fix_param_[1, 2, 4] AT_fix_param_[1, 3, 4] AT_fix_param_[2, 3, 4] Epochs Te st R ob us t A cc ur ac y (a) Robust Test Performance 0 50 100 150 2000.2 0.3 0.4 0.5 0.6 0.7 0.8 AT_standard AT_fix_param_[1] AT_fix_param_[2] AT_fix_param_[3] AT_fix_param_[4] 0 50 100 150 2000.2 0.3 0.4 0.5 0.6 0.7 0.8 AT_standard AT_fix_param_[1, 2] AT_fix_param_[1, 3] AT_fix_param_[1, 4] 0 50 100 150 2000.2 0.3 0.4 0.5 0.6 0.7 0.8 AT_standardAT_fix_param_[2, 3] AT_fix_param_[2, 4] AT_fix_param_[3, 4] 0 50 100 150 2000.2 0.3 0.4 0.5 0.6 0.7 0.8 AT_standardAT_fix_param_[1, 2, 3] AT_fix_param_[1, 2, 4] AT_fix_param_[1, 3, 4] AT_fix_param_[2, 3, 4] Epochs Tr ai n Ro bu st A cc ur ac y (b) Robust Train Performance Figure 1: The robust train/test performance of adversarial training with different sets of network layers fixed. AT-fix-param[1,2] corresponds to fixing the parameters of layers 1 & 2 during AT 3.2 A PROTOTYPE OF RAT As witnessed in Section 3.1, the optimization of AT in the latter layers is highly correlated to the existence of robust overfitting. To address this, we propose to train the network on adversarial data with some restrictions put onto the optimization of the latter layers, dubbed as Robust Adversarial Training (RAT). RAT adopts additional measures to regularize the optimization of the latter layers, and ensures that robust overfitting will not occur. The RAT prototype is given in Algorithm 1. It runs as follows. We start with a base adversarial training algorithm A. In Line 1-3, The inner maximization pass aims to maximize the loss via creating adversarial examples, and then the outer minimization pass updates the weight by minimizing the loss on adversarial data. Line 4 initiates a loop through all parts of the weight w from the front layers to the latter layers. Line 5-9 then manipulate different parts of the weight based on its layer conditions. If the parts of the weight belong to the front layers (Cfront), they will be kept intact. Otherwise, a weight update scheme S is put onto the parts of the weight corresponding to the latter layers (Clatter). The role of S is to apply some regularization on the latter layers’ weight. Finally, the optimizer O updates the model fw in Line 11. Note that RAT is a general prototype where layer conditions Cfront, Clatter and weight adjustment strategy S can be versatile. For example, based on the observations in Section 3.1, we treat the Res-Net architecture as a composition of 4 main layers, corresponding to 4 residual blocks, where Cfront indicates layer 1 & 2 and Clatter indicates layer 3 & 4. S can also represent various strategies that serves to regularize the optimization of the latter layers. In the section below, we will propose two different strategies S in the implementations of RAT to demonstrate RAT’s effectiveness. 3.3 TWO REALIZATIONS OF RAT In this section, we will propose two different methods to put certain restrictions on the optimization of selected parts of the network, and then investigate the robust overfitting behavior upon applying such method to the front layers vs the latter layers. 
These methods showcase a clear relation between the optimization of the latter layers and robust generalization gap. RAT through enlarging learning rate. In standard AT, the sudden increases in robust test performance appears to be closely related to the drops in the scheduled learning rate decay. We hypothesize Algorithm 1 RAT-prototype (in a mini-batch). Require: base adversarial training algorithm A, optimizer O, network fw, model parameter w = {w1, w2, ..., wn}, training dataD = {(xi, yi)}, mini-batch B, front and latter layer conditions Cfront and Clatter for fw, gradient adjustment strategy S 1: Sample a mini-batch B = {(xi, yi)} from D 2: B′ = A.inner maximization(fw,B) 3: ∇w ← A.outer minimization(fw, ℓB′) 4: for i = 1, ..., n do 5: if Cfront(wi) then 6: ∇wi ← ∇wi 7: else if Clatter(wi) then 8: ∇wi ← S(fw,B′,∇wi) # adjust gradient 9: end if 10: end for 11: O.step(∇w) that training AT without learning rate decays is sub-optimal, which can regularize the learning process of adversarial training. Comparison of the train/test performance between standard AT and AT without learning rate decay (AT-fix-lr-[1,2,3,4]) are shown in figure 2(b). Training performance of standard AT accelerates quickly right after the first learning rate drop, expanding the generalization gap with further training, whereas for AT without learning rate decay, training performance increases slowly and maintain a stable generalization gap. This suggests that AT optimized without learning rate decay has less capacity to fit adversarial data, and thus provides the regularization needed to relieve robust overfitting. As our previous analysis suggests that the optimization of the latter layers is more important in mitigating robust overfitting, we propose using a fixed learning rate = 0.1 for optimizing the latter parts of the network while applying the piecewise decay learning rate for the former parts to close the robust generalization gap. We refer to this approach as a realization of RAT, namely RATLR. Compared to standard AT, RATLR essentially enlarge the weight update step ∇wi along the latter parts of the gradients by 10 at the first learning rate decay and 100 at the second decay. ∇wi = η∇wi , (2) where η is the amplification coefficient. To demonstrate the effectiveness of RATLR, we train multiple PreAct ResNet-18 networks on CIFAR-10 for 200 epochs using AT, each time selecting a set of network layers to have their learning rate fixed to 0.1 while maintaining the piece-wise learning rate schedule for other layers. Figure 2(a) validate our proposition. Robust overfitting is relieved for all settings that target layers that include layer 4 (AT-fix-lr-[4], AT-fix-lr-[1,4], AT-fix-lr-[2,4], etc.) while any settings that fix the learning rate of layers that exclude layer 4 do not reduce robust overfitting. Furthermore, all settings that fix the learning rate for both layer 3 & 4, including AT-fix-lr-[3,4], AT-fix-lr-[1,3,4], AT-fix-lr-[2,3,4] AT-fix-lr-[1,2,3,4] completely eliminate robust overfitting. The observations verify that regularizing the optimization of the latter layers by optimizing those layers without learning rate decays can prevent robust overfitting from occurring. An important observation is that RATLR (AT-fix-lr-[3,4]) can both overcome robust overfitting and achieve better robust test performance compared to the network using a fixed learning rate for all layers (AT-fix-lr-[1,2,3,4]). 
Examining the training performance between these two settings in Figure 2(c), we find that RAT-LR exhibits a rapid rise in both robust and standard training performance immediately after the first learning rate decay, similar to standard AT. The training performance of RAT-LR is able to benefit from the learning rate decay occurring at layers 1 & 2, making a notable improvement compared to AT-fix-lr-[1,2,3,4]. By training layers 3 & 4 without learning rate decays, we specifically place restrictions on the optimization of only the latter parts of the network, which are heavily responsible for robust overfitting; this relieves robust overfitting without sacrificing too much performance. The experimental results provide another indication that the latter layers have stronger connections to robust overfitting than the front layers do, and that regularizing the optimization of the latter layers from the perspective of the learning rate can effectively solve robust overfitting.

[Figure 2(a): Robust test performance of all settings.]

RAT through adversarial weight perturbation. We continue to study the impact of different network layers on the robust overfitting phenomenon from the perspective of adversarial weight perturbation (AWP). Wu et al. (2020) propose AWP as a method to explicitly flatten the weight loss landscape by introducing adversarial perturbations into both inputs and weights during AT:

$\min_w \max_{v \in \mathcal{V}} \sum_i \max_{d(x_i, x_i') \le \epsilon} \ell(f_{w+v}(x_i'), y_i)$, (3)

where v is the adversarial weight perturbation generated by maximizing the classification loss:

$v = \nabla_w \sum_i \ell_i$. (4)

As AWP keeps injecting worst-case perturbations into the weights during training, it can also be viewed as a means to regularize the optimization of AT. In fact, the training of AWP exhibits a negative robust generalization gap, where robust training accuracy falls short of robust testing accuracy by a large margin, as shown in Figure 3(c). This indicates that AWP puts significant restrictions on the optimization of AT, introducing large trade-offs in training performance. As our previous analysis suggests a strong correlation between robust overfitting and the optimization of the latter layers, we argue that AWP's capacity to mitigate robust overfitting is mostly thanks to the perturbations occurring on the latter layers' weights. As such, we propose to specifically apply AWP to the latter half of the network, and refer to this method as RAT-WP. In essence, RAT-WP computes the adversarial weight perturbation vi under the layer condition Clatter(wi), so that only the parts of the weight along the latter half of the network are perturbed:

$\min_{w=[w_1,\ldots,w_i,\ldots,w_n]} \max_{v=[0,\ldots,v_i,\ldots,0] \in \mathcal{V}} \sum_i \max_{d(x_i, x_i') \le \epsilon} \ell(f_{w+v}(x_i'), y_i)$, (5)

$v_i = \nabla_{w_i} \sum_i \ell_i$. (6)
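A simplified sketch of the RAT-WP update (Eqs. 5-6) is shown below. The per-layer scaling of the perturbation by the weight norm is our approximation of AWP's relative scaling, not necessarily the authors' exact implementation; `is_latter_layer` encodes Clatter.

```python
import torch

def rat_wp_step(model, loss_fn, optimizer, x_adv, y,
                is_latter_layer, gamma=1e-2):
    params = [(n, p) for n, p in model.named_parameters() if p.requires_grad]

    # Eq. (6): gradient of the adversarial loss w.r.t. the weights.
    loss = loss_fn(model(x_adv), y)
    grads = torch.autograd.grad(loss, [p for _, p in params])

    # Perturb only the latter layers' weights in the ascent direction.
    perturbation = {}
    with torch.no_grad():
        for (name, p), g in zip(params, grads):
            if is_latter_layer(name):
                v = gamma * p.norm() * g / (g.norm() + 1e-12)
                p.add_(v)
                perturbation[name] = v

    # Eq. (5): outer minimization evaluated at the perturbed weights.
    optimizer.zero_grad()
    loss_fn(model(x_adv), y).backward()

    # Remove the perturbation before taking the actual update step.
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in perturbation:
                p.sub_(perturbation[name])
    optimizer.step()
```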
To prove the effectiveness of RAT-WP, we train multiple PreAct ResNet-18 networks on CIFAR-10 for 200 epochs using AT, each time selecting a set of network layers to have their weights locally perturbed using AWP.

[Figure 3(a): Robust test performance of all settings.]

As seen from Figure 3(a), there are only 3 settings that can overcome robust overfitting, namely AT-AWP-[3,4], AT-AWP-[1,3,4] and AT-AWP-[2,3,4]. These settings share one key similarity: both layers 3 & 4 have their weights adversarially perturbed during AT. Simply applying AWP to any set of layers that excludes layers 3 & 4 is not sufficient to eliminate robust overfitting. This shows that AWP is effective in solving robust overfitting only when applied to both layer 3 and layer 4. Even when AWP is applied to the first 3 of the 4 layers (AT-AWP-[1,2,3]), robust overfitting still widely exists. In other words, it is essential for the adversarial weight perturbations to occur at the latter part of the network in order to mitigate robust overfitting.

To examine this phenomenon in detail, we compare the training performance of AWP applied to the front layers (represented by AT-AWP-[1,2,3]) vs. AWP applied to the latter layers (represented by AT-AWP-[3,4]), shown in Figure 3(b). AWP applied to the front layers has a much better training performance than AWP applied to the latter layers. Furthermore, AWP applied to the front layers reveals a positive robust generalization gap (training accuracy > testing accuracy) shortly after the first drop in learning rate, which continues to widen with further training. Conversely, AWP applied to the latter layers exhibits a negative robust generalization gap throughout most of the training, only converging to 0 after the second drop in learning rate. These differences demonstrate that worst-case perturbations, when injected into the latter layers' weights, have a more powerful impact in regularizing the optimization of AT. Consistent with our previous findings, AWP applied to the latter layers can be considered an approach to regularize the optimization of AT in those layers, which successfully mitigates robust overfitting. This finding supports our analysis thus far, further demonstrating that regularizing the optimization of the latter layers is key to improving robust generalization.

4 EXPERIMENT

In this section, we conduct extensive experiments to verify the effectiveness of RAT-LR and RAT-WP. Details of the experimental settings and performance evaluation are introduced below.

4.1 EXPERIMENTAL SETUP

We conduct extensive experiments on the two realizations of RAT across three benchmark datasets (CIFAR-10 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011) and CIFAR-100 (Krizhevsky et al., 2009)) and two threat models (L∞ and L2). We use PreAct ResNet-18 (He et al., 2016) and Wide ResNet-34-10, following the same hyperparameter settings for AT as in Rice et al.
(2020): for the L∞ threat model, ϵ = 8/255, with step size 1/255 for SVHN and 2/255 for CIFAR-10 and CIFAR-100; for the L2 threat model, ϵ = 128/255, with step size 15/255 for all datasets. For training, all models are trained under the 10-step PGD (PGD-10) attack for 200 epochs using SGD with momentum 0.9, weight decay 5 × 10−4, and a piecewise learning rate schedule with an initial learning rate of 0.1. RAT models are decomposed into a series of 4 main layers, corresponding to the 4 residual blocks of the ResNet architecture. For RAT-LR, the learning rate for layers 3 & 4 is set to a fixed value of 0.1. For RAT-WP, which leverages AWP in layers 3 & 4, γ = 1 × 10−2. For testing, robust accuracy is evaluated under two different adversarial attacks: 20-step PGD (PGD-20) and AutoAttack (AA) (Croce & Hein, 2020b). AutoAttack is considered the most reliable robustness evaluation to date; it is an ensemble of complementary attacks, consisting of three white-box attacks (APGD-CE (Croce & Hein, 2020b), APGD-DLR (Croce & Hein, 2020b), and FAB (Croce & Hein, 2020a)) and a black-box attack (Square Attack (Andriushchenko et al., 2020)).
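For reference, an AutoAttack evaluation of this kind can be run with the public `autoattack` package (https://github.com/fra31/auto-attack); the batch size below is illustrative.

```python
from autoattack import AutoAttack

def evaluate_autoattack(model, x_test, y_test, eps=8/255):
    # The 'standard' version runs APGD-CE, APGD-DLR, FAB and Square Attack.
    adversary = AutoAttack(model, norm='Linf', eps=eps, version='standard')
    x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)
    return x_adv
```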
4.2 PERFORMANCE EVALUATION

In this section, we present the experimental results of RAT-LR and RAT-WP across the three benchmark datasets.

CIFAR-10 Results. The evaluation results on the CIFAR-10 dataset are summarized in Table 1, where "Best" is the highest test robustness achieved during training, "Last" is the test robustness at the last epoch checkpoint, and "Diff" denotes the robust accuracy gap between "Best" and "Last". We observe that RAT-WP generally achieves the best robust performance compared to RAT-LR and standard AT. Regardless, both RAT-LR and RAT-WP tighten the robustness gaps by a significant margin, indicating that they can effectively suppress robust overfitting.

CIFAR-100 Results. We also show the results on the CIFAR-100 dataset in Table 2. We observe performance similar to CIFAR-10, where both RAT-LR and RAT-WP are able to significantly reduce the robustness gaps. For robustness improvement, RAT-WP stands out as the leading method. The results further verify the effectiveness of the proposed approach.

SVHN Results. Finally, we summarize the results on the SVHN dataset in Table 3, where the robustness gaps are also narrowed down to a small margin by RAT-WP. SVHN is a special case where the RAT-LR strategy does not improve robust overfitting. Unlike CIFAR-10 and CIFAR-100, learning rate decay in SVHN's training does not have much connection to the sudden increases in robust test performance or the prevalence of robust overfitting, which makes RAT-LR ineffective. Other than this, the improvement in robust generalization gaps can be witnessed in all cases, demonstrating that the proposed approaches are generic and can be applied widely.

5 CONCLUSION

In this paper, we investigate the effects of different network layers on robust overfitting and identify that robust overfitting is mainly driven by the optimization occurring at the latter layers. Following this, we propose a robust adversarial training (RAT) prototype to specifically hinder the optimization of the latter layers in the process of adversarial training. The approach prevents the model from overfitting via the latter parts of the network, which effectively eliminates robust overfitting of the network as a whole. We then further demonstrate two implementations of RAT: one locally uses a fixed learning rate for the latter layers and the other utilizes adversarial weight perturbation for the latter layers. Extensive experiments show the effectiveness of both approaches, suggesting that RAT is generic and can be applied across different network architectures, threat models and benchmark datasets to solve robust overfitting.

A MORE EVIDENCE FOR THE LAYER-WISE PROPERTIES OF ROBUST OVERFITTING

In this section, we provide more empirical experiments to showcase the layer-wise properties of robust overfitting across different datasets, model architectures and threat models. Specifically, we use the two strategies mentioned in Section 3.3 to place restrictions on the optimization of different network layers. We consistently observe that there is no robust overfitting when we regularize the optimization of layers 3 and 4 (the latter layers), while robust overfitting is prevalent in the other settings. This evidence further highlights the strong relation between robust overfitting and the optimization of the latter layers.

A.1 EVIDENCE ACROSS DATASETS

We show that the layer-wise properties of robust overfitting are universal across datasets, using CIFAR-100 and SVHN. We adversarially train PreAct ResNet-18 under the l∞ threat model on the different datasets with the same settings as in Section 3.3. The results are shown in Figures 4 and 5. Note that for SVHN, the regularization strategy utilizing a fixed learning rate (RAT-LR) does not improve robust overfitting (Figure 4). Unlike CIFAR-10 and CIFAR-100, SVHN's training overfits well before the first learning rate decay. Also, learning rate decay in SVHN's training does not have any relation to the sudden increases in robust test performance or the appearance of robust overfitting. Hence, SVHN is a special case where RAT-LR does not apply. In all other cases, robust overfitting is effectively eliminated by regularizing the optimization of layers 3 and 4.

A.2 EVIDENCE ACROSS THREAT MODELS

We further demonstrate the generality of the layer-wise properties of robust overfitting by conducting experiments under the l2 threat model across datasets. The settings are the same as in Section 3.3. The results are shown in Figures 6 and 7. Under the l2 threat model, except for the SVHN dataset where the regularization strategy utilizing a fixed learning rate (RAT-LR) does not apply, robust overfitting is effectively eliminated by regularizing the optimization of layers 3 and 4.
[Figure 6: Robust test performance of adversarial training using a fixed learning rate for different sets of network layers, across datasets (CIFAR-10, CIFAR-100 and SVHN) under the l2 threat model. Panels: (a) CIFAR-10; (b) CIFAR-100; (c) SVHN.]

[Figure 7: Robust test performance of adversarial training applying AWP for different sets of network layers, across datasets (CIFAR-10, CIFAR-100 and SVHN) under the l2 threat model. Panels: (a) CIFAR-10; (b) CIFAR-100; (c) SVHN.]
1. What is the focus of the paper regarding adversarial training and DNN layers?
2. What are the strengths and weaknesses of the paper's findings and proposed techniques?
3. Are there any concerns regarding the paper's connection to prior works, specifically [1], and its analysis?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What questions does the reviewer have regarding the paper's conclusions and proposed methods?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper investigates how different DNN layers impact the performance of adversarial training (AT). The ablation experiments reveal that the latter (deeper) layers are more influential on the robust generalization gap and the final performance. Two techniques are then proposed to improve the performance of the standard AT method via an adaptive learning rate or adaptive weight perturbation.

Strengths And Weaknesses
Strengths:
- The study of an interesting phenomenon, robust overfitting, in adversarial training.
- The key findings are interesting and appear quite new to me.
- The two proposed defenses are validated on multiple datasets.

Weaknesses:
- Missing important analysis and discussion of existing work [1], where a detailed grid search was applied to WideResNet to find that the deeper layers are also more impactful on the final performance of AT. The conclusion is the same as in this paper, and they find that this is because the deep layers are overparameterized and can simply be reduced to mitigate robust overfitting. This work did a similar analysis to [1] but using different techniques, i.e., training or not training, hyperparameter ablation, adaptive lr, etc. An in-depth analysis should be conducted to connect this work to [1].
- Incomplete robustness evaluation with different AT methods. In Tables 1/2/3, only the standard AT is compared with the proposed method. It is thus not clear to me whether other AT methods like TRADES, MART, and AWP also suffer from robust overfitting, and how the proposed method performs with these AT methods.
- How do the proposed techniques work with the data augmentation strategies in [2]?
- Why are the latter layers so special? Any theoretical insights or an in-depth analysis? How can we solve this issue completely?
- Is it guaranteed to improve the robust accuracy if the latter layers are treated differently? Can it improve the current SOTA robustness shown in https://robustbench.github.io/?

[1] Huang, Hanxun, et al. "Exploring architectural ingredients of adversarially robust deep neural networks." Advances in Neural Information Processing Systems 34 (2021): 5545-5559.
[2] Rebuffi, Sylvestre-Alvise, et al. "Fixing data augmentation to improve adversarial robustness." arXiv preprint arXiv:2103.01946 (2021).

Clarity, Quality, Novelty And Reproducibility
The paper is well-written and easy to read. The proposed methods are somewhat novel. The proposed methods can be easily reproduced.
ICLR
Title
On Intriguing Layer-Wise Properties of Robust Overfitting in Adversarial Training

Abstract
Adversarial training has proven to be one of the most effective methods to defend against adversarial attacks. Nevertheless, robust overfitting is a common obstacle in the adversarial training of deep networks. There is a common belief that the features learned by different network layers have different properties; however, existing works generally investigate robust overfitting by considering a DNN as a single unit, and hence the impact of different network layers on robust overfitting remains unclear. In this work, we divide a DNN into a series of layers and investigate the effect of different network layers on robust overfitting. We find that different layers exhibit distinct properties towards robust overfitting, and in particular, robust overfitting is mostly related to the optimization of the latter parts of the network. Based upon the observed effect, we propose a robust adversarial training (RAT) prototype: in a mini-batch, we optimize the front parts of the network as usual, and adopt additional measures to regularize the optimization of the latter parts. Based on the prototype, we design two realizations of RAT, and extensive experiments demonstrate that RAT can eliminate robust overfitting and boost adversarial robustness over standard adversarial training.

1 INTRODUCTION

Deep neural networks (DNNs) have been widely applied in multiple fields, such as computer vision (He et al., 2016) and natural language processing (Devlin et al., 2018). Despite this success, recent studies show that DNNs are vulnerable to adversarial examples: well-constructed perturbations of the input images that are imperceptible to human eyes can cause DNNs to produce a completely different prediction (Szegedy et al., 2013). The security concern due to this weakness of DNNs has led to various works studying how to improve DNN robustness against adversarial examples. Among existing defense techniques, Adversarial Training (AT) (Goodfellow et al., 2014; Madry et al., 2017), which optimizes DNNs with adversarially perturbed data instead of natural data, is the most effective approach (Athalye et al., 2018).

However, it has been shown that networks trained with AT do not generalize well (Rice et al., 2020). After a certain point in AT, immediately after the first learning rate decay, the robust test accuracy continues to decrease with further training. Typical regularization practices to mitigate overfitting, such as l1 & l2 regularization, weight decay, data augmentation, etc., are reported to be ineffective compared to simple early stopping (Rice et al., 2020). Many studies have attempted to close the robust generalization gap in AT, and most have investigated robust overfitting by considering DNNs as a whole. However, DNNs trained on natural images exhibit a common phenomenon: features obtained in the first layers appear to be general and widely applicable, while features computed by the last layers depend on a particular dataset and task (Yosinski et al., 2014). This behavior of DNNs sparks a question: Do different layers contribute differently to robust overfitting? Intuitively, robust overfitting acts as an unexpected optimization state in adversarial training, and its occurrence may be closely related to the entire network. Nevertheless, the unique effect of different network layers on robust overfitting is still unclear.
Without a detailed understanding of the layer-wise mechanism of robust overfitting, it is difficult to completely demystify the exact underlying cause of the robust overfitting phenomenon. In this paper, we provide the first layer-wise diagnosis of robust overfitting. Specifically, instead of considering the network as a whole, we treat the network as a composition of layers and systematically investigate the impact of the robust overfitting phenomenon on different layers. To do this, we first fix the parameters of the selected layers, leaving them unoptimized during AT, and then normally optimize the other layers' parameters. We discover that robust overfitting is always mitigated when the latter layers are left unoptimized, while applying the same treatment to other layers has no effect on robust overfitting, suggesting a strong connection between the optimization of the latter layers and the overfitting phenomenon.

Based upon the observed effect, we propose a robust adversarial training (RAT) prototype to relieve the issue of robust overfitting. Specifically, RAT works in each mini-batch: it optimizes the front layers as usual, and for the latter layers, it implements additional measures on their parameters to regularize their optimization. It is a general adversarial training prototype, where the front and latter network layers can be separated by some simple test experiments, and the implementation of additional measures to regularize network layer optimization can be versatile. For instance, we design two representative methods as realizations of RAT: RAT-LR and RAT-WP. They adopt different strategies to hinder the weight update, e.g., enlarging the learning rate and perturbing the weights, respectively. Extensive experiments show that the proposed RAT prototype effectively eliminates robust overfitting.

The contributions of this work are summarized as follows:
• We provide the first diagnosis of robust overfitting on different network layers, and find that there is a strong connection between the optimization of the latter layers and the robust overfitting phenomenon.
• Based on the observed properties of robust overfitting, we propose the RAT prototype, which adopts additional measures to regularize the optimization of the latter layers and is tailored to prevent robust overfitting.
• We design two different realizations of RAT, with extensive experiments on a number of standard benchmarks verifying their effectiveness.

2 RELATED WORK

2.1 ADVERSARIAL TRAINING

Since the discovery of adversarial examples, many defensive methods have attempted to improve DNN robustness against such adversaries, such as adversarial training (Madry et al., 2017), defense distillation (Papernot et al., 2016), input denoising (Liao et al., 2018), and gradient regularization (Tramèr et al., 2018). So far, adversarial training (Madry et al., 2017) has proven to be the most effective method. Adversarial training comprises two optimization problems: the inner maximization and the outer minimization. The first constructs adversarial examples by maximizing the loss, and the second updates the weights by minimizing the loss on the adversarial data:

$\ell_{\mathrm{AT}}(w) = \min_w \sum_i \max_{d(x_i, x_i') \le \epsilon} \ell(f_w(x_i'), y_i)$. (1)

Here, fw is the DNN classifier with weight w, ℓ(·) is the loss function, d(·, ·) specifies the distance between the original input xi and the adversarial input x′i (usually measured with an lp-norm ball such as the l2 or l∞-norm ball), and ϵ is the maximum perturbation allowed.
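For concreteness, a minimal sketch of this objective with an l∞ PGD inner maximization (a common instantiation, and the attack used later in Section 4) is shown below; the hyperparameters are illustrative and inputs are assumed to lie in [0, 1].

```python
import torch

def pgd_attack(model, loss_fn, x, y, eps=8/255, step_size=2/255, steps=10):
    # Inner maximization of Eq. (1): l_inf PGD with a random start.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, loss_fn, optimizer, x, y):
    # Outer minimization of Eq. (1): descend on the adversarial loss.
    x_adv = pgd_attack(model, loss_fn, x, y)
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```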
2.2 ROBUST GENERALIZATION

An interesting characteristic of deep neural networks (DNNs) is their ability to generalize well in practice (Belkin et al., 2019). In the standard training setting, it is observed that the test loss continues to decrease over long periods of training (Nakkiran et al., 2020), so the common practice is to train DNNs for as long as possible. However, this is no longer the case in adversarial training, which exhibits overfitting behavior the longer the training process continues (Rice et al., 2020). This phenomenon has been referred to as "robust overfitting" and has shown strong resistance to standard regularization techniques such as l1 and l2 regularization and data augmentation methods (Rice et al., 2020). Schmidt et al. (2018) theorize that robust generalization has a large sample complexity, requiring substantially larger datasets. Many subsequent works have empirically validated this claim, such as AT with semi-supervised learning (Carmon et al., 2019; Zhai et al., 2019), robust local features (Song et al., 2020) and data interpolation (Lee et al., 2020; Chen et al., 2021). Chen et al. (2020) propose to combine smoothing the logits via self-training and smoothing the weights via stochastic weight averaging to mitigate robust overfitting. Wu et al. (2020) emphasize the connection between the weight loss landscape and the robust generalization gap, and suggest injecting adversarial perturbations into both inputs and weights during AT to regularize the flatness of the weight loss landscape. The intriguing property of robust overfitting has motivated a great amount of study and investigation, but current works typically approach the phenomenon considering a DNN as a whole. In contrast, our work treats a DNN as a series of layers and reveals a strong connection between robust overfitting and the optimization of the latter layers, providing a novel perspective for better understanding the phenomenon.

3 INTRIGUING PROPERTIES OF ROBUST OVERFITTING

In this section, we first investigate the layer-wise properties of robust overfitting by fixing model parameters in AT (Section 3.1). Based on our observations, we further propose a robust adversarial training (RAT) prototype to eliminate robust overfitting (Section 3.2). Finally, we design two different realizations of RAT to verify the effectiveness of the proposed method (Section 3.3).

3.1 LAYER-WISE ANALYSIS OF ROBUST OVERFITTING

Current works usually study the robust overfitting phenomenon considering the network as a single unit. However, features computed by different layers exhibit different properties; for example, first-layer features are general while last-layer features are specific (Yosinski et al., 2014). We hypothesize that different network layers have different effects on robust overfitting. To empirically verify this hypothesis, we deliberately fix the parameters of selected network layers, leaving them unoptimized during AT, and observe the behavior of robust overfitting accordingly. Specifically, we consider the ResNet-18 architecture as a composition of 4 main layers, corresponding to its 4 residual blocks. We then train multiple PreAct ResNet-18 networks on CIFAR-10 for 200 epochs using AT, each time selecting a set of network layers to have their parameters fixed. The robust test performance in Figure 1(a) shows a consistent pattern.
Robust overfitting is mitigated whenever we fix the parameters of layer 4 during AT, while any settings that do not fix the parameters of layer 4 result in a more severe gap between the best accuracy and the accuracy at the last epoch. For example, for settings such as AT-fix-param-[4], AT-fix-param-[1,4], AT-fix-param-[2,4] and AT-fix-param-[3,4], robust overfitting is significantly reduced. On the other hand, for settings such as AT-fix-param-[1,2], AT-fix-param-[1,3] and AT-fix-param-[2,3], where we fix the parameters of various sets of layers but allow the optimization of layer 4, robust overfitting still widely exists. For an extreme case like AT-fix-param-[1,2,3], where we fix the first three front layers and only allow the optimization of the last layer 4, the gap between the best accuracy and the last accuracy is still obvious. This clearly indicates that the optimization of the latter layers presents a strong correlation with the robust overfitting phenomenon. Note that this relationship can be observed across a variety of datasets, model architectures, and threat models (shown in Appendix A), indicating that it is a general property of adversarial training.

In many of these settings, robust overfitting is mitigated at the cost of robust accuracy. For example, in AT-fix-param-[3,4], if we leave both layers 3 & 4 unoptimized, robust overfitting practically disappears, but the peak performance is much worse compared to standard AT. When carefully examining the training performance of these settings, shown in Figure 1(b), we generally observe that the network's capacity to fit adversarial data is strong when we fix the parameters of the front layers, but it gradually gets weaker as we fix the latter layers. For instance, AT-fix-param-[1] has the highest robust training accuracy, followed by AT-fix-param-[2], AT-fix-param-[3] and AT-fix-param-[4]; AT-fix-param-[1,2,3] has higher training accuracy than AT-fix-param-[2,3,4]. This suggests that fixing the latter layers' parameters regularizes the network better than fixing the front layers' parameters. In the subsequent sections, we introduce methods that specifically regularize the optimization of the latter layers, so as to mitigate robust overfitting without trade-offs in robustness. We compare the impact on robust overfitting of applying such methods to the front layers vs. the latter layers, further highlighting the importance of the latter layers in relation to robust overfitting.
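The layer-fixing protocol itself is simple to reproduce; a minimal sketch is given below, assuming residual blocks named `layer1`..`layer4` (an assumption about the model implementation, not a detail from the text).

```python
def fix_layers(model, blocks_to_fix):
    # e.g. fix_layers(model, [3, 4]) reproduces the AT-fix-param-[3,4] setting.
    prefixes = tuple(f'layer{i}' for i in blocks_to_fix)
    for name, p in model.named_parameters():
        if name.startswith(prefixes):
            p.requires_grad_(False)  # left unoptimized during AT
```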
1. What is the focus of the paper regarding adversarial training?
2. What are the strengths of the proposed methods, particularly RAT?
3. What are the weaknesses of the paper, especially in terms of experiment design and comparison?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any minor questions or concerns regarding the paper's approach or presentation?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper investigates robust overfitting, an important phenomenon that occurs in adversarial training. In contrast to previous approaches, the paper does not regard the neural network as a black-box model but divides a DNN into a series of layers and investigates the effect of different network layers on robust overfitting. It finds that the latter layers have a larger impact on robust overfitting. To this end, the paper proposes two regularization methods on the latter layers of the network, namely RAT-LR and RAT-AWP, and shows that these two methods help to mitigate robust overfitting.

Strengths And Weaknesses
Strengths:
- The paper studies the important problem of robust overfitting in adversarial training. It is highly motivated to divide the DNN into several layers instead of regarding it as a black-box model.
- The experiments are intensive, showing that the proposed methods, RAT, help to mitigate robust overfitting.

Weaknesses:
- My major concern is the number of parameters that the paper fixes in the experiments of Section 3. Different blocks have different numbers of parameters. Therefore, fixing different blocks will result in models with different numbers of effective parameters, so that they will have different capacities. If the model that fixes the latter part has a smaller number of parameters than the model that fixes the former part, this cannot prove that the latter part has a larger impact on robust overfitting. Instead, the model capacity would be more important. In addition, adding more analyses beyond the ResNet architecture would help to strengthen the conclusion.
- As mentioned by the paper, AWP helps to mitigate robust overfitting. What is the performance of applying AWP to all layers? If that variant has better performance than RAT-AWP, then the proposed method seems to have little novelty.
- In the experiments of Section 4, since AWP is shown to improve performance, RAT-AWP should be compared not only with standard AT but with AT combined with AWP as well.
- In the experiments of Section 4, the final accuracy of RAT-LR is lower than AT in many cases. Therefore, the empirical applicability of this method seems very limited.

Minors:
- How are the last fully connected layers treated in the experiments? Are they fixed?

Clarity, Quality, Novelty And Reproducibility
The paper is clearly written and easy to read. Although the code is not provided, I think the algorithm is relatively simple and the experiments are relatively standard. Therefore, I think the results may be easy to reproduce.
ICLR
Title
The Challenges of Exploration for Offline Reinforcement Learning

Abstract
Offline Reinforcement Learning (ORL) enables us to separately study the two interlinked processes of reinforcement learning: collecting informative experience and inferring optimal behaviour. The second step has been widely studied in the offline setting, but just as critical to data-efficient RL is the collection of informative data. The task-agnostic setting for data collection, where the task is not known a priori, is of particular interest due to the possibility of collecting a single dataset and using it to solve several downstream tasks as they arise. We investigate this setting via curiosity-based intrinsic motivation, a family of exploration methods which encourage the agent to explore those states or transitions it has not yet learned to model. With Explore2Offline, we propose to evaluate the quality of collected data by transferring the collected data and inferring policies with reward relabelling and standard offline RL algorithms. We evaluate a wide variety of data collection strategies, including a new exploration agent, Intrinsic Model Predictive Control (IMPC), using this scheme and demonstrate their performance on various tasks. We use this decoupled framework to strengthen intuitions about exploration and the data prerequisites for effective offline RL.

1 INTRODUCTION
The field of offline reinforcement learning (ORL) is growing quickly, motivated by its promise to use previously-collected datasets to produce new high-quality policies. It enables the disentangling of the collection and inference processes underlying effective RL (Riedmiller et al., 2021). To date, the majority of research in the offline RL setting has focused on the inference side, the extraction of a performant policy given a dataset, but just as crucial is the development of the dataset itself. While the challenges of the inference step are increasingly well investigated (Levine et al., 2020; Agarwal et al., 2020), we instead investigate the collection step. For evaluation, we investigate correlations between the properties of collected data and final performance, how much data is necessary, and the impact of different collection strategies. Whereas most existing benchmarks for ORL (Fu et al., 2020; Gulcehre et al., 2020) focus on the single-task setting with the task known a priori, we evaluate the potential of task-agnostic exploration methods to collect datasets for previously unknown tasks. Task-agnostic data is an exciting avenue to pursue to illuminate potential tasks of interest in a space via unsupervised learning. In this setting, we transfer information from the unsupervised pretraining phase not via the policy (Yarats et al., 2021) but via the collected data. Historically, the question of how to act, and therefore collect data, in RL has been studied through the exploration-exploitation trade-off, which amounts to a balance between an agent's goals in solving a task immediately versus collecting data to perform better in the future. Task-agnostic exploration expands this well-studied direction towards how to explore in the absence of knowledge about current or future agent goals (Dasagi et al., 2019). In this work, we particularly focus on intrinsic motivation (Oudeyer & Kaplan, 2009), which explores novel states based on rewards derived from the agent's internal information.
These intrinsic rewards can take many forms, such as curiosity-based methods that learn a world model (Burda et al., 2018b; Pathak et al., 2017; Shyam et al., 2019), data-based methods that optimize statistical properties of the agent's experience (Yarats et al., 2021), or competence-based metrics that extract skills (Eysenbach et al., 2018). In particular, we perform a wide study of data collected via curiosity-based exploration methods, similar to ExORL (Yarats et al., 2022). In addition, we introduce a novel method for effectively combining curiosity-based rewards with model predictive control. In Explore2Offline, we use offline RL as a mechanism for evaluating the exploration performance of these curiosity-based models, which separates the fundamental feedback loop key to RL in order to disentangle questions of collection and inference (Riedmiller et al., 2021), as displayed in Fig. 1. With this methodology, our paper makes a series of contributions for understanding the properties and applications of data collected by curiosity-based agents.
Contribution 1: We propose Explore2Offline to combine offline RL and reward relabelling for transferring information gained in the data from task-agnostic exploration to downstream tasks. Our results showcase how experiences from intrinsic exploration can solve many tasks, partially reaching performance similar to state-of-the-art online RL data collection.
Contribution 2: We propose Intrinsic Model Predictive Control (IMPC), which combines a learned dynamics model and a curiosity approach to enable online planning for exploration, minimizing the potential of stale intrinsic rewards. A large sweep over existing and new methods shows where task-agnostic exploration succeeds and where it fails.
Contribution 3: By investigating multi-task downstream learning, we highlight a further strength of task-agnostic data collection, where each datapoint can be assigned multiple rewards in hindsight.

2 RELATED WORKS
2.1 CURIOSITY-DRIVEN EXPLORATION
Intrinsic exploration is a well-studied direction in reinforcement learning, with the goal of enabling agents to generate compelling behavior in any environment by having an internal reward representation. Curiosity-driven learning uses learned models to reward agents that reach states with high modelling error or uncertainty. Many recent works use the prediction error of a learned neural network model to reward agents that see new states (Burda et al., 2018b; Pathak et al., 2017). Often, intrinsic curiosity agents are trained with on-policy RL algorithms such as Proximal Policy Optimization (PPO) to maintain recent reward labels for visited states. Burda et al. (2018a) did a wide study on different intrinsic reward models, focusing on pixel-based learning. Instead, we use off-policy learning and re-label the intrinsic rewards associated with a tuple when learning the policy. Other strategies for using learned dynamics models to explore are to reward agents based on the variance of the predictions (Pathak et al.; Sekar et al.) or the value function (Lowrey et al., 2018). We build on recent advancements in intrinsic curiosity with the Intrinsic Model Predictive Control agent, which has two new properties: online planning of states to explore and using a reward model separate from the dynamics model used for control.
2.2 UNSUPERVISED PRETRAINING IN RL
Recent works have proposed a two-phase RL setting consisting of a long "pretraining" phase in a version of the environment without rewards, and a sample-limited "task learning" phase with visible rewards (Schwarzer et al., 2021). In this setting, the agent attempts to learn task-agnostic information about the environment in the first phase, then rapidly re-explores to find rewards and produce a policy specialized to the task. Various methods have addressed this setting with diverse policy ensembles, such as a policy conditioned on a random variable whose marginal state distribution exhibits high coverage (Eysenbach et al., 2018) or finding a set of policies with diverse successor features (Hansen et al., 2020). Similarly, Liu & Abbeel (2021) learn a single policy which approximately maximizes the estimated entropy of its state distribution in a contrastive representation space. Another strategy collects diverse data during the pretraining phase and uses it to learn representations and exploration rewards that are beneficial for downstream tasks (Yarats et al., 2021). A central focus of all of these methods is delivering agents which can explore efficiently at task-learning time. While the pretraining and data collection phases of unsupervised RL pretraining and Explore2Offline (respectively) are similar, in Explore2Offline the task learning phase is performed on relabeled offline transitions, enabling us to use information acquired throughout training and not only the final policy.

2.3 EVALUATING TASK-AGNOSTIC EXPLORATION
While there are many proposed methods for exploration, the evaluation of exploration methods is varied. Recent works in exploration have proposed a variety of evaluation metrics, including fine-tuning of agents post-exploration (Laskin et al., 2021), sample-efficiency and peak performance of online RL (Whitney et al., 2021), zero-shot transfer of learned dynamics models (Sekar et al.), multi-environment transfer (Parisi et al., 2021), and skill extraction to a separate curriculum (Groth et al., 2021). Task-agnostic exploration has been investigated via random data (Cabi et al., 2019) and intrinsic motivation as a source of data for offline RL (Dasagi et al., 2019; Endrawis et al., 2021), but has only been evaluated in the single-task setting and has been limited by current ORL implementations. Offline RL is a compelling candidate for evaluating exploration data because of its emerging ability to generalize across experiences in addition to imitating useful behaviors. Complementary work echoes the importance of data collection for offline RL from the perspective of unsupervised RL (Yarats et al., 2022), while our work focuses more on the relationship between the exploration challenges of an environment and how a new exploration algorithm could address current data generation shortcomings.

2.4 OFFLINE REINFORCEMENT LEARNING
With Offline Reinforcement Learning, we decouple the learning mechanism from exploration by training agents from fixed datasets. Various recent methods have demonstrated strong performance in the offline setting (Wang et al., 2020; Kumar et al., 2020; Peng et al., 2019; Fujimoto & Gu, 2021). In Explore2Offline we use a variant of Critic Regularised Regression (Wang et al., 2020). Many datasets and benchmarks, such as D4RL (Fu et al., 2020) and RL Unplugged (Gulcehre et al., 2020), have been proposed to investigate different approaches. The use of offline datasets has even been extended to improve online RL performance (Nair et al., 2020).
Our goal is related; instead of investigating multiple offline RL approaches, we investigate mechanisms to generate datasets for downstream tasks. The desired state-action and reward distributions for ORL have been analyzed, but little work has addressed how best to generate this data (Schweighofer et al., 2021). On the theory side, recent works have investigated the Explore2Offline setting, which they call "reward-free exploration" (Jin et al., 2020; Kaufmann et al., 2021). These works study algorithms which guarantee the discovery of $\varepsilon$-optimal policies after polynomially many episodes of task-agnostic data collection, though the algorithms they study are not straightforwardly applicable to the high-dimensional deep RL setting with function approximation.

2.5 REWARD RELABELLING
By using off-policy or even offline learning, data generated for one task and reward can be applied to learn a variety of potential tasks. In off-policy RL, we can identify useful rewards for an existing trajectory based on later states from the same trajectories (Andrychowicz et al., 2017), uncertainty over a trajectory (Nasiriany et al., 2021), distributions of goals (Nasiriany et al., 2021), related tasks (Riedmiller et al., 2021; Wulfmeier et al., 2019), agent-intrinsic tasks (Wulfmeier et al., 2021), via inverse reinforcement learning (Eysenbach et al., 2020), as well as other mechanisms (Li et al., 2020). In the context of pure offline RL, we can go one step further, as we are not required to find the optimal tasks for stored trajectory data. In this setting, data can be used for learning with a massive set of rewards, such as all states visited along stored trajectories (Chebotar et al., 2021). We will evaluate our approaches to exploration across downstream tasks and relabel data with all possible tasks.

3 METHODOLOGY
3.1 REINFORCEMENT LEARNING
Reinforcement Learning (RL) is a framework where an agent interacts with an environment to solve a task by trial and error. The objective of an agent is often to maximize the cumulative future reward on a predetermined task, $\mathbb{E}\left[\sum_{\tau=0}^{\infty} \gamma^{\tau} r_{\tau} \mid s_0 = s_t\right]$. We utilize the setting where an agent's interactions with an environment are modeled as a Markov Decision Process (MDP). An MDP is defined by a state of the environment $s$, an action $a$ that is taken by an agent according to a policy $\pi_\theta(s_t)$, a transition function $p(s_{t+1} \mid s_t, a_t)$ governing the next-state distribution, and a discount factor $\gamma \in [0, 1]$ weighting future rewards. With a transition in the dynamics, the agent receives a reward $r_t$ from the environment and stores the SARS data in a dataset $\mathcal{D}: \{s_k, a_k, r_k, s_{k+1}\}$. An alternative to this environment-centric reward formulation is the concept of intrinsic rewards, where the agent maximizes an internal notion of reward in a task-agnostic manner to collect data.

3.2 CURIOSITY-DRIVEN EXPLORATION
Existing Methods
Reaching new, valuable areas of the state-space is crucial to solving sparse tasks with RL. One method to balance attaining new experiences (exploration) with the goal of solving a task (exploitation) is using curiosity models. Curiosity models are a subset of intrinsic rewards an agent can use to explore by creating a reward signal, $r_{\text{int}}$. These models encourage exploration by optimizing a signal from a learned model that corresponds to modeling error or uncertainty, which often occurs at states that have not been visited frequently.
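As a concrete illustration of reward relabelling on stored SARS tuples, here is a minimal Python sketch; the `Transition` container and `sparse_goal_reward` helper are hypothetical names introduced for illustration, not part of the paper's code.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Transition:
    s: list       # state observation s_k
    a: list       # action a_k
    r: float      # stored reward r_k (intrinsic during collection)
    s_next: list  # next state s_{k+1}

def relabel(dataset: List[Transition],
            reward_fn: Callable[[list, list, list], float]) -> List[Transition]:
    """Replace the stored (intrinsic) reward of every transition with a
    downstream task reward computed in hindsight from observations."""
    return [Transition(t.s, t.a, reward_fn(t.s, t.a, t.s_next), t.s_next)
            for t in dataset]

# Example: a sparse goal-reaching reward defined purely on observations.
def sparse_goal_reward(goal, tol=0.05):
    def fn(s, a, s_next):
        dist = sum((x - g) ** 2 for x, g in zip(s_next, goal)) ** 0.5
        return 1.0 if dist < tol else 0.0
    return fn
```

Because the stored reward is simply overwritten, a single task-agnostic dataset can be relabelled once per downstream task, which is how the multi-task evaluation in Section 4.4 reuses one dataset for several goals.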
We deploy a series of intrinsic models. The simplest, Next Step Model Error, maximizes the error of a learned one-step model, $r_{\text{int}} = \lVert \hat{s}_{t+1} - s_{t+1} \rVert^2$. Random Network Distillation (RND) maximizes the distance of a learned state encoding from that of a static encoding, $r_{\text{int}} = \lVert \hat{\eta}(s_t) - \eta(s_t) \rVert^2$ (Burda et al., 2018b). The Intrinsic Curiosity Module (ICM) maximizes the error of a forward dynamics model learned in the latent space $\varphi(s)$ of an inverse dynamics model, $r_{\text{int}} = \lVert \hat{\varphi}_t - \varphi_t \rVert^2$ (Pathak et al., 2017). Dynamics Disagreement (DD) maximizes the variance of an ensemble of learned one-step dynamics models, $r_{\text{int}} = \sigma(\hat{s}^i_{t+1})$ (Pathak et al.).

Intrinsic Model Predictive Control
Model Predictive Control (MPC) on a learned model has been used for control across a variety of simulated and real-world settings (Wieber; Camacho & Alba, 2013), including recently with model-based reinforcement learning (MBRL) algorithms (Williams et al., 2017; Chua et al., 2018; Lambert et al., 2019). MBRL using MPC is an iterative loop of learning a predictive model of environment dynamics $f_\theta(\cdot)$ (e.g. a one-step transition model) and acting in the environment through model-based planning with the learned model. This planning step usually involves optimizing for a sequence of actions that maximizes the expected future reward (Eqn. 1), for example via sample-based optimization; the MPC loop executes the first action of this sequence followed by replanning.

$$a = \arg\max_{a_{t:t+\tau}} \sum_{t}^{\tau} r(\hat{s}_t, a_t), \quad \text{s.t.} \quad \hat{s}_{t+1} = f_\theta(s_t, a_t). \quad (1)$$

The reward function $r$ defines the behavior of the planned sequence of actions in model-based planning. For task-specific RL, this can be the task reward function (known or estimated from data). Instead, our Intrinsic MPC (IMPC) agent uses a curiosity-based reward for planning in order to encourage task-agnostic exploration by reasoning about which states are currently interesting and novel. Fig. 2 illustrates the goal of planning: to visit new, interesting states rather than states that were recorded with high intrinsic reward in the replay memory. This evaluation occurs by sampling action sequences, unrolling them using the forward dynamics model, scoring the rollouts with the learned curiosity model, and finally taking the first action of the sequence with the highest score. Given that this evaluation happens with access to only imagined states and a proposed action, only a subset of intrinsic models can be used with planning, as summarized in Tab. 1. We primarily evaluate IMPC using the RND curiosity model, but we also present results with the DD model. We use the Cross-Entropy Method (CEM) (De Boer et al.), a sample-based optimization procedure, for planning. Inspired by prior work (Byravan et al., 2021), we use a policy to generate action candidates for the planner; this policy is trained using the Maximum a-posteriori Policy Optimization (MPO) algorithm (Abdolmaleki et al., 2018) from data generated by the MPC actor. Additionally, to amortize the cost of planning, we interleave planning with directly executing actions sampled from the learned policy. This is achieved by specifying a planning probability $0 < \rho < 1$; at each step in the actor loop we choose either to plan or to execute the policy action according to $\rho$ (we use $\rho = 0.9$). Additional algorithmic details are included in Appendix A.1.

3.3 OFFLINE REINFORCEMENT LEARNING
To train an agent offline from task-agnostic exploration data, we determine rewards from observations in hindsight.
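To make the RND reward above concrete, here is a minimal PyTorch sketch, assuming simple MLP encoders; the network sizes and dimensions are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class RNDReward(nn.Module):
    """Random Network Distillation intrinsic reward: the prediction error of a
    trained encoder against a frozen, randomly initialized target encoder."""
    def __init__(self, obs_dim, emb_dim=64):
        super().__init__()
        def mlp():
            return nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, emb_dim))
        self.target = mlp()                 # eta: fixed random encoding
        self.predictor = mlp()              # eta_hat: trained to match it
        for p in self.target.parameters():  # freeze the target network
            p.requires_grad_(False)

    def forward(self, s):
        # r_int = || eta_hat(s) - eta(s) ||^2, large on rarely visited states
        return (self.predictor(s) - self.target(s)).pow(2).sum(-1)

rnd = RNDReward(obs_dim=8)
opt = torch.optim.Adam(rnd.predictor.parameters(), lr=1e-4)
batch = torch.randn(32, 8)            # stand-in for a batch of visited states
loss = rnd(batch).mean()              # the reward doubles as the training loss,
opt.zero_grad(); loss.backward(); opt.step()  # so visited states stop being rewarding
```

Note the duality: the same quantity serves as the intrinsic reward for the actor and as the training loss for the predictor, which is what makes novelty decay as a state is revisited.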
While the approach relies on the ability to compute rewards based on observations, a large set of tasks can be described in this manner (Li et al., 2020). In comparison to online learning, we have the benefit that we do not need to determine at collection time for which task the data is most informative given a commensurate number of tasks. Instead, we can relabel the data with all possible rewards to maximise its utility. Given the new task rewards, we replace the intrinsic reward in our trajectory data and apply a variant of a recent state-of-the-art offline RL algorithm, Critic Regularised Regression (CRR) (Wang et al., 2020). While we apply CRR for our investigation, the overall method is general in that it could be applied with other approaches. We iteratively update the critic and the actor, optimising their respective losses following Equations 2 and 3:

$$L_Q = \mathbb{E}_{B}\left[ D\left( Q_\theta(s_t, a_t),\; r_t + \gamma\, \mathbb{E}_{a \sim \pi(s_{t+1})} Q_{\theta'}(s_{t+1}, a) \right) \right]. \quad (2)$$

Since we use a distributional categorical critic, we apply the divergence measure $D$ instead of the squared Euclidean loss, following (Bellemare et al.). With $f = \mathrm{ReLU}(\hat{A}(s_t, a_t))$ and $\hat{A}$ the advantage estimated via $Q^\pi(s_t, a_t) - \frac{1}{m}\sum_{i=1}^{m} Q^\pi(s_t, a_i)$, the policy is optimized as:

$$\pi(a_t \mid s_t) = \arg\max_{\pi} \mathbb{E}_{(s_t, a_t) \sim B}\left[ f(Q^\pi, \pi, s_t, a_t) \log \pi(a \mid s) \right]. \quad (3)$$

4 EXPERIMENTS
4.1 EXPERIMENTAL SETTING
Exploration Agents
In this work we benchmark a variety of task-agnostic exploration agents. We classify agents as reactive, selecting actions with a policy, or planning, selecting actions by optimizing over trajectories. The reactive agents are trained with Maximum a-posteriori Policy Optimization (MPO) (Abdolmaleki et al., 2018) and include the curiosity models RND, DD, ICM, and NS. We compare these to IMPC with DD and RND optimized with CEM, and to a fixed random agent that samples from the action distribution. Some figures include a label of MPO, which corresponds to the benchmark of task-aware RL agents and provides interesting context for Explore2Offline. The online agent represents the state-of-the-art performance when the task is known a priori; matching it without an environment reward function would highlight the potential of Explore2Offline.

Environments
In this work, we investigate exploration performance on a variety of DeepMind Control Suite tasks (Tassa et al., 2018). We evaluate 4 domains (Ball-in-Cup, Finger, Reacher, and Walker) comprising 14 tasks with a variety of state-action sizes, with further details included in the Appendix. In order to include more challenging environments for task-agnostic exploration, we use the modifications proposed by Whitney et al. (2021), Explore Suite, which include constrained initial states and sparser reward functions.

4.2 TASK-AGNOSTIC DATA COLLECTION
We compare the amount of task reward received per episode across a variety of agents and tasks, which is an intuitive metric for exploration agent performance but can only act as a proxy for exploration. To evaluate task-agnostic reward, the exploration agents are run without access to the environment rewards, with the rewards relabelled afterwards. The distributions of normalized reward achieved during the 5000 training episodes for four example tasks are documented for all of the exploration agents in Fig. 3. Crucially, the Explore Suite variants are challenging for the random agent across its lifetime: all of the examples shown have a median episode-reward of 0. There is a wide diversity in the agent-task pairings by proxy of measuring experienced reward, showcasing the large potential of future work to better understand this area.
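As a rough sketch of the CRR policy update in Eqn. 3, here is a Python fragment using a scalar critic rather than the paper's distributional one; the `q_net` and `policy` interfaces (`sample`, `log_prob`) are assumed for illustration.

```python
import torch

def crr_actor_loss(q_net, policy, s, a, m=4):
    """ReLU-advantage-weighted CRR policy objective (cf. Eqn. 3): weight the
    log-likelihood of dataset actions by f = ReLU(advantage)."""
    with torch.no_grad():
        q_sa = q_net(s, a)                                # Q(s_t, a_t) on dataset actions
        a_samples = [policy.sample(s) for _ in range(m)]  # m actions from the current policy
        v = torch.stack([q_net(s, ai) for ai in a_samples]).mean(0)
        f = torch.relu(q_sa - v)                          # f = ReLU(A_hat(s_t, a_t))
    return -(f * policy.log_prob(s, a)).mean()            # ascend f * log pi(a|s)
```

The `with torch.no_grad()` block matches the structure of Eqn. 3: the advantage weight is treated as a constant, so gradients only flow through the log-likelihood term.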
To give an overall view of data collection for each agent, we show in Fig. 4 (left) the Control and Explore Suite average collected reward across all tasks. The median, 90th, and 10th percentiles are shown for each agent, combined across tasks. [Figure panel titles: Finger, Turn-Easy; Finger, Turn-Easy Explore; Walker, Walk; Walker, Walk Explore.] Here we also show what reward a task-aware agent (MPO) collects in its lifetime. It is included to indicate an upper target for exploration, rather than a competitive baseline. An important artifact here is the flatness of the reward achieved by the random agent across time; the other exploration agents show an increase in median reward as the dataset size grows (especially on Explore Suite). This change in reward distribution across dataset size indicates a diversity of behaviors in the intrinsic exploration agents, while the random agent receives reward from the same distribution repeatedly. This can be seen in how a curiosity-based agent such as IMPC with RND achieves varied rewards while the random agent has a repeated reward distribution. While the collected reward can be an indicator of the usefulness of an exploration agent, it does not directly translate into a task-focused policy capable of solving tasks. Without careful environment design, task-agnostic agents focusing on novel states will cover the entire state-space regardless of the predefined downstream task.

4.3 EVALUATING OFFLINE RL ON EXPLORATION DATA
We study the performance of intrinsic agents via a state-of-the-art offline reinforcement learning algorithm, Critic Regularized Regression (CRR) (Wang et al., 2020). For each agent, 3 policies were trained on dataset sizes within the range of $2 \times 10^3$ to $5 \times 10^6$ environment steps. The mean performance across all Explore and Control Suite tasks, with confidence intervals, is shown in Fig. 4 (right). Due to the breadth of this study, we were only able to evaluate the offline RL performance on 3 seeds for each collected dataset; the median, max, and min across these three policies from CRR are shown in each subplot. Here, we see three core findings that we will continue to detail: 1) for dataset sizes $< 1 \times 10^5$, there is little benefit to task-aware learning for data collection, and the random agent is a strong baseline versus the other intrinsic model-based agents; 2) on larger datasets, the task-aware RL method, MPO, jumps ahead of the explorers, but the exploration agents all continue to improve in offline RL performance with more data; 3) the novel IMPC approach with RND, along with the existing methods of MPO with RND or DD, performs best on average with the largest datasets. To showcase which tasks are solved by the Explore2Offline framework, we document the median performance of the exploration agents with the full $5 \times 10^6$-step training set size versus the task-aware MPO in Tab. 2. Explore2Offline with these agents solves all but the Walker domain tasks, with further results included in the Appendix. To highlight why dataset size and observed reward are such powerful indicators of ORL performance, we show in Fig. 5, for the Finger Turn and Walker Walk tasks, the correlation between the cumulative reward in a dataset and the offline RL performance for that dataset. There is a clear trend of more reward resulting in a better policy for the tasks paired with the environment. Fig. 7 uses Spearman's rank correlation to visualise how dataset size is a considerably better predictor of performance than any reward statistic, including the mean, sum, or 80% quantile.
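A rank-correlation analysis of the kind behind Fig. 7 takes only a few lines; the numbers below are illustrative placeholders, not the paper's measurements.

```python
from scipy.stats import spearmanr

# One entry per collected dataset: candidate predictors and the offline RL
# return obtained from training CRR on that dataset (illustrative values).
dataset_size = [2e3, 1e4, 1e5, 1e6, 5e6]
mean_reward  = [0.02, 0.05, 0.04, 0.08, 0.07]
orl_return   = [30.0, 90.0, 310.0, 620.0, 700.0]

rho_size, _ = spearmanr(dataset_size, orl_return)  # rank correlation, as in Fig. 7
rho_rew, _  = spearmanr(mean_reward, orl_return)
print(f"size rho: {rho_size:.2f}, mean-reward rho: {rho_rew:.2f}")
```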
This further emphasises the importance of transferring via increasingly large datasets rather than via the final exploration policy. In the next section we evaluate how re-using data from task-agnostic exploration can enable multi-task performance.

4.4 EVALUATING TASK-AGNOSTIC DATA FOR MULTITASK RL
A key motivation for collecting task-agnostic data is its applicability when there are a variety of downstream tasks of interest, including those which might not be known at data collection time. In the ideal case, task-agnostic exploration could collect a single dataset, and then offline RL could consume that data (along with relabeled rewards) to solve arbitrary tasks. We evaluate the quality of datasets collected by various exploration agents for use with multiple downstream tasks. For this evaluation we collected one dataset for the Pointmass and Finger environments using each of seven exploration agents, then trained policies for downstream tasks by relabeling the data with different reward functions. Each of the tasks is defined by a sparse +1 reward corresponding to a particular goal state. As a baseline, the Online agent collects experience using a standard online RL algorithm as it learns to solve a "Training" task, while all of the other data collection agents are task-agnostic. The datasets collected by each agent are evaluated with offline RL on the "Training" task and three others: "Easy Transfer", "Medium Transfer", and "Hard Transfer". These transfer tasks are ranked by the level of challenge a task-aware agent faces in generalizing to them. Tasks increase in challenge when goals require moving farther away from the training goal, either by travelling further in the same direction (often easier) or by being entirely misaligned (harder generalization). Due to the dynamics of the systems, farther-away targets may be easier to discover, e.g. the "Medium Transfer" target for Pointmass lies in the corner of the arena. Full details of these environments, along with experiments on three more, are available in Appendix A.4. The performance of the exploration agents and the task-aware MPO agent varies across the tasks, as shown in Fig. 6. Depicted is the mean reward achieved by offline RL policies trained on 3 random seeds for each of 3 datasets of $5 \times 10^6$ transitions. While collecting data specifically for the target downstream task is the best option when the data will be used only on that task, the task-aware performance can degrade on even a slightly misaligned test task when compared to the task-agnostic counterparts. The potential for multitask transfer of exploration agents is highlighted, but further work is needed in more open-ended environments to show the full potential of Explore2Offline.

5 DISCUSSION
Explore2Offline points to interesting directions for further understanding and utilizing task-agnostic exploration agents. To start, there are two trends that point to a need for further work on exploration methods. On average across our evaluation suite, the random agent performs very closely to the curiosity-based methods, and any particular exploration method varies substantially across tasks. The performance of the random agent suggests some similarities between the data collected by the random agent and by the curiosity-based methods. As mentioned previously, curiosity-based methods are exhaustive (given enough time) and do not consider useful trends that may be common in downstream tasks.
This shows a need for future exploration methods to prioritize interesting subsets of a state-space and to generalize across domains, creating flexible agents. Although our evaluation demonstrates the potential of using offline RL on task-agnostic data, there is substantial variation across task-agent pairings with the chosen static offline RL algorithm (CRR). This variation needs to be studied in more detail to better understand the limitations posed by the algorithm and to differentiate them from the quality of the data itself. The intrinsic MPC agent can be progressed by utilizing it in other forms of deep RL evaluation. By transferring a learned dynamics model, this flexible exploration agent could also be evaluated as a task-aware agent (i.e. zero-shot learning of a new task) or in online RL by weighting the intrinsic reward model and the environment reward (i.e. a better explore-exploit balance).

6 CONCLUSION
We introduce Explore2Offline, a method for utilizing task-agnostic data for policy learning of unknown downstream tasks. We describe how an agent can be used to collect the requisite data once for solving multiple tasks, and demonstrate performance comparable to an online learning agent. Additionally, we show that policies trained on task-agnostic data may be more robust to variations of the initial task than task-aware learning, resulting in better transfer performance. Finally, data from the new exploration agent, Intrinsic Model Predictive Control, performs strongly across many tasks. As offline RL emerges as a useful tool in more domains, a deeper understanding of the data required for learning will be needed. Directions for future work include developing better exploration methods specifically for offline training, and identifying experiences with high information content for effective datasets.

A APPENDIX
Here we include additional experimental context and results.

A.1 ALGORITHMIC DETAILS
A summary of the exploration algorithm, Intrinsic Model Predictive Control, is shown in Alg. 1. We utilize a distributed setup where multiple actors and learners can be deployed concurrently.

Algorithm 1 Intrinsic MPC
Given: randomly initialized proposal $\pi_\theta$, dynamics model $m_\varphi$, reward model $r_i$, random critic $Q_\psi$. {Modules to be learned}
Given: planning probability $p_{\text{plan}}$, replay buffer $\mathcal{B}$, MPO loss weight $\alpha$, learning rates and optimizers (Adam) for the different modules. {Known modules and parameters}
{Exploration loop, run asynchronously on the actors}
while True do
    Initialize ENV and observe state $s_0$.
    while episode is not terminated do
        {Choose between the intrinsic planner and the policy action depending on $p_{\text{plan}}$, using the learned proposal $\pi_\theta$ as the proposal for the planner.}
        Sample $x \sim U[0, 1]$.
        $a_t \sim \mathrm{PLANNER}(s_t, \pi_\theta, m_\varphi, r_i)$ if $x \le p_{\text{plan}}$, else $a_t \sim \pi_\theta(s_t)$.
        Step ENV$(s_t, a_t) \rightarrow s_{t+1}$ and write the transition to the replay buffer $\mathcal{B}$.
    end while
end while
{Learner loop, run asynchronously on the learner}
while True do
    Sample a batch $B$ of trajectories, each of sequence length $T$, from the replay buffer $\mathcal{B}$.
    Label rewards with the reward model $r_i$.
    Update the action-value function $Q_\psi$ based on $B$ using Retrace (Munos et al., 2016).
    Update the model $m_\varphi$ based on $B$ using multi-step losses.
    Update the reward model $r_i$ based on $B$.
    Update the proposal $\pi_\theta$ based on $B$ following Byravan et al. (2021).
end while

The PLANNER subroutine takes in the current state $s_t$, the action proposal $\pi_\theta$, a dynamics model $m_\varphi$ that predicts the next state $s_{t+1}$ given the current state $s_t$ and action $a_t$, and the reward function model $r_i(s_t, a_t)$.
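As a concrete rendering of the actor side of Algorithm 1, the following is a minimal Python sketch of the planning-probability mixing; the `env`, `policy`, `planner`, and `replay_buffer` interfaces are assumed for illustration, not the paper's actual API.

```python
import random

def actor_loop(env, policy, planner, replay_buffer, p_plan=0.9):
    """IMPC actor loop (cf. Algorithm 1): at every step, plan with probability
    p_plan, otherwise act with the learned proposal policy to amortize the
    cost of planning."""
    while True:
        s = env.reset()
        done = False
        while not done:
            if random.random() <= p_plan:
                a = planner(s, policy)       # CEM over the learned model,
            else:                            # scored by the curiosity model
                a = policy(s)                # cheap amortized policy action
            s_next, done = env.step(a)
            replay_buffer.add(s, a, s_next)  # rewards are labelled later,
            s = s_next                       # asynchronously on the learner
```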
Optionally, a learned state-value function $V_\psi(s)$ (with parameters $\psi$) that predicts the expected return from state $s$ can be provided. We use the Cross-Entropy Method (CEM) (Botev et al.), shown in Alg. 2.

Algorithm 2 CEM planner
Given: state $s_0$, action proposal $\pi_\theta$, dynamics model $m_\varphi$, reward model $r_i$, planning horizon $H$, number of samples $S$, elite fraction $E$, noise standard deviation $\sigma_{\text{init}}$, and number of iterations $I$.
{Roll out the proposal distribution using the model.}
$(s_0, a_0, s_1, \ldots, s_H) \leftarrow \text{proposal}(m_\varphi, \pi_\theta, H)$
$\mu \leftarrow [a_0, a_1, \ldots, a_H]$ {initial plan}
$\sigma \leftarrow \sigma_{\text{init}}$
{Evaluate candidate action sequences open loop according to the model and compute the associated returns.}
for $i = 1 \ldots I$ do
    for $k = 1 \ldots S$ do
        $p_k \sim \mathcal{N}(\mu, \sigma)$ {sample candidate action sequences}
        $r_k \leftarrow \text{evaluate\_actions}(m_\varphi, p_k, H, r_i)$
    end for
    Rank the candidate sequences by reward and retain the top $E$ fraction.
    Compute the mean $\mu_{\text{elite}}$ and per-dimension standard deviation $\sigma_{\text{elite}}$ of the retained elite sequences.
    $\mu \leftarrow (1 - \alpha_{\text{mean}})\,\mu + \alpha_{\text{mean}}\,\mu_{\text{elite}}$ {update mean; $\alpha_{\text{mean}} = 0.9$}
    $\sigma \leftarrow (1 - \alpha_{\text{std}})\,\sigma + \alpha_{\text{std}}\,\sigma_{\text{elite}}$ {update standard deviation; $\alpha_{\text{std}} = 0.5$}
end for
return the first action in $\mu$

A.2 ADDITIONAL ENVIRONMENT DETAILS
The state and action dimensions $(d_s, d_a)$ and descriptions for the environments used in this paper are detailed in Tab. 3. Additional collected-reward distributions are shown in Fig. 11.

A.3 FULL OFFLINE RL AGENT-TASK PERFORMANCES
To supplement the results discussed in Sec. 4.3, we have included the performance per dataset size for all agents across all tasks. The results are shown in Fig. 13 and show the considerable variation when studying any given agent or task. There is substantially more variation across tasks than across agents, showing the value of continuing to fine-tune a set of tasks for benchmarking task-agnostic agents. The mean performance across all tasks is shown in Fig. 9, with a per-task breakdown shown in Table 4. A subset of agents and tasks is shown in Fig. 8 to illustrate the convergence on a set of tasks.

A.4 ADDITIONAL MULTI-TASK LEARNING EXPERIMENTS
To complement Sec. 4.4, we have included additional experiments for multi-task learning in the Reacher, Cheetah, and Walker environments, shown in Fig. 10. For these tasks, there is a less clear benefit of using task-agnostic learning to generate data for offline RL policy generation. In our experience, this limited performance can be due to the fact that the environments are designed with specific behaviors and algorithms in mind, reducing the need for a diverse exploration method. A description of the starting state and the goal state for each task, as well as for the Pointmass and Finger environments from the main text, is available in Table 4.
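A compact NumPy sketch of the CEM loop in Algorithm 2 follows; here the initial plan is seeded with zeros for brevity rather than with the proposal rollout, and `rollout_reward` is an assumed helper that unrolls an action sequence through the learned model from $s_0$ and scores it with the intrinsic reward model.

```python
import numpy as np

def cem_plan(s0, rollout_reward, horizon=10, act_dim=2, samples=64,
             elite_frac=0.1, iters=4, sigma_init=0.5,
             alpha_mean=0.9, alpha_std=0.5):
    """Minimal CEM planner in the spirit of Algorithm 2."""
    mu = np.zeros((horizon, act_dim))                 # initial plan (Alg. 2 seeds
    sigma = np.full((horizon, act_dim), sigma_init)   # this from the proposal)
    n_elite = max(1, int(elite_frac * samples))
    for _ in range(iters):
        # Sample S candidate action sequences around the current plan.
        cand = mu + sigma * np.random.randn(samples, horizon, act_dim)
        scores = np.array([rollout_reward(s0, c) for c in cand])
        elites = cand[np.argsort(scores)[-n_elite:]]  # keep the top E fraction
        mu = (1 - alpha_mean) * mu + alpha_mean * elites.mean(axis=0)
        sigma = (1 - alpha_std) * sigma + alpha_std * elites.std(axis=0)
    return mu[0]                                      # execute only the first action
```

Returning only the first action of the refined plan matches the MPC pattern in the main text: the environment steps once and the planner is invoked again from the new state.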
1. What is the focus and contribution of the paper regarding offline reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its novelty and effectiveness?
3. Do you have any concerns or questions about the methodology, especially regarding the IMPC approach?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or potential improvements regarding the data requirements and exploration methods discussed in the paper?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper explores how to collect informative data for offline RL methods. Many curiosity-based methods are considered for exploring the environment. An Intrinsic Model Predictive Control (IMPC) approach is proposed to improve performance.

Strengths And Weaknesses
The idea and setting are novel and interesting. However, the method part is a little difficult to follow:
The description of IMPC is not clear. What are the input and output of IMPC? How do you train it? What is its main contribution?
The experiments only choose MPO to verify the quality of the samples, which is not convincing enough. Since the authors propose a new framework and use offline RL methods as a mechanism for evaluating exploration performance, I think more than two of the most popular methods should be used.
The paper does not give a deeper analysis of the data requirements of offline RL. In other words, in order to get better performance, what kind of data do offline RL methods require, and how should such a dataset be constructed?
There are papers focusing on offline training and online fine-tuning to improve performance with less data. They also use exploration methods to guide online data collection. Their methods could also be compared after slight modification.

Clarity, Quality, Novelty And Reproducibility
Code is not provided and the detailed implementation is not very clear.
ICLR
Title The Challenges of Exploration for Offline Reinforcement Learning Abstract Offline Reinforcement Learning (ORL) enables us to separately study the two interlinked processes of reinforcement learning: collecting informative experience and inferring optimal behaviour. The second step has been widely studied in the offline setting, but just as critical to data-efficient RL is the collection of informative data. The task-agnostic setting for data collection, where the task is not known a priori, is of particular interest due to the possibility of collecting a single dataset and using it to solve several downstream tasks as they arise. We investigate this setting via curiosity-based intrinsic motivation, a family of exploration methods which encourage the agent to explore those states or transitions it has not yet learned to model. With Explore2Offline, we propose to evaluate the quality of collected data by transferring the collected data and inferring policies with reward relabelling and standard offline RL algorithms. We evaluate a wide variety of data collection strategies, including a new exploration agent, Intrinsic Model Predictive Control (IMPC), using this scheme and demonstrate their performance on various tasks. We use this decoupled framework to strengthen intuitions about exploration and the data prerequisites for effective offline RL. 1 INTRODUCTION The field of offline reinforcement learning (ORL) is growing quickly, motivated by its promise to use previously-collected datasets to produce new high-quality policies. It enables the disentangling of collection and inference processes underlying effective RL (Riedmiller et al., 2021). To date, the majority of research in the offline RL setting has focused on the inference side - the extraction of a performant policy given a dataset, but just as crucial is the development of the dataset itself. While challenges of the inference step are increasingly well investigated (Levine et al., 2020; Agarwal et al., 2020), we instead investigate the collection step. For evaluation, we investigate correlations between the properties of collected data and final performance, how much data is necessary, and the impact of different collection strategies. Whereas most existing benchmarks for ORL (Fu et al., 2020; Gulcehre et al., 2020) focus on the single-task setting with the task known a priori, we evaluate the potential of task-agnostic exploration methods to collect datasets for previously-unknown tasks. Task-agnostic data is an exciting avenue to pursue to illuminate potential tasks of interest in a space via unsupervised learning. In this setting, we transfer information from the unsupervised pretraining phase not via the policy (Yarats et al., 2021) but via the collected data. Historically the question of how to act - and therefore collect data - in RL has been studied through the exploration-exploitation trade-off, which amounts to a balance of an agent’s goals in solving a task immediately versus collecting data to perform better in the future. Task-agnostic exploration expands this well-studied direction towards how to explore in the absence of knowledge about current or future agent goals (Dasagi et al., 2019). In this work, we particularly focus on intrinsic motivation (Oudeyer & Kaplan, 2009), which explores novel states based on rewards derived from the agent’s internal information. 
These intrinsic rewards can take many forms, such as curiosity-based methods that learning a world model (Burda et al., 2018b; Pathak et al., 2017; Shyam et al., 2019), data-based methods that optimize statistical properties of the agent’s experience (Yarats et al., 2021), or competence-based metrics that extract skills (Eysenbach et al., 2018). In particular, we perform a wide study of data collected via curiosity-based exploration methods, similar to ExORL (Yarats et al., 2022). In addition, we introduce a novel method for effectively combining curiosity-based rewards with model predictive control. In Explore2Offline, we use offline RL as a mechanism for evaluating exploration performance of these curiosity-based models, which separates the fundamental feedback loop key to RL in order to disentangle questions of collection and inference Riedmiller et al. (2021) as displayed in Fig. 1. With this methodology, our paper has a series of contributions for understanding properties and applications of data collected by curiosity-based agents. Contribution 1: We propose Explore2Offline to combine offline RL and reward relabelling for transferring information gained in the data from task-agnostic exploration to downstream tasks. Our results showcase how experiences from intrinsic exploration can solve many tasks, partially reaching similar performance to state-of-the-art online RL data collection. Contribution 2: We propose Intrinsic Model Predictive Control (IMPC) which combines a learned dynamics model and a curiosity approach to enable online planning for exploration to minimize the potential of stale intrinsic rewards. A large sweep over existing and new methods shows where task-agnostic exploration succeeds and where it fails. Contribution 3: By investigating multi-task downstream learning, we highlight a further strength of task-agnostic data collection where each datapoint can be assigned multiple rewards in hindsight. 2 RELATED WORKS 2.1 CURIOSITY-DRIVEN EXPLORATION Intrinsic exploration is a well studied direction in reinforcement learning with the goal of enabling agents to generate compelling behavior in any environment by having an internal reward representation. Curiosity-driven learning uses learned models to reward agents that reach states with high modelling error or uncertainty. Many recent works use the prediction error of a learned neural network model to reward agents’ that see new states Burda et al. (2018b); Pathak et al. (2017). Often, the intrinsic curiosity agents are trained with on-policy RL algorithms such as Proximal Policy Optimization (PPO) to maintain recent reward labels for visited states. Burda et al. (2018a) did a wide study on different intrinsic reward models, focusing on pixel-based learning. Instead, we use offpolicy learning and re-label the intrinsic rewards associated with a tuple when learning the policy. Other strategies for using learned dynamics models to explore is to reward agents based on the variance of the predictions Pathak et al.; Sekar et al. or the value function (Lowrey et al., 2018). We build on recent advancements in intrinsic curiosity with the Intrinsic Model Predictive Control agent that has two new properties: online planning of states to explore and using a separate reward model from the dynamics model used for control. 
2.2 UNSUPERVISED PRETRAINING IN RL Recent works have proposed a two-phase RL setting consisting of a long “pretraining” phase in a version of the environment without rewards, and a sample-limited “task learning” phase with visible rewards (Schwarzer et al., 2021). In this setting the agent attempts to learn task-agnostic information about the environment in the first phase, then rapidly re-explore to find rewards and produce a policy specialized to the task. Various methods have addressed this setting with diverse policy ensembles, such as a policy conditioned on a random variable whose marginal state distribution exhibits high coverage (Eysenbach et al., 2018) or finding a set of policies with diverse successor features (Hansen et al., 2020). Similarly Liu & Abbeel (2021) learn a single policy which approximately maximizes the estimated entropy of its state distribution in a contrastive representation space. Another strategy collects diverse data during the pretraining phase and uses it to learn representations and exploration rewards that are beneficial for downstream tasks (Yarats et al., 2021). A central focus of all of these methods is delivering agents which can explore efficiently at task learning time. While the pretraining and data collection phases of unsupervised RL pretraining and Explore2Offline (respectively) are similar, in Explore2Offline the task learning phase is performed on relabeled offline transitions, enabling us to use information acquired throughout training and not only the final policy. 2.3 EVALUATING TASK-AGNOSTIC EXPLORATION While there are many proposed methods for exploration, evaluation of exploration methods is varied. Recent work in exploration have proposed a variety of evaluation metrics, including fine-tuning of agents post-exploration (Laskin et al., 2021), sample-efficiency and peak performance of online RL (Whitney et al., 2021), zero-shot transfer of learned dynamics models (Sekar et al.), multienvironment transfer (Parisi et al., 2021), and skill extraction to a separate curriculum (Groth et al., 2021). Task-agnostic exploration has been investigated via random data (Cabi et al., 2019) and intrinsic motivation as a source of data for offline RL (Dasagi et al., 2019; Endrawis et al., 2021), but has only been evaluated in the single-task setting and limited by current ORL implementations. Offline RL is a compelling candidate for evaluating exploration data because of its emerging ability to generalize across experiences in addition to imitating useful behaviors. Complementary work echoes the importance of data collection for offline RL from the perspective of unsupervised RL (Yarats et al., 2022), while our work focuses more on the relationship between the exploration challenges of an environment and how a new exploration algorithm could address current data generation shortcomings. 2.4 OFFLINE REINFORCEMENT LEARNING With Offline Reinforcement Learning, we decouple the learning mechanism from exploration by training agents from fixed datasets. Various recent methods have demonstrated strong performance in the offline setting (Wang et al., 2020; Kumar et al., 2020; Peng et al., 2019; Fujimoto & Gu, 2021). In Explore2Offline we use a variant of Critic Regularised Regression (Wang et al., 2020). Many datasets and benchmarks such as D4RL (Fu et al., 2020) and RL Unplugged (Gulcehre et al., 2020) have been proposed to investigate different approaches. The use of offline datasets has even been extended to improve online RL performance (Nair et al., 2020). 
Our goal is related; instead of investigating multiple offline RL approaches, we investigate mechanisms to generate datasets for downstream tasks. Analysis over the desired state-action and reward distributions for ORL are studied, but little work is done to address how best to generate this data (Schweighofer et al., 2021). On the theory side, recent works have investigated the Explore2Offline setting, which they call “reward-free exploration” (Jin et al., 2020; Kaufmann et al., 2021). These works study algorithms which guarantee the discovery of ε-optimal policies after polynomially many episodes of taskagnostic data collection, though the algorithms they study are not straightforwardly applicable to the high-dimensional deep RL setting with function approximation. 2.5 REWARD RELABELLING By using off-policy or even offline learning, data generated for one task and reward can be applied to learn a variety of potential tasks. In off-policy RL, we can identify useful rewards for an existing trajectory based on later states from the same trajectories (Andrychowicz et al., 2017), uncertainty over a trajectory (Nasiriany et al., 2021), distribution of goals (Nasiriany et al., 2021), related tasks (Riedmiller et al., 2021; Wulfmeier et al., 2019), agent-intrinsic tasks (Wulfmeier et al., 2021), via inverse reinforcement learning (Eysenbach et al., 2020) as well as other mechanisms (Li et al., 2020). In the context of pure offline RL, we can go one step further as we are not required to find the optimal tasks for stored trajectory data. In this setting, data can be used for learning with a massive set of rewards such as all states visited along stored trajectories (Chebotar et al., 2021). We will evaluate our approaches for exploration across downstream tasks and relabel data with all possible tasks. 3 METHODOLOGY 3.1 REINFORCEMENT LEARNING Reinforcement Learning (RL) is a framework where an agent interacts with an environment to solve a task by trial and error. The objective of an agent is often to maximize the cumulative future reward on a predetermined task, E[∑∞τ=0 γτrτ ∣s0 = st].We utilize the setting where an agent’s interactions with an environment are modeled as a Markov Decision Process (MDP). A MDP is defined by a state of the environment s, an action a that is taken by an agent according to a policy πθ(st), a transition function p(st+1∣st, at) governing the next state distribution, and a discount factor γ ∈ [0, 1] weighting future rewards. With a transition in dynamics, the agent receives a reward rt from the environment and stores the SARS data in a dataset D ∶ {sk, ak, rk, sk+1}. Alternatively to this environment-centric reward formulation is the concept of intrinsic rewards, where the agent maximizes an internal notion of reward in an task-agnostic manner to collect data. 3.2 CURIOSITY-DRIVEN EXPLORATION Existing Methods Reaching new, valuable areas of the state-space is crucial to solving sparse tasks with RL. One method to balance attaining new experiences, exploration, with the goal of solving a task, exploitation, is using curiosity models. Curiosity models are a subset of intrinsic rewards an agent can use to explore by creating a reward signal, rint.. These models encourage exploration by optimizing the signal from a learned model that corresponds to a modeling error or uncertainty, which often occurs at states that have not been visited frequently. 
We deploy a series of intrinsic models: the simplest, Next Step Model Error maximizes the error of a learned one-step model rint. = ∥ŝt+1 − st+1∥2, Random Network Distillation (RND) maximizes the distance of a learned state encoding to that of a static encoding rint. = ∥η̂(st) − η(s)∥2 (Burda et al., 2018b), the Intrinsic Curiosity Module (ICM) maximizes the error on a forward dynamics model learned in the latent space, φ(s), of a inverse dynamics model rint. = ∥φ̂t − φ∥2 (Pathak et al., 2017), and Dynamics Disagreement (DD) maximizes the variance of an ensemble of learned one-step dynamics models rint. = σ(ŝit+1) (Pathak et al.). Intrinsic Model Predictive Control Model Predictive Control (MPC) on a learned model has been used for control across a variety of simulated an real world settings (Wieber; Camacho & Alba, 2013), including recently with modelbased reinforcement learning (MBRL) algorithms (Williams et al., 2017; Chua et al., 2018; Lambert et al., 2019). MBRL using MPC is an iterative loop of learning a predictive model of environment dynamics fθ(⋅) (e.g. a one-step transition model), and acting in the environment through the use of model based planning with the learned model. This planning step usually involves optimizing for a sequence of actions that maximizes the expected future reward (Eqn. 1), for example, via sample based optimization; the MPC loop executes the first action of this sequence followed by replanning. a = argmax at∶t+τ τ ∑ t r(ŝt, at), s.t. ŝt+1 = fθ(st, at). (1) The reward function r defines the behavior of the planned sequence of actions in model-based planning. For task-specific RL this can be the task reward function (known or estimated from data). Instead, our Intrinsic MPC (IMPC) agent uses a curiosity based reward for planning in order to encourage task agnostic exploration by reasoning about what states are currently interesting and novel. The goal of planning being used to visit new interesting states, rather than states that were recorded with high intrinsic reward in the replay memory is shown in Fig. 2. This evaluation occurs by sampling action sequences, unrolling them using the forward dynamics model, scoring the rollouts with the learned curiosity model, and finally taking the first action of the sequence with the highest score. Given that this evaluation happens with access to only imagined states and a proposed action, only a subset of intrinsic models can be used with planning, as summarized in Tab. 1. We primarily evaluate IMPC using the RND curiosity model, but we also present results with the DD model. We use the Cross Entropy Method (CEM) (De Boer et al.), a sample based optimization procedure for planning. Inspired by prior work Byravan et al. (2021) we use a policy to generate action candidates for the planner; this policy is trained using the Maximum a-posteriori Policy Optimization (MPO) algorithm (Abdolmaleki et al., 2018) from data generated by the MPC actor. Additionally, to amortize the cost of planning we interleave planning with directly executing actions sampled from the learned policy. This is achieved by specifying a planning probability 0 < ρ < 1; at each step in the actor loop we choose either to plan or execute the policy action according to ρ (we use ρ = 0.9). Additional algorithmic details are included in Appendix A.1. 3.3 OFFLINE REINFORCEMENT LEARNING To train an agent offline from task-agnostic exploration data, we determine rewards from observations in hindsight. 
While the approach relies on the ability to compute rewards based on observations, a large set of tasks can be described in this manner (Li et al., 2020). In comparison to online learning, we have the benefit that we do not need to determine for which task data is most informative given a commensurate number of tasks. Instead we can relabel the data with all possible rewards to maximise its utility. Given the new task rewards, we replace the intrinsic reward in our trajectory data and apply a variant of a recent state-of-the-art offline RL algorithm, Critic Regularised Regression (CRR) Wang et al. (2020). While we apply CRR for our investigation, the overall method is general in that it could be applied with other approaches. We iteratively update critic and actor optimising their respective losses following Equation 2 and 3. LQ = EB[D(Qθ(st, at), (rt + γEat∼π(st+1)Qθ′(st+1, at)))], (2) Since we use a distributional categorical critic, we apply the divergence measure D instead of the squared Euclidean loss, following (Bellemare et al.). With f = ReLU(Â(st, at)) and  the advantage estimator via Qπ(st, at) − 1/m∑Ni=1Qπ(st, ai), the policy is optimized as: π(at∣st) = argmax π E(st,at)∼B[f(Qπ, π, st, at) log π(a∣s)]. (3) 4 EXPERIMENTS 4.1 EXPERIMENTAL SETTING Exploration Agents In this work we benchmark a variety of task-agnostic exploration agents. We classify agents as reactive, selecting actions with a policy, or planning, selecting actions by optimizing over trajectories. The reactive agents are trained with Maximum Apriori Optimization (MPO) (Abdolmaleki et al., 2018) and include the curiosity models RND, DD, ICM, and NS. We compare these to IMPC with DD and RND optimized with CEM and a fixed random agent that samples from the action distribution. Some figures will include a label of MPO, which corresponds to the benchmark of the task-aware RL agents, which provides interesting context to Explore2Offline. The online agent represents the state-of-the-art performance when the task is known a priori – matching it without an environment reward function would highlight the potential of Explore2Offline. Environments In this work, we investigate the exploration performance of a variety of DeepMind Control Suite tasks (Tassa et al., 2018). We evaluate 4 domains (Ball-in-Cup, Finger, Reacher, and Walker) including 14 tasks with a variety of state-action sizes, with further details included in the Appendix. In order to include more challenging environments for task-agnostic exploration, we use modifications proposed by Whitney et al. (2021), Explore Suite, which include constrained initial states and sparser reward functions. 4.2 TASK-AGNOSTIC DATA COLLECTION We compare the amount of task-reward received per-episode across a variety of agents and tasks, which is an intuitive metric for exploration agent performance but can only act as a proxy for exploration. To evaluate taskagnostic reward, the exploration agents are run without access to the environment rewards, with the rewards are relabelled after. The distributions of normalized reward achieved during the 5000 training episodes for four example tasks are documented for all of the exploration agents in Fig. 3. Crucially, the Explore Suite variants are challenging for the random agent across its lifetime, where all of the examples shown have median episode-reward of 0. There is a wide diversity in the agent-task pairings by proxy of measuring experienced reward, showcasing the large potential of future work to better understand this area. 
To give an overall view of data collection for each agent, we show in Fig. 4 (left) the Control and Explore Suite average collected reward across all tasks. The median, 90th and 10th percentiles are Finger, Turn-Easy Finger, Turn-Easy Explore Walker, Walk Walker, Walk Explore shown for each agent, combined across tasks. Here we also show what reward a task-aware agent (MPO) will collect in its lifetime. It is included to indicate an upper target for exploration, rather than a competitive baseline. An important artifact here is the random agent’s flatness of reward achieved across time – the other exploration agents show an increase in median reward as the dataset size grows (especially on Explore Suite). This change in reward distribution across dataset size indicates a diversity of behaviors in the intrinsic exploration agents, while the random agent receives reward from the same distribution repeatedly. This can be seen as a curiosity-based agent such as IMPC with RND achieves varied rewards and the random agent has a repeated reward distribution. While the collected reward can be a indicator of the usefulness of an exploration agent, it is not directly transferable to a task-focused policy capable of solving tasks. Without careful environment design, task-agnostic agents focusing on novel states will cover the entire state-space regardless of the predefined downstream task. 4.3 EVALUATING OFFLINE RL ON EXPLORATION DATA We study the performance of intrinsic agents via a SOTA offline reinforcement learning algorithm, Critic Regularized Regression (CRR) (Wang et al., 2020). For each agent, 3 policies were trained on dataset sizes within the range of 2 × 103 to 5 × 106 environment steps. The mean of the performance across all explore and control suite tasks with confidence intervals is shown in Fig. 4 (right). Due to the width of this study, we were only able to evaluate the offline RL performance on 3 seeds per each collected dataset – the median, max and min across these three policies from CRR are shown in each subplot. Here, we see three core findings that we will continue to detail: 1) for dataset sizes < 1 × 105, there is little benefit to task-aware learning for data collection and the random agent is a strong baseline versus other intrinsic model-based agents; 2) on larger datasets, the task-aware RL method, MPO, jumps ahead of the explorers, but the exploration agents all continue to improve in offline RL performance with more data; 3) the novel IMPC approach with RND, along with existing methods of MPO with RND or DD, performs best on average with the largest datasets. To showcase which tasks are solved by the Explore2Offline framework, we document the median performance of the exploration agents with the full 5 × 106 steps training set size versus the taskaware MPO in Tab. 2. Explore2Offline with these agents solve all but the Walker domain tasks, with further results included in the Appendix. To highlight why dataset size and observed reward are such powerful indicators of ORL performance, we show in Fig. 5 the correlation for the Finger Turn and Walker Walk tasks of the cumulative reward in a dataset versus the offline RL performance for that dataset. There is a clear trend of more reward resulting in a better policy for the tasks paired with the environment. Fig. 7 uses Spearman’s rank correlation to visualise how dataset size is a considerably better predictor of performance than any reward statistics including mean, sum or 80% quantile. 
This further emphasises the importance of transferring via increasingly large datasets rather than via the final exploration policy. In the next section we evaluate how re-using data from task-agnostic exploration can enable multi-task performance.

4.4 EVALUATING TASK-AGNOSTIC DATA FOR MULTITASK RL

A key motivation for collecting task-agnostic data is its applicability when there are a variety of downstream tasks of interest, including those which might not be known at data collection time. In the ideal case, task-agnostic exploration could collect a single dataset, and offline RL could then consume that data (along with relabeled rewards) to solve arbitrary tasks. We evaluate the quality of datasets collected by various exploration agents for use with multiple downstream tasks. For this evaluation we collected one dataset for the Pointmass and Finger environments using each of seven exploration agents, then trained policies for downstream tasks by relabeling the data with different reward functions. Each of the tasks is defined by a sparse +1 reward corresponding to a particular goal state. As a baseline, the Online agent collects experience using a standard online RL algorithm as it learns to solve a "Training" task, while all of the other data collection agents are task-agnostic. The datasets collected by each agent are evaluated with offline RL on the "Training" task and three others: "Easy Transfer", "Medium Transfer", and "Hard Transfer". These transfer tasks are ranked by the level of challenge a task-aware agent faces in generalizing to them. Tasks increase in challenge when goals require moving farther away from the training goal, which can mean either travelling further in the same direction (often easier) or being entirely misaligned (harder generalization). Due to the dynamics of the systems, farther-away targets may be easier to discover, e.g. the "Medium Transfer" target for Pointmass lies in the corner of the arena. Full details of these environments, along with experiments on three more, are available in Appendix A.4. The performance of the exploration agents and the task-aware MPO agent varies across the tasks as shown in Fig. 6. Depicted is the mean reward achieved by offline RL policies trained on 3 random seeds for each of 3 datasets of 5 × 10^6 transitions. While collecting data specifically for the target downstream task is the best option when the data will be used only on that task, task-aware performance can degrade on even a slightly misaligned test task when compared to its task-agnostic counterparts. The potential for multitask transfer of exploration agents is highlighted, but further work is needed in more open-ended environments to show the full potential of Explore2Offline.

5 DISCUSSION

Explore2Offline points to interesting directions for further understanding and utilizing task-agnostic exploration agents. To start, there are two trends that point to a need for further work on exploration methods. On average across our evaluation suite, the random agent performs very closely to the curiosity-based methods, and any particular exploration method varies substantially across tasks. The performance of the random agent suggests some similarities between the data collected by the random agent and by the curiosity-based methods. As mentioned previously, curiosity-based methods are exhaustive (given enough time), and do not consider useful trends that may be common in downstream tasks.
This shows a need for future exploration methods that can prioritize interesting subsets of a state-space and generalize across domains to create flexible agents. Although our evaluation demonstrates the potential of using offline RL on task-agnostic data, there is substantial variation across task-agent pairings with the single fixed offline RL algorithm we chose (CRR). This variation needs to be studied in more detail to better understand the limitations posed by the algorithm, and to differentiate them from the quality of the data itself. The intrinsic MPC agent can be developed further by using it in other forms of deep RL evaluation. By transferring a learned dynamics model, this flexible exploration agent could also be evaluated as a task-aware agent (i.e. zero-shot learning of a new task) or in online RL by weighting the intrinsic reward model against the environment reward (i.e. a better explore-exploit balance).

6 CONCLUSION

We introduce Explore2Offline, a method for utilizing task-agnostic data for policy learning of unknown downstream tasks. We describe how an agent can be used to collect the requisite data once for solving multiple tasks, and demonstrate performance comparable to an online learning agent. Additionally, we show that policies trained on task-agnostic data may be more robust to variations of the initial task than task-aware learning, resulting in better transfer performance. Finally, data from the new exploration agent, Intrinsic Model Predictive Control, performs strongly across many tasks. As offline RL emerges as a useful tool in more domains, a deeper understanding of the data required for learning will be needed. Directions for future work include developing better exploration methods specifically for offline training, and identifying experiences with high information content for effective datasets.

A APPENDIX

Here we include additional experimental context and results.

A.1 ALGORITHMIC DETAILS

A summary of the exploration algorithm, Intrinsic Model Predictive Control, is shown in Alg. 1. We utilize a distributed setup where multiple actors and learners can be deployed concurrently.

Algorithm 1 Intrinsic MPC
Given: randomly initialized proposal $\pi_\theta$, dynamics model $m_\phi$, reward model $r_i$, random critic $Q_\psi$. {Modules to be learned}
Given: planning probability $p_{\text{plan}}$, replay buffer $\mathcal{B}$, MPO loss weight $\alpha$, learning rates and optimizers (ADAM) for the different modules. {Known modules and parameters}
{Exploration loop – asynchronously on the actors}
while True do
  Initialize ENV and observe state $s_0$.
  while episode is not terminated do
    {Choose between the intrinsic planner and the proposal policy depending on $p_{\text{plan}}$}
    {Use the learned proposal $\pi_\theta$ as the proposal for the planner}
    Sample $x \sim U[0, 1]$
    $a_t \sim \mathrm{PLANNER}(s_t, \pi_\theta, m_\phi, r_i)$ if $x \le p_{\text{plan}}$, else $a_t \sim \pi_\theta(s_t)$
    Step $\mathrm{ENV}(s_t, a_t) \to s_{t+1}$ and write the transition to the replay buffer $\mathcal{B}$
  end while
end while
{Learning loop – asynchronously on the learner}
while True do
  Sample a batch $B$ of trajectories, each of sequence length $T$, from the replay buffer $\mathcal{B}$
  Label rewards with the reward model $r_i$
  Update the action-value function $Q_\psi$ based on $B$ using Retrace (Munos et al., 2016)
  Update the model $m_\phi$ based on $B$ using multi-step prediction
  Update the reward model $r_i$ based on $B$
  Update the proposal $\pi_\theta$ based on $B$ following Byravan et al. (2021)
end while
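Below is a minimal sketch of the actor-side action selection in Alg. 1, mixing planned and proposal-policy actions with probability p_plan; the `env`, `planner`, `policy`, and `replay` interfaces are assumed placeholders rather than the authors' implementation.

```python
import random

def actor_episode(env, planner, policy, replay, p_plan=0.9):
    """One episode of the exploration loop in Algorithm 1."""
    s = env.reset()
    done = False
    while not done:
        if random.random() <= p_plan:
            a = planner(s)        # CEM over the learned model and intrinsic reward
        else:
            a = policy.sample(s)  # cheap proposal action amortizes planning cost
        s_next, done = env.step(a)  # assumed to return (next_state, done)
        replay.add((s, a, s_next))  # rewards are labelled later by the reward model
        s = s_next
```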
The PLANNER subroutine takes in the current state $s_t$, the action proposal $\pi_\theta$, a dynamics model $m_\phi$ that predicts the next state $s_{t+1}$ given the current state $s_t$ and action $a_t$, and the reward model $r_i(s_t, a_t)$. Optionally, a learned state-value function $V_\psi(s)$ (with parameters $\psi$) that predicts the expected return from state $s$ can be provided. We use the Cross-Entropy Method (CEM) (Botev et al.), shown in Alg. 2.

Algorithm 2 CEM planner
Given: state $s_0$, action proposal $\pi_\theta$, dynamics model $m_\phi$, reward model $r_i$, planning horizon $H$, number of samples $S$, elite fraction $E$, noise standard deviation $\sigma_{\text{init}}$, and number of iterations $I$.
{Roll out the proposal distribution using the model}
$(s_0, a_0, s_1, \ldots, s_H) \leftarrow \mathrm{proposal}(m_\phi, \pi_\theta, H)$
$\mu \leftarrow [a_0, a_1, \ldots, a_H]$ {initial plan}
$\sigma \leftarrow \sigma_{\text{init}}$
{Evaluate candidate action sequences open loop according to the model and compute the associated returns}
for $i = 1 \ldots I$ do
  for $k = 1 \ldots S$ do
    $p_k \sim \mathcal{N}(\mu, \sigma)$ {Sample candidate actions}
    $r_k \leftarrow \mathrm{evaluate\_actions}(m_\phi, p_k, H, r_i)$
  end for
  Rank the candidate sequences by reward and retain the top $E$ fraction.
  Compute the mean $\mu_{\text{elite}}$ and per-dimension standard deviation $\sigma_{\text{elite}}$ of the retained elite sequences.
  $\mu \leftarrow (1 - \alpha_{\text{mean}})\mu + \alpha_{\text{mean}}\mu_{\text{elite}}$ {Update mean; $\alpha_{\text{mean}} = 0.9$}
  $\sigma \leftarrow (1 - \alpha_{\text{std}})\sigma + \alpha_{\text{std}}\sigma_{\text{elite}}$ {Update standard deviation; $\alpha_{\text{std}} = 0.5$}
end for
return the first action in $\mu$

A NumPy sketch of this loop is provided at the end of this appendix.

A.2 ADDITIONAL ENVIRONMENT DETAILS

The state and action dimensions $(d_s, d_a)$ and descriptions of the environments used in this paper are detailed in Tab. 3. Additional collected reward distributions are shown in Fig. 11.

A.3 FULL OFFLINE RL AGENT-TASK PERFORMANCES

To supplement the results discussed in Sec. 4.3, we have included the performance per dataset size for all agents across all tasks. The results are shown in Fig. 13 and illustrate the considerable variation present when studying any given agent or task. There is substantially more variation across tasks than across agents, showing the value in continuing to fine-tune a set of tasks for benchmarking task-agnostic agents. The mean performance across all tasks is shown in Fig. 9, with a per-task breakdown shown in Table 4. A subset of agents and tasks is shown in Fig. 8 to illustrate the convergence on a set of tasks.

A.4 ADDITIONAL MULTI-TASK LEARNING EXPERIMENTS

To complement Sec. 4.4, we have included additional experiments for multi-task learning in the Reacher, Cheetah, and Walker environments, shown in Fig. 10. For these tasks, the benefit of using task-agnostic learning to generate data for offline RL policy generation is less clear. In our experience, this limited performance can be due to the fact that the environments are designed with specific behaviors and algorithms in mind, reducing the need for a diverse exploration method. A description of the starting state and the goal state for each task, as well as for the Pointmass and Finger environments from the main text, is available in Table 4.
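To make Alg. 2 concrete, the following is a compact NumPy sketch of the CEM loop under simplifying assumptions: deterministic callables `model(s, a)` and `reward(s, a)` stand in for $m_\phi$ and $r_i$, the proposal rollout is replaced by an externally supplied initial plan `mu`, and no value-function bootstrap is used. The smoothing weights follow the values quoted in the algorithm.

```python
import numpy as np

def cem_plan(s0, mu, model, reward, samples=128, elite_frac=0.1,
             sigma_init=0.3, iters=4, a_mean=0.9, a_std=0.5):
    """mu: (H, action_dim) initial plan, e.g. from the proposal rollout."""
    horizon = len(mu)
    sigma = np.full_like(mu, sigma_init)
    n_elite = max(1, int(elite_frac * samples))
    for _ in range(iters):
        plans = mu + sigma * np.random.randn(samples, *mu.shape)
        returns = np.zeros(samples)
        for k in range(samples):            # open-loop rollout under the model
            s = s0
            for t in range(horizon):
                returns[k] += reward(s, plans[k, t])
                s = model(s, plans[k, t])
        elite = plans[np.argsort(returns)[-n_elite:]]   # keep top-E fraction
        mu = (1 - a_mean) * mu + a_mean * elite.mean(axis=0)
        sigma = (1 - a_std) * sigma + a_std * elite.std(axis=0)
    return mu[0]                            # execute only the first action
```

Only the first action of the refined plan is executed before replanning, as in standard MPC.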
1. What is the focus and contribution of the paper regarding offline reinforcement learning?
2. What are the strengths of the proposed approach, particularly in terms of curiosity-based intrinsic motivation methods?
3. What are the weaknesses of the paper, especially regarding its relation to prior works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper studies offline RL from a less-studied perspective: collecting informative experiences. The authors investigate the task-agnostic setting based on curiosity-based intrinsic motivation methods. The authors propose the Explore2Offline framework and conduct an extensive empirical study to investigate the effect of data collection strategies.

Strengths And Weaknesses
Strengths
The paper is clearly written and easy to follow. The authors also conduct extensive experiments to investigate data collection in offline RL based on a number of intrinsic-motivation-based exploration algorithms, revealing interesting findings.

Weaknesses
My main concern with the paper is its novelty. Although I acknowledge that the paper studies offline RL from a less-studied perspective (data collection instead of policy learning that aims to address extrapolation error), which is very interesting, it appears closely related to [1] without enough discussion of the differences. Specifically, [1] also studies data collection in offline RL, and it is worth discussing the differences between the two works.

[1] Denis Yarats, David Brandfonbrener, Hao Liu, Michael Laskin, Pieter Abbeel, Alessandro Lazaric, and Lerrel Pinto. Don't change the algorithm, change the data: Exploratory data for offline reinforcement learning, 2022.

Clarity, Quality, Novelty And Reproducibility
Please see my comments in the above section.
ICLR
Title
The Challenges of Exploration for Offline Reinforcement Learning

Abstract
Offline Reinforcement Learning (ORL) enables us to separately study the two interlinked processes of reinforcement learning: collecting informative experience and inferring optimal behaviour. The second step has been widely studied in the offline setting, but just as critical to data-efficient RL is the collection of informative data. The task-agnostic setting for data collection, where the task is not known a priori, is of particular interest due to the possibility of collecting a single dataset and using it to solve several downstream tasks as they arise. We investigate this setting via curiosity-based intrinsic motivation, a family of exploration methods which encourage the agent to explore those states or transitions it has not yet learned to model. With Explore2Offline, we propose to evaluate the quality of collected data by transferring the collected data and inferring policies with reward relabelling and standard offline RL algorithms. We evaluate a wide variety of data collection strategies, including a new exploration agent, Intrinsic Model Predictive Control (IMPC), using this scheme and demonstrate their performance on various tasks. We use this decoupled framework to strengthen intuitions about exploration and the data prerequisites for effective offline RL.

1 INTRODUCTION
The field of offline reinforcement learning (ORL) is growing quickly, motivated by its promise to use previously-collected datasets to produce new high-quality policies. It enables the disentangling of the collection and inference processes underlying effective RL (Riedmiller et al., 2021). To date, the majority of research in the offline RL setting has focused on the inference side, the extraction of a performant policy given a dataset, but just as crucial is the development of the dataset itself. While challenges of the inference step are increasingly well investigated (Levine et al., 2020; Agarwal et al., 2020), we instead investigate the collection step. For evaluation, we investigate correlations between the properties of collected data and final performance, how much data is necessary, and the impact of different collection strategies. Whereas most existing benchmarks for ORL (Fu et al., 2020; Gulcehre et al., 2020) focus on the single-task setting with the task known a priori, we evaluate the potential of task-agnostic exploration methods to collect datasets for previously-unknown tasks. Task-agnostic data is an exciting avenue to pursue to illuminate potential tasks of interest in a space via unsupervised learning. In this setting, we transfer information from the unsupervised pretraining phase not via the policy (Yarats et al., 2021) but via the collected data. Historically, the question of how to act, and therefore collect data, in RL has been studied through the exploration-exploitation trade-off, which amounts to a balance of an agent's goals in solving a task immediately versus collecting data to perform better in the future. Task-agnostic exploration expands this well-studied direction towards how to explore in the absence of knowledge about current or future agent goals (Dasagi et al., 2019). In this work, we particularly focus on intrinsic motivation (Oudeyer & Kaplan, 2009), which explores novel states based on rewards derived from the agent's internal information.
These intrinsic rewards can take many forms, such as curiosity-based methods that learn a world model (Burda et al., 2018b; Pathak et al., 2017; Shyam et al., 2019), data-based methods that optimize statistical properties of the agent's experience (Yarats et al., 2021), or competence-based metrics that extract skills (Eysenbach et al., 2018). In particular, we perform a wide study of data collected via curiosity-based exploration methods, similar to ExORL (Yarats et al., 2022). In addition, we introduce a novel method for effectively combining curiosity-based rewards with model predictive control. In Explore2Offline, we use offline RL as a mechanism for evaluating the exploration performance of these curiosity-based models, which separates the fundamental feedback loop key to RL in order to disentangle questions of collection and inference (Riedmiller et al., 2021), as displayed in Fig. 1. With this methodology, our paper makes a series of contributions toward understanding the properties and applications of data collected by curiosity-based agents.

Contribution 1: We propose Explore2Offline to combine offline RL and reward relabelling for transferring information gained in the data from task-agnostic exploration to downstream tasks. Our results showcase how experiences from intrinsic exploration can solve many tasks, in some cases reaching performance similar to state-of-the-art online RL data collection.

Contribution 2: We propose Intrinsic Model Predictive Control (IMPC), which combines a learned dynamics model and a curiosity approach to enable online planning for exploration, minimizing the potential for stale intrinsic rewards. A large sweep over existing and new methods shows where task-agnostic exploration succeeds and where it fails.

Contribution 3: By investigating multi-task downstream learning, we highlight a further strength of task-agnostic data collection: each datapoint can be assigned multiple rewards in hindsight.

2 RELATED WORKS

2.1 CURIOSITY-DRIVEN EXPLORATION

Intrinsic exploration is a well-studied direction in reinforcement learning with the goal of enabling agents to generate compelling behavior in any environment through an internal reward representation. Curiosity-driven learning uses learned models to reward agents that reach states with high modelling error or uncertainty. Many recent works use the prediction error of a learned neural network model to reward agents that see new states (Burda et al., 2018b; Pathak et al., 2017). Often, intrinsic curiosity agents are trained with on-policy RL algorithms such as Proximal Policy Optimization (PPO) to maintain recent reward labels for visited states. Burda et al. (2018a) conducted a broad study of different intrinsic reward models, focusing on pixel-based learning. Instead, we use off-policy learning and re-label the intrinsic rewards associated with a tuple when learning the policy. Other strategies for exploring with learned dynamics models reward agents based on the variance of the predictions (Pathak et al.; Sekar et al.) or of the value function (Lowrey et al., 2018). We build on recent advancements in intrinsic curiosity with the Intrinsic Model Predictive Control agent, which has two new properties: online planning of which states to explore, and a reward model separate from the dynamics model used for control.
2.2 UNSUPERVISED PRETRAINING IN RL

Recent works have proposed a two-phase RL setting consisting of a long "pretraining" phase in a version of the environment without rewards, and a sample-limited "task learning" phase with visible rewards (Schwarzer et al., 2021). In this setting the agent attempts to learn task-agnostic information about the environment in the first phase, then rapidly re-explores to find rewards and produce a policy specialized to the task. Various methods have addressed this setting with diverse policy ensembles, such as a policy conditioned on a random variable whose marginal state distribution exhibits high coverage (Eysenbach et al., 2018), or finding a set of policies with diverse successor features (Hansen et al., 2020). Similarly, Liu & Abbeel (2021) learn a single policy which approximately maximizes the estimated entropy of its state distribution in a contrastive representation space. Another strategy collects diverse data during the pretraining phase and uses it to learn representations and exploration rewards that are beneficial for downstream tasks (Yarats et al., 2021). A central focus of all of these methods is delivering agents which can explore efficiently at task-learning time. While the pretraining and data collection phases of unsupervised RL pretraining and Explore2Offline (respectively) are similar, in Explore2Offline the task learning phase is performed on relabeled offline transitions, enabling us to use information acquired throughout training and not only the final policy.

2.3 EVALUATING TASK-AGNOSTIC EXPLORATION

While there are many proposed methods for exploration, the evaluation of exploration methods is varied. Recent works in exploration have proposed a variety of evaluation metrics, including fine-tuning of agents post-exploration (Laskin et al., 2021), sample-efficiency and peak performance of online RL (Whitney et al., 2021), zero-shot transfer of learned dynamics models (Sekar et al.), multi-environment transfer (Parisi et al., 2021), and skill extraction to a separate curriculum (Groth et al., 2021). Task-agnostic exploration has been investigated via random data (Cabi et al., 2019) and intrinsic motivation as a source of data for offline RL (Dasagi et al., 2019; Endrawis et al., 2021), but it has only been evaluated in the single-task setting and has been limited by current ORL implementations. Offline RL is a compelling candidate for evaluating exploration data because of its emerging ability to generalize across experiences in addition to imitating useful behaviors. Complementary work echoes the importance of data collection for offline RL from the perspective of unsupervised RL (Yarats et al., 2022), while our work focuses more on the relationship between the exploration challenges of an environment and how a new exploration algorithm could address current data-generation shortcomings.

2.4 OFFLINE REINFORCEMENT LEARNING

With Offline Reinforcement Learning, we decouple the learning mechanism from exploration by training agents from fixed datasets. Various recent methods have demonstrated strong performance in the offline setting (Wang et al., 2020; Kumar et al., 2020; Peng et al., 2019; Fujimoto & Gu, 2021). In Explore2Offline we use a variant of Critic Regularised Regression (Wang et al., 2020). Many datasets and benchmarks such as D4RL (Fu et al., 2020) and RL Unplugged (Gulcehre et al., 2020) have been proposed to investigate different approaches. The use of offline datasets has even been extended to improve online RL performance (Nair et al., 2020).
Our goal is related; instead of investigating multiple offline RL approaches, we investigate mechanisms to generate datasets for downstream tasks. Analyses of the desired state-action and reward distributions for ORL have been conducted, but little work addresses how best to generate this data (Schweighofer et al., 2021). On the theory side, recent works have investigated the Explore2Offline setting, which they call "reward-free exploration" (Jin et al., 2020; Kaufmann et al., 2021). These works study algorithms which guarantee the discovery of ε-optimal policies after polynomially many episodes of task-agnostic data collection, though the algorithms they study are not straightforwardly applicable to the high-dimensional deep RL setting with function approximation.

2.5 REWARD RELABELLING

By using off-policy or even offline learning, data generated for one task and reward can be applied to learn a variety of potential tasks. In off-policy RL, we can identify useful rewards for an existing trajectory based on later states from the same trajectories (Andrychowicz et al., 2017), uncertainty over a trajectory (Nasiriany et al., 2021), a distribution of goals (Nasiriany et al., 2021), related tasks (Riedmiller et al., 2021; Wulfmeier et al., 2019), agent-intrinsic tasks (Wulfmeier et al., 2021), inverse reinforcement learning (Eysenbach et al., 2020), and other mechanisms (Li et al., 2020). In the context of pure offline RL, we can go one step further, as we are not required to find the optimal tasks for stored trajectory data. In this setting, data can be used for learning with a massive set of rewards, such as all states visited along stored trajectories (Chebotar et al., 2021). We will evaluate our approaches for exploration across downstream tasks and relabel data with all possible tasks.

3 METHODOLOGY

3.1 REINFORCEMENT LEARNING

Reinforcement Learning (RL) is a framework where an agent interacts with an environment to solve a task by trial and error. The objective of an agent is often to maximize the cumulative future reward on a predetermined task, $\mathbb{E}\big[\sum_{\tau=0}^{\infty} \gamma^{\tau} r_{\tau} \mid s_0 = s_t\big]$. We utilize the setting where an agent's interactions with an environment are modeled as a Markov Decision Process (MDP). An MDP is defined by a state of the environment $s$, an action $a$ taken by an agent according to a policy $\pi_\theta(s_t)$, a transition function $p(s_{t+1} \mid s_t, a_t)$ governing the next-state distribution, and a discount factor $\gamma \in [0, 1]$ weighting future rewards. With a transition in the dynamics, the agent receives a reward $r_t$ from the environment and stores the SARS tuple in a dataset $D: \{s_k, a_k, r_k, s_{k+1}\}$. An alternative to this environment-centric reward formulation is the concept of intrinsic rewards, where the agent maximizes an internal notion of reward in a task-agnostic manner to collect data.

3.2 CURIOSITY-DRIVEN EXPLORATION

Existing Methods. Reaching new, valuable areas of the state-space is crucial to solving sparse tasks with RL. One method to balance attaining new experiences (exploration) with the goal of solving a task (exploitation) is to use curiosity models. Curiosity models are a subset of intrinsic rewards an agent can use to explore by creating a reward signal $r_{\text{int}}$. These models encourage exploration by optimizing the signal from a learned model that corresponds to a modeling error or uncertainty, which is often largest at states that have not been visited frequently. We deploy a series of intrinsic models. The simplest, Next Step Model Error, maximizes the error of a learned one-step model, $r_{\text{int}} = \lVert \hat{s}_{t+1} - s_{t+1} \rVert^2$. Random Network Distillation (RND) maximizes the distance between a learned state encoding and a static encoding, $r_{\text{int}} = \lVert \hat{\eta}(s_t) - \eta(s_t) \rVert^2$ (Burda et al., 2018b). The Intrinsic Curiosity Module (ICM) maximizes the error of a forward dynamics model learned in the latent space $\phi(s)$ of an inverse dynamics model, $r_{\text{int}} = \lVert \hat{\phi}_t - \phi_t \rVert^2$ (Pathak et al., 2017). Dynamics Disagreement (DD) maximizes the variance of an ensemble of learned one-step dynamics models, $r_{\text{int}} = \sigma(\hat{s}^{\,i}_{t+1})$ (Pathak et al.).
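As a concrete example of these curiosity signals, below is a minimal PyTorch sketch of the RND bonus: the intrinsic reward is the squared error between a trainable predictor network and a frozen, randomly initialized target encoder. The two-layer MLPs and embedding size are illustrative choices, not the architecture used in the paper.

```python
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class RND(nn.Module):
    def __init__(self, state_dim, embed_dim=64):
        super().__init__()
        self.target = mlp(state_dim, embed_dim)     # frozen random encoder, eta
        self.predictor = mlp(state_dim, embed_dim)  # trained encoder, eta-hat
        for p in self.target.parameters():
            p.requires_grad = False

    def intrinsic_reward(self, s):
        # r_int = ||eta_hat(s) - eta(s)||^2; the same quantity serves as the
        # predictor's training loss, so the bonus decays for familiar states
        return (self.predictor(s) - self.target(s)).pow(2).sum(-1)
```

Because the bonus decays for frequently visited states, planning with a separately learned reward model, as in IMPC below, is one way to keep such signals from going stale.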
Intrinsic Model Predictive Control. Model Predictive Control (MPC) on a learned model has been used for control across a variety of simulated and real-world settings (Wieber; Camacho & Alba, 2013), including recently with model-based reinforcement learning (MBRL) algorithms (Williams et al., 2017; Chua et al., 2018; Lambert et al., 2019). MBRL using MPC is an iterative loop of learning a predictive model of the environment dynamics $f_\theta(\cdot)$ (e.g. a one-step transition model) and acting in the environment through model-based planning with the learned model. This planning step usually involves optimizing for a sequence of actions that maximizes the expected future reward (Eqn. 1), for example via sample-based optimization; the MPC loop executes the first action of this sequence followed by replanning.

$$a = \operatorname*{argmax}_{a_{t:t+\tau}} \sum_{t'=t}^{t+\tau} r(\hat{s}_{t'}, a_{t'}), \quad \text{s.t.}\;\; \hat{s}_{t'+1} = f_\theta(\hat{s}_{t'}, a_{t'}). \quad (1)$$

The reward function $r$ defines the behavior of the planned sequence of actions in model-based planning. For task-specific RL this can be the task reward function (known or estimated from data). Instead, our Intrinsic MPC (IMPC) agent uses a curiosity-based reward for planning in order to encourage task-agnostic exploration by reasoning about which states are currently interesting and novel. Fig. 2 illustrates the goal of using planning to visit new interesting states, rather than states that were recorded with high intrinsic reward in the replay memory. This evaluation occurs by sampling action sequences, unrolling them using the forward dynamics model, scoring the rollouts with the learned curiosity model, and finally taking the first action of the sequence with the highest score. Given that this evaluation happens with access to only imagined states and a proposed action, only a subset of intrinsic models can be used with planning, as summarized in Tab. 1. We primarily evaluate IMPC using the RND curiosity model, but we also present results with the DD model. We use the Cross-Entropy Method (CEM) (De Boer et al.), a sample-based optimization procedure, for planning. Inspired by prior work (Byravan et al., 2021), we use a policy to generate action candidates for the planner; this policy is trained using the Maximum a-posteriori Policy Optimization (MPO) algorithm (Abdolmaleki et al., 2018) from data generated by the MPC actor. Additionally, to amortize the cost of planning we interleave planning with directly executing actions sampled from the learned policy. This is achieved by specifying a planning probability 0 < ρ < 1; at each step in the actor loop we choose either to plan or to execute the policy action according to ρ (we use ρ = 0.9). Additional algorithmic details are included in Appendix A.1.

3.3 OFFLINE REINFORCEMENT LEARNING

To train an agent offline from task-agnostic exploration data, we determine rewards from observations in hindsight.
1. What is the focus of the paper regarding reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of exploration criteria and dataset size?
3. Do you have any concerns or questions regarding the paper's conclusions and their potential implications for practitioners?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any other works related to unsupervised exploration that the reviewer thinks should be discussed in the paper?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper presents a study of a reinforcement learning setting consisting of a task-agnostic data collection phase and a task-aware offline optimization phase, on a set of continuous control tasks. By comparing the performance obtained when using different exploration criteria (combined with a new MPC-based method) in the data collection phase, as well as different dataset sizes, this work tries to draw conclusions about the features we should expect to find in offline RL datasets for a downstream algorithm to perform well.

Strengths And Weaknesses
Strengths:
The problem addressed in the paper is of paramount importance: increasing our understanding of the kind of datasets that are needed to reach good performance with offline reinforcement learning has the potential both to shape the design of future offline RL algorithms and to have practical implications.
The proposed MPC-based exploration method, which could be of independent interest, stands out as a clean way (probably cleaner than the reactive counterpart) of comparing the different exploration criteria.
Although probably already known to some researchers in the field, having clearly written down some of the conclusions about the influence of dataset size and reward distribution could be useful to practitioners.

Weaknesses:
The paper only uses the CRR offline RL algorithm for evaluating the downstream effects of constructing different types of datasets. While this is a start, the landscape of offline RL algorithms used in the community is very broad, and there is no baseline as established as in the online RL case. This severely limits the scope of the work, since the reader cannot know a priori whether the conclusions of the study hold only for that specific offline RL algorithm or are indeed more general.
The paper determines that dataset size is possibly the most important variable for predicting the performance of a downstream offline RL algorithm. However, this could be a bit misleading: one could imagine having the same trajectory repeated over and over in the dataset, clearly increasing the dataset size without any increase in downstream performance. Although this is a trivial counterexample, I think it is epistemologically important to pin down the effect of diversity, more than size, on the resulting policy, because diversity is more likely to be the underlying cause of the observed performance increase.
The comparison with previous and parallel work is not very insightful from the reader's perspective. For instance, it would be nice to better understand the relationship with (Yarats et al., 2022), instead of just briefly mentioning its existence. Moreover, there is some similar work on pretraining (https://arxiv.org/abs/2106.04799): it seems important to discuss how the study presented in this work differs from that one.

Clarity, Quality, Novelty And Reproducibility
The paper is mostly clear, although some of the conclusions could be presented more directly. The quality of the work could be improved (are the results relevant only for a single offline RL algorithm? Is dataset size really the important variable?). The work is relatively novel, although it is missing an important discussion of previous and parallel work on unsupervised exploration. There seems to be no apparent reproducibility issue (3 seeds are not great, but I understand the computational constraints the authors might face).
ICLR
Title
Equivariant Shape-Conditioned Generation of 3D Molecules for Ligand-Based Drug Design

Abstract
Shape-based virtual screening is widely used in ligand-based drug design to search chemical libraries for molecules with similar 3D shapes yet novel 2D graph structures compared to known ligands. 3D deep generative models can potentially automate this exploration of shape-conditioned 3D chemical space; however, no existing models can reliably generate geometrically realistic drug-like molecules in conformations with a specific shape. We introduce a new multimodal 3D generative model that enables shape-conditioned 3D molecular design by equivariantly encoding molecular shape and variationally encoding chemical identity. We ensure local geometric and chemical validity of generated molecules by using autoregressive fragment-based generation with heuristic bonding geometries, allowing the model to prioritize the scoring of rotatable bonds to best align the growing conformation to the target shape. We evaluate our 3D generative model in tasks relevant to drug design, including shape-conditioned generation of chemically diverse molecular structures and shape-constrained molecular property optimization, demonstrating its utility over virtual screening of enumerated libraries.

1 INTRODUCTION
Generative models for de novo molecular generation have revolutionized computer-aided drug design (CADD) by enabling efficient exploration of chemical space, goal-directed molecular optimization (MO), and automated creation of virtual chemical libraries (Segler et al., 2018; Meyers et al., 2021; Huang et al., 2021; Wang et al., 2022; Du et al., 2022; Bilodeau et al., 2022). Recently, several 3D generative models have been proposed to directly generate low-energy or (bio)active molecular conformations using 3D convolutional networks (CNNs) (Ragoza et al., 2020), reinforcement learning (RL) (Simm et al., 2020a;b), autoregressive generators (Gebauer et al., 2022; Luo & Ji, 2022), or diffusion models (Hoogeboom et al., 2022). These methods have especially enjoyed accelerated development for structure-based drug design (SBDD), where models are trained to generate drug-like molecules in favorable binding poses inside an explicit protein pocket (Drotár et al., 2021; Luo et al., 2022; Liu et al., 2022; Ragoza et al., 2022). However, SBDD requires atomically-resolved structures of a protein target, assumes knowledge of binding sites, and often ignores dynamic pocket flexibility, rendering these methods less effective in many CADD settings. Ligand-based drug design (LBDD) does not assume knowledge of protein structure. Instead, molecules are compared against previously identified "actives" on the basis of 3D pharmacophore or 3D shape similarity, under the principle that molecules with similar structures should share similar activity (Vázquez et al., 2020; Cleves & Jain, 2020). In particular, ROCS (Rapid Overlay of Chemical Structures) is commonly used as a shape-based virtual screening tool to identify molecules with similar shapes to a reference inhibitor and has shown promising results for scaffold-hopping tasks (Rush et al., 2005; Hawkins et al., 2007; Nicholls et al., 2010). However, virtual screening relies on enumeration of chemical libraries, fundamentally restricting its ability to probe new chemical space.
Here, we consider the novel task of generating chemically diverse 3D molecular structures conditioned on a molecular shape, thereby facilitating the shape-conditioned exploration of chemical space without the limitations of virtual screening (Fig. 1). Importantly, shape-conditioned 3D molecular generation presents unique challenges not encountered in typical 2D generative models:

Challenge 1. 3D shape-based LBDD involves pairwise comparisons between two arbitrary conformations of arbitrary molecules. Whereas traditional property-conditioned generative models or MO algorithms shift learned data distributions to optimize a single scalar property, a shape-conditioned generative model must generate molecules adopting any reasonable shape encoded by the model.

Challenge 2. Shape similarity metrics that compute volume overlaps between two molecules (e.g., ROCS) require the molecules to be aligned in 3D space. Unlike 2D similarity, the computed shape similarity between two molecules will change if one of the structures is rotated. This subtly impacts the learning problem: if the model encodes the target 3D shape into an SE(3)-invariant representation, the model must learn how the generated molecule would fit the target shape under the implicit action of an SE(3)-alignment. Alternatively, if the model can natively generate an aligned structure, then the model can more easily learn to construct molecules that fit the target shape.

Challenge 3. A molecule's 2D graph topology and 3D shape are highly dependent; small changes in the graph can strikingly alter the shapes accessible to a molecule. It is thus unlikely that a generative model will reliably generate chemically diverse molecules with similar shapes to an encoded target without 1) simultaneous graph and coordinate generation; and 2) explicit shape-conditioning.

Challenge 4. The distribution of shapes a drug-like molecule can adopt is chiefly influenced by rotatable bonds, the foremost source of molecular flexibility. However, existing 3D generative models are mainly developed using tiny molecules (e.g., fewer than 10 heavy atoms), and cannot generate flexible drug-like molecules while maintaining chemical validity (satisfying valencies), geometric validity (non-distorted bond distances and angles; no steric clashes), and chemical diversity.

To surmount these challenges, we design a new generative model, SQUID (Shape-Conditioned Equivariant Generator for Drug-Like Molecules), to enable the shape-conditioned generation of chemically diverse molecules in 3D. Our contributions are as follows:
• Given a 3D molecule with a target shape, we use equivariant point cloud networks to encode the shape into (rotationally) equivariant features. We then use graph neural networks (GNNs) to variationally encode chemical identity into invariant features. By mixing chemical features with equivariant shape features, we can generate diverse molecules in aligned poses that fit the shape.
• We develop a sequential fragment-based 3D generation procedure that fixes local bond lengths and angles to prioritize the scoring of rotatable bonds. By massively simplifying 3D coordinate generation, we generate drug-like molecules while maintaining chemical and geometric validity.
• We design a rotatable bond scoring network that learns how local bond rotations affect global shape, enabling our decoder to generate 3D conformations that best fit the target shape.
We evaluate the utility of SQUID over virtual screening in shape-conditioned 3D molecular design tasks that mimic ligand-based drug design objectives, including shape-conditioned generation of diverse 3D structures and shape-constrained molecular optimization. To inspire further research, we note that our tasks could also be approached with a hypothetical 3D generative model that disentangles latent variables controlling 2D chemical identity and 3D shape, thus enabling zero-shot generation of topologically distinct molecules with similar shapes to any encoded target.

2 RELATED WORK

Fragment-based molecular generation. Seminal works in autoregressive molecular generation applied language models to generate 1D SMILES strings character-by-character (Gómez-Bombarelli et al., 2018; Segler et al., 2018), or GNNs to generate 2D molecular graphs atom-by-atom (Liu et al., 2018; Simonovsky & Komodakis, 2018; Li et al., 2018). Recent works construct molecules fragment-by-fragment to improve the chemical validity of intermediate graphs and to scale generation to larger molecules (Podda et al., 2020; Jin et al., 2019; 2020). Our fragment-based decoder is related to MoLeR (Maziarz et al., 2022), which iteratively generates molecules by selecting a new fragment (or atom) to add to the partial graph, choosing attachment sites on the new fragment, and predicting new bonds to the partial graph. Yet, MoLeR only generates 2D graphs; we generate 3D molecular structures. Beyond 2D generation, Flam-Shepherd et al. (2022) use an RL agent to generate 3D molecules by sampling and connecting molecular fragments. However, they sample from a small multiset of fragments, restricting the accessible chemical space. Powers et al. (2022) use fragments to generate 3D molecules inside a protein pocket, but only consider 7 distinct rings.

Generation of drug-like molecules in 3D. In this work, we generate novel drug-like 3D molecular structures in free space, e.g., not conformers given a known molecular graph (Ganea et al., 2021; Jing et al., 2022). Myriad models have been proposed to generate small 3D molecules, such as E(3)-equivariant normalizing flows and diffusion models (Satorras et al., 2022a; Hoogeboom et al., 2022), RL agents with an SE(3)-covariant action space (Simm et al., 2020b), and autoregressive generators that build molecules atom-by-atom with SE(3)-invariant internal coordinates (Luo & Ji, 2022; Gebauer et al., 2022). However, fewer 3D generative models can generate larger drug-like molecules for realistic chemical design tasks. Of these, Hoogeboom et al. (2022) and Arcidiacono & Koes (2021) fail to generate chemically valid molecules, while Ragoza et al. (2020) rely on post-processing and geometry relaxation to extract stable molecules from their generated atom density grids. Only Roney et al. (2021) and Li et al. (2021), who develop autoregressive generators that simultaneously predict graph structure and internal coordinates, have been shown to reliably generate valid drug-like molecules. We also couple graph generation with 3D coordinate prediction; however, we employ fragment-based generation with fixed local geometries to ensure local chemical and geometric validity. Further, we focus on shape-conditioned molecular design; none of these works can natively address the aforementioned challenges posed by shape-conditioned molecular generation.

Shape-conditioned molecular generation.
Other works partially address shape-conditioned 3D molecular generation. Skalic et al. (2019) and Imrie et al. (2021) train networks to generate 1D SMILES strings or 2D molecular graphs conditioned on CNN encodings of 3D pharmacophores. However, they do not generate 3D structures, and the CNNs do not respect Euclidean symmetries. Zheng et al. (2021) use supervised molecule-to-molecule translation on SMILES strings for scaffold-hopping tasks, but do not generate 3D structures. Papadopoulos et al. (2021) use REINVENT (Olivecrona et al., 2017) on SMILES strings to propose molecules whose conformers are shape-similar to a target, but they must re-optimize the agent for each target shape. Roney et al. (2021) fine-tune a 3D generative model on the hits of a ROCS virtual screen of $>10^{10}$ drug-like molecules to shift the learned distribution towards a target shape. Yet, this expensive screening approach must be repeated for each new target. Instead, we seek to achieve zero-shot generation of 3D molecules with similar shapes to any encoded shape, without requiring fine-tuning or post facto optimization.

Equivariant geometric deep learning on point clouds. Various equivariant networks have been designed to encode point clouds for updating coordinates in $\mathbb{R}^3$ (Satorras et al., 2022b), predicting tensorial properties (Thomas et al., 2018), or modeling 3D structures natively in Cartesian space (Fuchs et al., 2020). Especially noteworthy are architectures which lift scalar neuron features to vector features in $\mathbb{R}^3$ and employ simple operations to mix invariant and equivariant features without relying on expensive higher-order tensor products or Clebsch-Gordan coefficients (Deng et al., 2021; Jing et al., 2021). In this work, we employ Deng et al. (2021)'s Vector Neurons (VN)-based equivariant point cloud encoder, VN-DGCNN, to encode molecules into equivariant latent representations in order to generate molecules which are natively aligned to the target shape. Two recent works also employ VN operations for structure-based drug design and linker design (Peng et al., 2022; Huang et al., 2022). Huang et al. (2022) also build molecules in free space; however, they generate just a few atoms to connect existing fragments and do not condition on molecular shape.

3 METHODOLOGY

Problem definition. We model a conditional distribution $P(M|S)$ over 3D molecules $M = (G, \mathcal{G})$ with graph $G$ and atomic coordinates $\mathcal{G} = \{\mathbf{r}_a \in \mathbb{R}^3\}$ given a 3D molecular shape $S$. Specifically, we aim to sample molecules $M' \sim P(M|S)$ with high shape similarity ($\text{sim}_S(M', M_S) \approx 1$) and low graph (chemical) similarity ($\text{sim}_G(M', M_S) < 1$) to a target molecule $M_S$ with shape $S$. This scheme differs from 1) typical 3D generative models that learn $P(M)$ without modeling $P(M|S)$, and from 2) shape-conditioned 1D/2D generators that attempt to model $P(G|S)$, the distribution of molecular graphs that could adopt shape $S$, but do not actually generate specific 3D conformations. We define graph (chemical) similarity $\text{sim}_G \in [0, 1]$ between two molecules as the Tanimoto similarity computed by RDKit with default settings (2048-bit fingerprints). We define shape similarity $\text{sim}^*_S \in [0, 1]$ using Gaussian descriptions of molecular shape, modeling atoms $a \in M_A$ and $b \in M_B$ from molecules $M_A$ and $M_B$ as isotropic Gaussians in $\mathbb{R}^3$ (Grant & Pickup, 1995; Grant et al., 1996). We compute $\text{sim}^*_S$ using (2-body) volume overlaps between atom-centered Gaussians:

$$\text{sim}^*_S(\mathcal{G}_A, \mathcal{G}_B) = \frac{V_{AB}}{V_{AA} + V_{BB} - V_{AB}}; \quad V_{AB} = \sum_{a \in A,\, b \in B} V_{ab}; \quad V_{ab} \propto \exp\left(-\frac{\alpha}{2}\,\|\mathbf{r}_a - \mathbf{r}_b\|^2\right), \qquad (1)$$

where $\alpha$ controls the Gaussian width.
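As a concrete reading of Eq. 1, the following is a minimal NumPy sketch; the function names are ours, and the proportionality constant in $V_{ab}$ is set to 1, which is harmless here because a uniform prefactor cancels in the similarity ratio.

```python
import numpy as np

def pairwise_gaussian_overlap(coords_a, coords_b, alpha=0.81):
    """Sum of pairwise Gaussian volume overlaps between two atom coordinate
    arrays of shape (n_a, 3) and (n_b, 3). A uniform proportionality
    constant is assumed, so it cancels in shape_similarity below."""
    diff = coords_a[:, None, :] - coords_b[None, :, :]   # (n_a, n_b, 3)
    sq_dists = np.sum(diff ** 2, axis=-1)
    return np.exp(-0.5 * alpha * sq_dists).sum()

def shape_similarity(coords_a, coords_b, alpha=0.81):
    """Non-aligned shape similarity sim*_S as defined in Eq. 1."""
    v_ab = pairwise_gaussian_overlap(coords_a, coords_b, alpha)
    v_aa = pairwise_gaussian_overlap(coords_a, coords_a, alpha)
    v_bb = pairwise_gaussian_overlap(coords_b, coords_b, alpha)
    return v_ab / (v_aa + v_bb - v_ab)
```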
Setting $\alpha = 0.81$ approximates the shape similarity function used by the ROCS program (App. A.6). $\text{sim}^*_S$ is sensitive to SE(3) transformations of molecule $M_A$ with respect to molecule $M_B$. Thus, we define

$$\text{sim}_S(M_A, M_B) = \max_{R,\,t}\ \text{sim}^*_S(\mathcal{G}_A R + t,\ \mathcal{G}_B)$$

as the shape similarity when $M_A$ is optimally aligned to $M_B$. We perform such alignments with ROCS.

Approach. At a high level, we model $P(M|S)$ with an encoder-decoder architecture. Given a molecule $M_S = (G_S, \mathcal{G}_S)$ with shape $S$, we encode $S$ (a point cloud) into equivariant features. We then variationally encode $G_S$ into atomic features, conditioned on the shape features. We then mix these shape and atom features to pass global SE(3) {in,equi}variant latent codes to the decoder, which samples new molecules from $P(M|S)$. We autoregressively generate molecules by factoring $P(M|S) = P(M_0|S)\, P(M_1|M_0, S) \cdots P(M|M_{n-1}, S)$, where each $M_l = (G_l, \mathcal{G}_l)$ is a partial molecule defined by a BFS traversal of a tree-representation of the molecular graph (Fig. 2). Tree-nodes denote either non-ring atoms or rigid (ring-containing) fragments, and tree-links denote acyclic (rotatable, double, or triple) bonds. We generate $M_{l+1}$ by growing the graph $G_{l+1}$ around a focus atom/fragment, and then predict $\mathcal{G}_{l+1}$ by scoring a query rotatable bond to best fit shape $S$.

Simplifying assumptions. (1) We ignore hydrogens and only consider heavy atoms, as is common in molecular generation. (2) We only consider molecules with fragments present in our fragment library to ensure that graph generation can be expressed as tree generation. (3) Rather than generating all coordinates, we use rigid fragments, fix bond distances, and set bond angles according to hybridization heuristics (App. A.8); this lets the model focus on scoring rotatable bonds to best fit the growing conformer to the encoded shape. (4) We seed generation with $M_0$ (the root tree-node), restricted to be a small (3-6 atom) substructure from $M_S$; hence, we only model $P(M|S, M_0)$.

3.1 ENCODER

Featurization. We construct a molecular graph $G$ using atoms as nodes and bonds as edges. We featurize each node with the atomic mass; one-hot codes of atomic number, charge, and aromaticity; and one-hot codes of the number of single, double, aromatic, and triple bonds the atom forms (including bonds to implicit hydrogens). This helps us fix bond angles during generation (App. A.8). We featurize each edge with one-hot codes of bond order. We represent a shape $S$ as a point cloud built by sampling $n_p$ points from each of $n_h$ atom-centered Gaussians with (adjustable) variance $\sigma_p^2$.

Fragment encoder. We also featurize each node with a learned embedding $f_i \in \mathbb{R}^{d_f}$ of the atom/fragment type to which that atom belongs, making each node "fragment-aware" (similar to MoLeR). In principle, fragments could be any rigid substructure with $\geq 2$ atoms. Here, we specify fragments as ring-containing substructures without acyclic single bonds (Fig. 14). We construct a library $L_f$ of atom/fragment types by extracting the top-$k$ ($k = 100$) most frequent fragments from the dataset and adding these, along with each distinct atom type, to $L_f$ (App. A.13). We then encode each atom/fragment in $L_f$ with a simple GNN (App. A.12) to yield the global atom/fragment embeddings: $\{f_i = \sum_a h^{(a)}_{f_i},\ \{h^{(a)}_{f_i}\} = \text{GNN}_{L_f}(G_{f_i})\ \forall\, f_i \in L_f\}$, where $h^{(a)}_{f_i}$ are per-atom features.

Shape encoder. Given $M_S$ with $n_h$ heavy atoms, we use VN-DGCNN (App. A.11) to encode the molecular point cloud $P_S \in \mathbb{R}^{(n_h n_p) \times 3}$ into a set of equivariant per-point vector features $\tilde{X}_p \in \mathbb{R}^{(n_h n_p) \times q \times 3}$.
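Before describing the pooling step, here is a minimal NumPy sketch of how the input point cloud $P_S$ might be constructed per the featurization above; the function name and the default values of $n_p$ and $\sigma_p$ are illustrative placeholders, not values prescribed by the paper.

```python
import numpy as np

def build_point_cloud(atom_coords, n_points=5, sigma_p=0.5, rng=None):
    """Build the molecular point cloud P_S: draw n_points from an
    isotropic Gaussian (std sigma_p) centered on each heavy atom, after
    subtracting the molecular centroid (as done prior to encoding).
    atom_coords: (n_h, 3) array; returns an (n_h * n_points, 3) array."""
    rng = np.random.default_rng() if rng is None else rng
    centered = atom_coords - atom_coords.mean(axis=0)
    noise = rng.normal(scale=sigma_p, size=(len(centered), n_points, 3))
    return (centered[:, None, :] + noise).reshape(-1, 3)
```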
We then locally mean-pool the $n_p$ equivariant features per atom:

$$\tilde{X}_p = \text{VN-DGCNN}(P_S); \quad \tilde{X} = \text{LocalPool}(\tilde{X}_p), \qquad (2)$$

where $\tilde{X} \in \mathbb{R}^{n_h \times q \times 3}$ are per-atom equivariant representations of the molecular shape. Because VN operations are SO(3)-equivariant, rotating the point cloud will rotate $\tilde{X}$: $\tilde{X}R = \text{LocalPool}(\text{VN-DGCNN}(P_S R))$. Although VN operations are strictly SO(3)-equivariant, we subtract the molecule's centroid from the atomic coordinates prior to encoding, making $\tilde{X}$ effectively SE(3)-equivariant. Throughout this work, we denote SO(3)-equivariant vector features with tildes.

Variational graph encoder. To model $P(M|S)$, we first use a GNN (App. A.12) to encode $G_S$ into learned atom embeddings $H = \{h^{(a)}\ \forall\, a \in G_S\}$. We condition the GNN on per-atom invariant shape features $X = \{x^{(a)}\} \in \mathbb{R}^{n_h \times 6q}$, which we form by passing $\tilde{X}$ through a VN-Inv (App. A.11):

$$H = \text{GNN}((H_0, X);\, G_S); \quad X = \text{VN-Inv}(\tilde{X}), \qquad (3)$$

where $H_0 \in \mathbb{R}^{n_h \times (d_a + d_f)}$ are the initial atom features concatenated with the learned fragment embeddings, $H \in \mathbb{R}^{n_h \times d_h}$, and $(\cdot, \cdot)$ denotes concatenation in the feature dimension. For each atom in $M_S$, we then encode $h^{(a)}_\mu, h^{(a)}_{\log \sigma^2} = \text{MLP}(h^{(a)})$ and sample $h^{(a)}_\text{var} \sim N(h^{(a)}_\mu, h^{(a)}_\sigma)$:

$$H_\text{var} = \left\{ h^{(a)}_\text{var} = h^{(a)}_\mu + \epsilon^{(a)} \odot h^{(a)}_\sigma;\ \ h^{(a)}_\sigma = \exp\left(\tfrac{1}{2}\, h^{(a)}_{\log \sigma^2}\right)\ \forall\, a \in G_S \right\}, \qquad (4)$$

where $\epsilon^{(a)} \sim N(\mathbf{0}, \mathbf{1}) \in \mathbb{R}^{d_h}$, $H_\text{var} \in \mathbb{R}^{n_h \times d_h}$, and $\odot$ denotes elementwise multiplication. Here, the second argument of $N(\cdot, \cdot)$ is the standard deviation vector of the diagonal covariance matrix.

Mixing shape and variational features. The variational atom features $H_\text{var}$ are insensitive to rotations of $S$. However, we desire the decoder to construct molecules in poses that are natively aligned to $S$ (Challenge 2). We achieve this by conditioning the decoder on an equivariant latent representation of $P(M|S)$ that mixes both shape and chemical information. Specifically, we mix $H_\text{var}$ with $\tilde{X}$ by encoding each $h^{(a)}_\text{var} \in H_\text{var}$ into linear transformations, which are applied atom-wise to $\tilde{X}$. We then pass the mixed equivariant features through a separate VN-MLP (App. A.11):

$$\tilde{X}_{H_\text{var}} = \left\{ \text{VN-MLP}(W^{(a)}_H \tilde{X}^{(a)},\, \tilde{X}^{(a)});\ \ W^{(a)}_H = \text{Reshape}(\text{MLP}(h^{(a)}_\text{var}))\ \forall\, a \in G_S \right\}, \qquad (5)$$

where $W^{(a)}_H \in \mathbb{R}^{q' \times q}$, $\tilde{X}^{(a)} \in \mathbb{R}^{q \times 3}$, and $\tilde{X}_{H_\text{var}} \in \mathbb{R}^{n_h \times d_z \times 3}$. This maintains equivariance since the $W^{(a)}_H$ are rotationally invariant and $W^{(a)}_H (\tilde{X}^{(a)} R) = (W^{(a)}_H \tilde{X}^{(a)}) R$ for a rotation $R$. Finally, we sum-pool the per-atom features in $\tilde{X}_{H_\text{var}}$ into a global equivariant representation $\tilde{Z} \in \mathbb{R}^{d_z \times 3}$. We also embed a global invariant representation $z \in \mathbb{R}^{d_z}$ by applying a VN-Inv to $\tilde{X}_{H_\text{var}}$, concatenating the output with $H_\text{var}$, passing through an MLP, and sum-pooling the resultant per-atom features:

$$\tilde{Z} = \sum_a \tilde{X}^{(a)}_{H_\text{var}}; \quad z = \sum_a \text{MLP}(x^{(a)}_{H_\text{var}},\, h^{(a)}_\text{var}); \quad x^{(a)}_{H_\text{var}} = \text{VN-Inv}(\tilde{X}^{(a)}_{H_\text{var}}). \qquad (6)$$
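To make the reparameterization (Eq. 4) and the feature-mixing step (Eq. 5) concrete, here is a minimal PyTorch sketch under illustrative dimensions of our own choosing; it omits the trailing VN-MLP and the concatenation with $\tilde{X}^{(a)}$, so it is a simplification of Eq. 5 rather than the exact module.

```python
import torch
import torch.nn as nn

class ShapeChemMixer(nn.Module):
    """Sketch of Eqs. 4-5: reparameterized atom embeddings generate
    per-atom invariant linear maps that modulate the equivariant shape
    features. Dimensions (d_h, q, q_prime) are illustrative."""
    def __init__(self, d_h=64, q=32, q_prime=32):
        super().__init__()
        self.q, self.q_prime = q, q_prime
        self.to_weights = nn.Sequential(
            nn.Linear(d_h, d_h), nn.SiLU(), nn.Linear(d_h, q_prime * q))

    def forward(self, h_mu, h_logvar, x_tilde):
        # h_mu, h_logvar: (n_atoms, d_h); x_tilde: (n_atoms, q, 3).
        h_sigma = torch.exp(0.5 * h_logvar)
        h_var = h_mu + torch.randn_like(h_mu) * h_sigma        # Eq. 4
        w = self.to_weights(h_var).view(-1, self.q_prime, self.q)
        mixed = torch.bmm(w, x_tilde)                          # (n, q', 3)
        # Equivariance: w is built from invariant features, and
        # w @ (x R) == (w @ x) R for any rotation R in R^{3x3}.
        return h_var, mixed
```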
3.2 DECODER

Given $M_S$, we sample new molecules $M' \sim P(M|S, M_0)$ by encoding $P_S$ into equivariant shape features $\tilde{X}$, variationally sampling $h^{(a)}_\text{var}$ for each atom in $M_S$, mixing $H_\text{var}$ with $\tilde{X}$, and passing the resultant $(\tilde{Z}, z)$ to the decoder. We seed generation with a small structure $M_0$ (extracted from $M_S$), and build $M'$ by sequentially generating larger structures $M'_{l+1}$ in a tree-like manner (Fig. 2). Specifically, we grow new atoms/fragments around a "focus" atom/fragment in $M'_l$, which is popped from a BFS queue. To generate $M'_{l+1}$ from $M'_l$ (i.e., grow the tree from the focus), we factor $P(M_{l+1}|M_l, S) = P(G_{l+1}|M_l, S)\, P(\mathcal{G}_{l+1}|G_{l+1}, M_l, S)$.

Given $(\tilde{Z}, z)$, we sample the new graph $G'_{l+1}$ by iteratively attaching (a variable) $C$ new atoms/fragments (children tree-nodes) around the focus, yielding $G'^{(c)}_l$ for $c = 1, \ldots, C$, where $G'^{(C)}_l = G'_{l+1}$ and $G'^{(0)}_l = G'_l$. We then generate coordinates $\mathcal{G}'_{l+1}$ by scoring the (rotatable) bond between the focus and its parent tree-node. New bonds from the focus to its children are left unscored in $M'_{l+1}$ until the children become "in focus".

Partial molecule encoder. Before bonding each new atom/fragment to the focus (or scoring bonds), we encode the partial molecule $M'^{(c-1)}_l$ with the same scheme as for $M_S$ (using a parallel encoder; Fig. 2), except we do not variationally embed $H'$. (We drop the $(c)$ superscript for clarity; each $\tilde{Z}_\text{dec}$ below is specific to a given $(M'^{(c-1)}_l, M_S)$ system.) Instead, we process $H'$ analogously to $H_\text{var}$. Further, in addition to globally pooling the per-atom embeddings to obtain $\tilde{Z}' = \sum_a \tilde{X}'^{(a)}_H$ and $z' = \sum_a x'^{(a)}_H$, we also selectively sum-pool the embeddings of the atom(s) in focus, yielding $\tilde{Z}'_\text{foc} = \sum_{a \in \text{focus}} \tilde{X}'^{(a)}_H$ and $z'_\text{foc} = \sum_{a \in \text{focus}} x'^{(a)}_H$. We then align the equivariant representations of $M'^{(c-1)}_l$ and $M_S$ by concatenating $\tilde{Z}$, $\tilde{Z}'$, $\tilde{Z} - \tilde{Z}'$, and $\tilde{Z}'_\text{foc}$ and passing these through a VN-MLP:

$$\tilde{Z}_\text{dec} = \text{VN-MLP}(\tilde{Z},\, \tilde{Z}',\, \tilde{Z} - \tilde{Z}',\, \tilde{Z}'_\text{foc}). \qquad (7)$$

Note that $\tilde{Z}_\text{dec} \in \mathbb{R}^{q \times 3}$ is equivariant to rotations of the overall system $(M'^{(c-1)}_l, M_S)$. Finally, we form a global invariant feature $z_\text{dec} \in \mathbb{R}^{d_\text{dec}}$ to condition graph (or coordinate) generation:

$$z_\text{dec} = (\text{VN-Inv}(\tilde{Z}_\text{dec}),\, z,\, z',\, z - z',\, z'_\text{foc}). \qquad (8)$$

Graph generation. We factor $P(G_{l+1}|M_l, S)$ into a sequence of generation steps by which we iteratively connect children atoms/fragments to the focus until the network generates a (local) stop token. Fig. 2 sketches a generation sequence by which a new atom/fragment is attached to the focus, yielding $G'^{(c)}_l$ from $G'^{(c-1)}_l$. Given $z_\text{dec}$, the model first predicts whether to stop (local) generation via $p_\varnothing = \text{sigmoid}(\text{MLP}_\varnothing(z_\text{dec})) \in (0, 1)$. If $p_\varnothing \geq \tau_\varnothing$ (a threshold, App. A.16), we stop and proceed to bond scoring. Otherwise, we select which atom $a_\text{foc}$ on the focus (if multiple) to grow from:

$$p_\text{focus} = \text{softmax}(\{\text{MLP}_\text{focus}(z_\text{dec},\, x'^{(a)}_H)\ \forall\, a \in \text{focus}\}). \qquad (9)$$

The decoder then predicts which atom/fragment $f_\text{next} \in L_f$ to connect to the focus next:

$$p_\text{next} = \text{softmax}(\{\text{MLP}_\text{next}(z_\text{dec},\, x'^{(a_\text{foc})}_H,\, f_{f_i})\ \forall\, f_i \in L_f\}). \qquad (10)$$

If the selected $f_\text{next}$ is a fragment, we predict the attachment site $a_\text{site}$ on the fragment $G_{f_\text{next}}$:

$$p_\text{site} = \text{softmax}(\{\text{MLP}_\text{site}(z_\text{dec},\, x'^{(a_\text{foc})}_H,\, f_\text{next},\, h^{(a)}_{f_\text{next}})\ \forall\, a \in G_{f_\text{next}}\}), \qquad (11)$$

where $h^{(a)}_{f_\text{next}}$ are the encoded atom features for $G_{f_\text{next}}$. Lastly, we predict the bond order (1°, 2°, 3°) via $p_\text{bond} = \text{softmax}(\text{MLP}_\text{bond}(z_\text{dec},\, x'^{(a_\text{foc})}_H,\, f_\text{next},\, h^{(a_\text{site})}_{f_\text{next}}))$. We repeat this sequence of steps until $p_\varnothing \geq \tau_\varnothing$, yielding $G_{l+1}$. At each step, we greedily select the action after masking actions that violate known chemical valence rules. After each sequence, we bond a new atom or fragment to the focus, giving $G'^{(c)}_l$. If an atom, the atom's position relative to the focus is fixed by heuristic bonding geometries (App. A.8). If a fragment, the position of the attachment site is fixed, but the dihedral of the new bond is yet unknown. Thus, in subsequent generation steps we only encode the attachment site and mask the remaining atoms in the new fragment until that fragment is "in focus" (Fig. 2). This means that prior to bond scoring, the rotation angle of the focus is random. To account for this when training (with teacher forcing), we randomize the focal dihedral when encoding each $M'^{(c-1)}_l$.
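A small PyTorch sketch of the stop check and the masked, greedy action selection just described; the module names and the default threshold are placeholders (App. A.16 reports $\tau_\varnothing = 0.01$), and the valence mask is assumed to be computed elsewhere from the partial graph.

```python
import torch

def should_stop(z_dec, stop_mlp, tau_stop=0.01):
    """Local stop check: halt growth around the focus once
    p_stop >= tau_stop (tau_stop plays the role of tau_∅)."""
    return bool(torch.sigmoid(stop_mlp(z_dec)) >= tau_stop)

def select_action(logits, valence_mask):
    """Greedy selection over next-fragment (or bond-order) logits,
    masking chemically invalid actions before the softmax."""
    masked = logits.masked_fill(~valence_mask, float("-inf"))
    return int(torch.argmax(torch.softmax(masked, dim=-1)))
```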
Scoring rotatable bonds. After sampling $G'_{l+1} \sim P(G_{l+1}|M'_l, S)$, we generate $\mathcal{G}'_{l+1}$ by scoring the rotation angle $\psi'_{l+1}$ of the bond connecting the focus to its parent node in the generation tree (Fig. 2). Since we ultimately seek to maximize $\text{sim}_S(M', M_S)$, we exploit the fact that our model generates shape-aligned structures to predict $\max_{\psi'_{l+2}, \psi'_{l+3}, \ldots} \text{sim}^*_S(\mathcal{G}'(\psi_\text{foc}), \mathcal{G}_S)$ for various query dihedrals $\psi'_{l+1} = \psi_\text{foc}$ of the focus rotatable bond in a supervised regression setting. Intuitively, the scorer is trained to predict how the choice of $\psi_\text{foc}$ affects the maximum possible shape similarity of the final molecule $M'$ to the target $M_S$ under an optimal policy. App. A.2 details how regression targets are computed. During generation, we sweep over each query $\psi_\text{foc} \in [-\pi, \pi)$, encode each resultant structure $M'^{(\psi_\text{foc})}_{l+1}$ into $z^{(\psi_\text{foc})}_\text{dec, scorer}$, and select the $\psi_\text{foc}$ that maximizes the predicted score (we train the scorer independently from the graph generator, but with a parallel architecture; hence, $z_\text{dec} \neq z_\text{dec, scorer}$; the main architectural difference between the two models is that we do not variationally encode $H_\text{scorer}$ into $H_\text{var, scorer}$, as we find it does not impact empirical performance):

$$\psi'_{l+1} = \underset{\psi_\text{foc}}{\arg\max}\ \text{sigmoid}(\text{MLP}_\text{scorer}(z^{(\psi_\text{foc})}_\text{dec, scorer})). \qquad (12)$$

At generation time, we also score chirality by enumerating stereoisomers $\mathcal{G}^\chi_\text{foc} \in \mathcal{G}'_\text{foc}$ of the focus and selecting the $(\mathcal{G}^\chi_\text{foc}, \psi_\text{foc})$ pair that maximizes Eq. 12 (App. A.2).

Training. We supervise each step of graph generation with a multi-component loss function:

$$L_\text{graph-gen} = L_\varnothing + L_\text{focus} + L_\text{next} + L_\text{site} + L_\text{bond} + \beta_\text{KL} L_\text{KL} + \beta_\text{next-shape} L_\text{next-shape} + \beta_{\varnothing\text{-shape}} L_{\varnothing\text{-shape}}. \qquad (13)$$

$L_\varnothing$, $L_\text{focus}$, $L_\text{next}$, and $L_\text{bond}$ are standard cross-entropy losses. $L_\text{site} = -\log(\sum_a p^{(a)}_\text{site}\, \mathbb{I}[c_a > 0])$ is a modified cross-entropy loss that accounts for symmetric attachment sites in the fragments $G_{f_i} \in L_f$, where $p^{(a)}_\text{site}$ are the predicted attachment-site probabilities and $c_a$ are multi-hot class probabilities. $L_\text{KL}$ is the KL-divergence between the learned $N(h_\mu, h_\sigma)$ and the prior $N(\mathbf{0}, \mathbf{1})$. We also employ two auxiliary losses, $L_\text{next-shape}$ and $L_{\varnothing\text{-shape}}$, in order to 1) help the generator distinguish between incorrect shape-similar (near-miss) vs. shape-dissimilar fragments, and 2) encourage the generator to generate structures that fill the entire target shape (App. A.10). We train the rotatable bond scorer separately from the generator with an MSE regression loss. See App. A.15 for training details.
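The dihedral sweep behind Eq. 12 amounts to rotating the not-yet-fixed side of the focal bond through a grid of angles and keeping the best-scoring one. A minimal NumPy sketch follows, with `score_fn` standing in for the learned scorer network; all function names are ours, and the sweep is expressed as relative rotations from the current pose.

```python
import numpy as np

def rotate_about_bond(coords, axis_i, axis_j, moving_idx, psi):
    """Rotate the atoms indexed by `moving_idx` by angle psi (radians)
    about the bond axis from atom axis_i to atom axis_j, using
    Rodrigues' rotation formula. coords: (n, 3) array."""
    origin = coords[axis_i]
    k = coords[axis_j] - origin
    k = k / np.linalg.norm(k)
    v = coords[moving_idx] - origin
    v_rot = (v * np.cos(psi)
             + np.cross(k, v) * np.sin(psi)
             + k * (v @ k)[:, None] * (1.0 - np.cos(psi)))
    out = coords.copy()
    out[moving_idx] = v_rot + origin
    return out

def best_dihedral(coords, axis_i, axis_j, moving_idx, score_fn, n_angles=36):
    """Sweep query rotations psi in [-pi, pi) (36 angles, as in App. A.2)
    and return the one maximizing the predicted score, as in Eq. 12."""
    angles = np.linspace(-np.pi, np.pi, n_angles, endpoint=False)
    scored = [(score_fn(rotate_about_bond(coords, axis_i, axis_j,
                                          moving_idx, psi)), psi)
              for psi in angles]
    return max(scored)[1]
```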
4 EXPERIMENTS

Dataset. We train SQUID with drug-like molecules (up to $n_h = 27$) from MOSES (Polykovskiy et al., 2020) using their train/test sets. $L_f$ includes 100 fragments extracted from the dataset and 24 atom types. We remove molecules that contain excluded fragments. For the remaining molecules, we generate a 3D conformer with RDKit, set acyclic bond distances to their empirical means, and fix acyclic bond angles using heuristic rules. While this 3D manipulation neglects distorted bonding geometries in real molecules, the global shapes are marginally impacted, and we may recover refined geometries without seriously altering the shape (App. A.8). The final dataset contains 1.3M 3D molecules, partitioned into 80/20 train/validation splits. The test set contains 147K 3D molecules.

In the following experiments, we only consider molecules $M_S$ for which we can extract a small (3-6 atom) 3D substructure $M_0$ containing a terminal atom, which we use to seed generation. In principle, $M_0$ could include larger structures from $M_S$, e.g., for scaffold-constrained tasks. Here, we use the smallest substructures to ensure that the shape-conditioned generation tasks are not trivial.

Shape-conditioned generation of chemically diverse molecules. "Scaffold-hopping", i.e., designing molecules with high 3D shape similarity but novel 2D graph topology compared to known inhibitors, is pursued in LBDD to develop chemical lead series, optimize drug activity, or evade intellectual property restrictions (Hu et al., 2017). We imitate this task by evaluating SQUID's ability to generate molecules $M'$ with high $\text{sim}_S(M', M_S)$ but low $\text{sim}_G(M', M_S)$. Specifically, for 1000 molecules $M_S$ with target shapes $S$ in the test set, we use SQUID to generate 50 molecules per $M_S$. To generate chemically diverse species, we linearly interpolate between the posterior $N(h_\mu, h_\sigma)$ and the prior $N(\mathbf{0}, \mathbf{1})$, sampling each $h_\text{var} \sim N((1 - \lambda) h_\mu,\ (1 - \lambda) h_\sigma + \lambda \mathbf{1})$ using either $\lambda = 0.3$ or $\lambda = 1.0$ (prior). We then filter the generated molecules to have $\text{sim}_G(M', M_S) < 0.7$, or $< 0.3$ to only evaluate molecules with substantial chemical differences compared to $M_S$. Of the filtered molecules, we randomly choose $N_\text{max}$ samples and select the sample with the highest $\text{sim}_S(M', M_S)$.

Figure 3A plots distributions of $\text{sim}_S(M', M_S)$ between the selected molecules and their respective target shapes, using different sampling ($N_\text{max} = 1, 20$) and filtering ($\text{sim}_G(M', M_S) < 0.7, 0.3$) schemes. We compare against analogously sampling random 3D molecules from the training set. Overall, SQUID generates diverse 3D molecules that are quantitatively enriched in shape similarity compared to molecules sampled from the dataset, particularly for $N_\text{max} = 20$. Qualitatively, the molecules generated by SQUID have significantly more atoms which directly overlap with the atoms of $M_S$, even in cases where the computed shape similarity is comparable between SQUID-generated molecules and molecules sampled from the dataset (Fig. 3C). We quantitatively explore this observation in App. A.7. We also find that using $\lambda = 0.3$ yields greater $\text{sim}_S(M', M_S)$ than $\lambda = 1.0$, in part because using $\lambda = 0.3$ yields less chemically diverse molecules (Fig. 3B; Challenge 3). Even so, sampling $N_\text{max} = 20$ molecules from the prior with $\text{sim}_G(M', M_S) < 0.3$ still yields more shape-similar molecules than sampling $N_\text{max} = 500$ molecules from the dataset. We emphasize that 99% of samples from the prior are novel, 95% are unique, and 100% are chemically valid (App. A.4). Moreover, 87% of generated structures do not have any steric clashes (App. A.4), indicating that SQUID generates realistic 3D geometries of flexible drug-like molecules.
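A sketch of the sampling-and-filtering protocol above; `interpolated_sample` mirrors the $\lambda$-interpolation, and the fingerprint choice (RDKit's default topological fingerprint, 2048 bits by default) is our reading of "default settings" in Section 3, so it should be treated as an assumption.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import DataStructs

def interpolated_sample(h_mu, h_sigma, lam, rng):
    """Sample variational atom embeddings from the interpolation between
    the posterior N(h_mu, h_sigma) and the prior N(0, 1); lam=1.0 gives
    the prior, lam=0.0 the posterior."""
    std = (1.0 - lam) * h_sigma + lam
    return (1.0 - lam) * h_mu + rng.normal(size=h_mu.shape) * std

def sim_g(mol_a, mol_b):
    """2D (chemical) similarity sim_G: Tanimoto over RDKit fingerprints,
    assumed here to be the default 2048-bit topological fingerprint."""
    fp_a, fp_b = Chem.RDKFingerprint(mol_a), Chem.RDKFingerprint(mol_b)
    return DataStructs.TanimotoSimilarity(fp_a, fp_b)

def select_best(candidates, target, shape_sim, sim_g_cut=0.7, n_max=20, rng=None):
    """Filter candidates by graph dissimilarity, subsample N_max of them,
    and keep the most shape-similar; `shape_sim` maps (mol, target) to
    a float and stands in for ROCS-aligned sim_S."""
    rng = np.random.default_rng() if rng is None else rng
    pool = [m for m in candidates if sim_g(m, target) < sim_g_cut]
    if not pool:
        return None
    picks = rng.choice(len(pool), size=min(n_max, len(pool)), replace=False)
    return max((pool[i] for i in picks), key=lambda m: shape_sim(m, target))
```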
Ablating equivariance. SQUID's success in 3D shape-conditioned molecular generation is partly attributable to SQUID aligning the generated structures to the target shape in equivariant feature space (Eq. 7), which enables SQUID to generate 3D structures that fit the target shape without having to implicitly learn how to align two structures in $\mathbb{R}^3$ (Challenge 2). We explicitly validate this design choice by setting $\tilde{Z} = \mathbf{0}$ in Eq. 7, which prevents the decoder from accessing the 3D orientation of $M_S$ during training/generation. As expected, ablating SQUID's equivariance reduces the enrichment in shape similarity (relative to the dataset baseline) by as much as 33% (App. A.9).

Shape-constrained molecular optimization. Scaffold-hopping is often goal-directed, e.g., aiming to reduce toxicity or improve bioactivity of a hit compound without altering its 3D shape. We mimic this shape-constrained MO setting by applying SQUID to optimize objectives from GuacaMol (Brown et al., 2019) while preserving high shape similarity ($\text{sim}_S(M, M_S) \geq 0.85$) to various "hit" 3D molecules $M_S$ from the test set. This task considerably differs from typical MO tasks, which optimize objectives without constraining 3D shape and without generating 3D structures. To adapt SQUID to shape-constrained MO, we implement a genetic algorithm (App. A.5) that iteratively mutates the variational atom embeddings $H_\text{var}$ of encoded seed molecules ("hits") $M_S$ in order to generate 3D molecules $M^*$ with improved objective scores, but which still fit the shape of $M_S$. Table 1 reports the optimized top-1 scores across 6 objectives and 8 seed molecules $M_S$ (per objective, sampled from the test set), constrained such that $\text{sim}_S(M^*, M_S) \geq 0.85$. We compare against the score of $M_S$, as well as the (shape-constrained) top-1 score obtained by virtual screening (VS) of our training dataset (>1M 3D molecules). Of the 8 seeds $M_S$ per objective, 3 were selected from top-scoring molecules to serve as hypothetical "hits", 3 were selected from top-scoring large molecules ($\geq 26$ heavy atoms), and 2 were randomly selected from all large molecules.

In 40/48 tasks, SQUID improves the objective score of the seed $M_S$ while maintaining $\text{sim}_S(M^*, M_S) \geq 0.85$. Qualitatively, SQUID optimizes the objectives through chemical alterations such as adding/deleting individual atoms, switching bonding patterns, or replacing entire substructures, all while generating 3D structures that fit the target shape (App. A.5). In 29/40 of the successful cases, SQUID (limited to 31K samples) surpasses the baseline of virtually screening 1M molecules, demonstrating its ability to efficiently explore new shape-constrained chemical space.

5 CONCLUSION

We designed a novel 3D generative model, SQUID, to enable shape-conditioned exploration of chemically diverse molecular space. SQUID generates realistic 3D geometries of larger molecules that are chemically valid, and uniquely exploits equivariant operations to construct conformations that fit a target 3D shape. We envision that our model, alongside future work, will advance creative shape-based drug design tasks such as 3D scaffold hopping and shape-constrained 3D ligand design.

REPRODUCIBILITY STATEMENT

We have taken care to facilitate the reproducibility of this work by detailing the precise architecture of SQUID throughout the main text; we also provide extensive details on training protocols, model parameters, and further evaluations in the Appendices. Our source code can be found at https://github.com/keiradams/SQUID. Beyond the model implementation, our code includes links to access our datasets, as well as scripts to process the training dataset, train the model, and evaluate our trained models across the shape-conditioned generation and shape-constrained optimization tasks described in this paper.

ETHICS STATEMENT

Advancing the shape-conditioned 3D generative modeling of drug-like molecules has the potential to accelerate pharmaceutical drug design, showing particular promise for drug discovery campaigns involving scaffold hopping, hit expansion, or the discovery of novel ligand analogues. However, such advancements could also be exploited for nefarious pharmaceutical research and harmful biological applications.

ACKNOWLEDGMENTS

This research was supported by the Office of Naval Research under grant number N00014-21-12195.
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 2141064. The authors acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing HPC resources that have contributed to the research results reported within this paper. The authors thank Rocío Mercado, Sam Goldman, Wenhao Gao, and Lagnajit Pattanaik for providing helpful suggestions regarding the content and presentation of this paper.

A APPENDIX

CONTENTS

A.1 Overview of definitions, terms, and notations
A.2 Scoring rotatable bonds and stereochemistry
A.3 Random examples of generated 3D molecules
A.4 Generation statistics
A.5 Shape-constrained molecular optimization
  A.5.1 Genetic algorithm
  A.5.2 Visualization of optimized molecules
A.6 Comparing simS to the ROCS scoring function
A.7 Exploring different values of α in simS
A.8 Heuristic bonding geometries and their impact on global shape
A.9 Ablating equivariance
A.10 Auxiliary training losses
A.11 Overview of Vector Neurons (VN) operations
A.12 Graph neural networks
A.13 Fragment library
A.14 Model parameters
A.15 Additional training details
A.16 Additional generation details
A.17 Relaxation of generated geometries
A.18 Comparison to LigDream (Skalic et al., 2019)

A.1 OVERVIEW OF DEFINITIONS, TERMS, AND NOTATIONS

A.2 SCORING ROTATABLE BONDS AND STEREOCHEMISTRY

Recall that our goal is to train the scorer to predict $\max_{\psi'_{l+2}, \psi'_{l+3}, \ldots} \text{sim}^*_S(\mathcal{G}'(\psi_\text{foc}), \mathcal{G}_S)$ for various query dihedrals $\psi'_{l+1} = \psi_\text{foc}$. That is, we wish to predict the maximum possible shape similarity of the final molecule $M'$ to $M_S$ when fixing $\psi'_{l+1} = \psi_\text{foc}$ and optimally rotating all the yet-to-be-scored (or generated) rotatable bond dihedrals $\psi'_{l+2}, \psi'_{l+3}, \ldots$ so as to maximize $\text{sim}^*_S(\mathcal{G}'(\psi_\text{foc}), \mathcal{G}_S)$.

Training. We train the scorer independently from the graph generator (with a parallel architecture) using a mean squared error loss between the predicted scores $\hat{s}^{(\psi_\text{foc})}_\text{dec, scorer} = \text{sigmoid}(\text{MLP}(z^{(\psi_\text{foc})}_\text{dec, scorer}))$ and the regression targets $s^{(\psi_\text{foc})}$ for $N_s$ different query dihedrals $\psi_\text{foc} \in [-\pi, \pi)$:

$$L_\text{scorer} = \frac{1}{N_s} \sum_{i=1}^{N_s} \left( s^{(\psi^{(i)}_\text{foc})} - \hat{s}^{(\psi^{(i)}_\text{foc})}_\text{dec, scorer} \right)^2. \qquad (14)$$

Computing regression targets. When training with teacher forcing ($M'_l = M_{S_l}$, $\mathcal{G}' = \mathcal{G}_S$), we compute regression targets $s^{\psi_\text{foc}} \approx \max_{\psi_{l+2}, \psi_{l+3}, \ldots} \text{sim}^*_S(\mathcal{G}'(\psi_\text{foc}), \mathcal{G}_S)$
by setting the focal dihedral $\psi_{l+1} = \psi_\text{foc}$, sampling $N_\psi$ conformations of the "future" graph $G_{T_\text{foc}}$ induced by the subtree $T_\text{foc}$ whose root (sub)tree-node is the focus, and computing

$$s^{\psi_\text{foc}} = \max_{i = 0, \ldots, N_\psi}\ \text{sim}^*_S\left(\mathcal{G}^{(i)}_{T_\text{foc}},\ \mathcal{G}_{S_{T_\text{foc}}};\ \alpha = 2.0\right).$$

Since we fix bonding geometries, we need only sample $N_\psi$ sets of dihedrals of the rotatable bonds in $\mathcal{G}_{S_{T_\text{foc}}}$ to sample $N_\psi$ conformers, making this conformer enumeration very fast. Note that rather than using $\alpha = 0.81$ in these regression targets, we use $\alpha = 2.0$ to make the scorer more sensitive to shape differences (App. A.7). When computing regression targets, we use $N_\psi < 1800$ and select 36 (evenly spaced) $\psi_\text{foc} \in [-\pi, \pi)$ per rotatable bond. Figure 4 visualizes how regression targets are computed. App. A.15 contains further training specifics.

Scoring stereochemistry. At generation time, we also enumerate all possible stereoisomers of the focus (except cis/trans bonds) and score each stereoisomer separately, ultimately selecting the (stereoisomer, $\psi_\text{foc}$) pair that maximizes the predicted score. Figure 5 illustrates how we enumerate stereoisomers. Note that although we use the learned scoring function to score stereoisomerism at generation time, we do not explicitly train the scorer to score different stereoisomers.

Masking severe steric clashes. At generation time, we do not score any query dihedral $\psi_\text{foc}$ that causes a severe steric clash (< 1 Å) with the existing partially generated structure (unless all query dihedrals cause a severe clash).
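For readers unfamiliar with stereoisomer enumeration, a hedged RDKit sketch is given below. The paper enumerates stereocenters of the focus only and excludes cis/trans bonds, whereas RDKit's enumerator covers the whole molecule including double bonds, so this is an approximation of the procedure, not the authors' exact code.

```python
from rdkit import Chem
from rdkit.Chem.EnumerateStereoisomers import (
    EnumerateStereoisomers, StereoEnumerationOptions)

def enumerate_stereoisomers(mol, max_isomers=8):
    """Enumerate stereoisomers of a molecule; each returned Mol could
    then be embedded and scored separately, as in App. A.2."""
    opts = StereoEnumerationOptions(maxIsomers=max_isomers,
                                    onlyUnassigned=False, unique=True)
    return list(EnumerateStereoisomers(mol, options=opts))
```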
A.3 RANDOM EXAMPLES OF GENERATED 3D MOLECULES

Figures 6 and 7 show additional random examples of molecules generated by SQUID when sampling $N_\text{max} = 1, 20$ molecules with $\text{sim}_G(M', M_S) < 0.7$ from the prior ($\lambda = 1.0$) or $\lambda = 0.3$ and selecting the sample with the highest $\text{sim}_S(M', M_S)$. Note that the visualized poses of the generated conformers are those which are directly generated by SQUID; the generated conformers have not been explicitly aligned to $M_S$ (e.g., using ROCS). Even so, the conformers are (for the most part) aligned to $M_S$, since SQUID's equivariance enables the model to generate natively aligned structures.

It is apparent in these examples that using larger $N_\text{max}$ yields molecules with significantly improved shape similarity to $M_S$, both qualitatively and quantitatively. This is in part caused by: 1) stochasticity in the variationally sampled atom embeddings $H_\text{var}$; 2) stochasticity in the input molecular point clouds, which are sampled from atom-centered isotropic Gaussians in $\mathbb{R}^3$; 3) sampling sets of variational atom embeddings that may not be entirely self-consistent (for instance, if we sample only one atom embedding that implicitly encodes a ring structure); and 4) the choice of $\tau_\varnothing$, the threshold for stopping local generation. While a small $\tau_\varnothing$ (we use $\tau_\varnothing = 0.01$) helps prevent the model from adding too many atoms or fragments around a single focus, a small $\tau_\varnothing$ can also lead to early (local) stoppage, yielding molecules that do not completely fill the target shape. By sampling more molecules (using larger $N_\text{max}$), we have more chances to avoid these adverse random effects. Further work will attempt to improve the robustness of the encoding scheme and generation procedure in order to increase SQUID's overall sample efficiency.

A.4 GENERATION STATISTICS

Table 4 reports the percentage of molecules that are chemically valid, novel, and unique when sampling 50 molecules from the prior ($\lambda = 1.0$) for 1000 encoded molecules $M_S$ (i.e., target shapes) from the test set, yielding a total of 50K generated molecules. We define chemical validity as passing RDKit sanitization. Since we directly generate the molecular graph and mask actions which violate chemical valency, 100% of generated molecules are valid. We define novelty as the percentage of generated molecules whose molecular graphs are not present in the training data. We define uniqueness as the percentage of generated molecular graphs (of the 50K total) that are only generated once. For novelty and uniqueness calculations, we consider different stereoisomers to have the same molecular graph. We also report the percentage of generated 3D structures that have an apparent steric clash, defined to be a non-bonded interatomic distance below 2 Å.

When sampling from the prior ($\lambda = 1.0$), the average internal chemical similarity of the generated molecules is 0.26 ± 0.04. When sampling with $\lambda = 0.3$, the average internal chemical similarity is 0.32 ± 0.07. We define internal chemical similarity to be the average pairwise chemical similarity (Tanimoto fingerprint similarity) between molecules that are generated for the same target shape.

Table 5 reports the graph reconstruction accuracy when sampling 3D molecules from the posterior ($\lambda = 0.0$), for 1000 target molecules $M_S$ from the test set. We report the top-$k$ graph reconstruction accuracy (ignoring stereochemical differences) when sampling $k = 1$ molecule per encoded $M_S$, and when sampling $k = 20$ molecules per encoded $M_S$. Since we have intentionally trained SQUID inside a shape-conditioned variational autoencoder framework in order to generate chemically diverse molecules with similar 3D shapes, the significance of graph reconstruction accuracy is debatable in our setting. However, it is worth noting that the top-1 reconstruction accuracy is 16.3%, while the top-20 reconstruction accuracy is much higher (57.2%). This large difference is likely attributable to both stochasticity in the variational atom embeddings and stochasticity in the input 3D point clouds.
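A sketch of how the validity/novelty/uniqueness statistics of App. A.4 might be computed with RDKit; canonical SMILES without stereo flags implements the "different stereoisomers have the same graph" convention, and all function names are our own.

```python
from collections import Counter
from rdkit import Chem

def canonical_no_stereo(smiles):
    """Canonical SMILES with stereochemistry stripped; returns None for
    molecules that fail RDKit sanitization (i.e., invalid molecules)."""
    mol = Chem.MolFromSmiles(smiles)
    return None if mol is None else Chem.MolToSmiles(mol, isomericSmiles=False)

def generation_statistics(generated_smiles, training_smiles):
    """Validity, novelty, and uniqueness as defined in App. A.4."""
    canon = [canonical_no_stereo(s) for s in generated_smiles]
    valid = [c for c in canon if c is not None]
    train = {canonical_no_stereo(s) for s in training_smiles}
    counts = Counter(valid)
    validity = len(valid) / len(generated_smiles)
    novelty = sum(c not in train for c in valid) / len(valid)
    uniqueness = sum(counts[c] == 1 for c in valid) / len(valid)
    return validity, novelty, uniqueness
```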
A.5 SHAPE-CONSTRAINED MOLECULAR OPTIMIZATION

A.5.1 GENETIC ALGORITHM

We adapt SQUID to shape-constrained molecular optimization by implementing a genetic algorithm on the variational atom embeddings $H_\text{var}$. Algorithm 1 details the exact optimization procedure. In summary, given the seed molecule $M_S$ with a target 3D shape and an initial substructure $M_0$ (which is contained by all generated molecules for a given $M_S$), we first generate an initial population of generated molecules $M'$ by repeatedly sampling $H_\text{var}$ for various interpolation factors $\lambda$, mixing these $H_\text{var}$ with the encoded shape features of $M_S$, and decoding new 3D molecules. We only add a generated molecule to the population if $\text{sim}_S(M', M_S) \geq \tau_S$ (we use $\tau_S = 0.75$), so that the GA does not overly explore regions of chemical space that have no chance of satisfying the ultimate constraint $\text{sim}_S(M', M_S) \geq 0.85$. After generating the initial population, we iteratively 1) select the top-scoring samples in the population, 2) cross the top-scoring $H_\text{var}$ in crossover events, 3) mutate the top and crossed $H_\text{var}$ by adding random noise, and 4) generate new molecules $M'$ for each mutated $H_\text{var}$. The final optimized molecule $M^*$ is the top-scoring generated molecule that satisfies the shape-similarity constraint $\text{sim}_S(M', M_S) \geq 0.85$.

A.5.2 VISUALIZATION OF OPTIMIZED MOLECULES

Figure 8 visualizes the structures of the SQUID-optimized molecules $M^*$ and their respective seed molecules $M_S$ (i.e., the starting "hit" molecules with target shapes) for each of the optimization tasks which led to an improvement in the objective score. We also overlay the generated 3D conformations of $M^*$ on those of $M_S$, and report the objective scores for each $M^*$ and $M_S$.

Algorithm 1: Genetic algorithm for shape-constrained optimization with SQUID

Given: $M_S$ with $n_H$ heavy atoms, $M_0$, objective oracle $O$
Params: $\tau_S$, $\tau_G$, $N_e$, $N_T$, $N_c$ (defaults: $\tau_S = 0.75$, $\tau_G = 0.95$, $N_e = 20$, $N_T = 20$, $N_c = 10$)

  $H_\mu, H_\sigma$ = Encode($M_S$)  // encode target molecule
  Initialize population $P = \{(M_S, H_\mu)\}$
  for $\lambda \in [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]$ do  // create initial population of $(M', H_\text{var})$
    for $i = 1, \ldots, 100$ do
      Sample noise $\epsilon \in \mathbb{R}^{n_H \times d_h} \sim N(\mathbf{0}, \mathbf{1})$
      $H_\text{var} = (1 - \lambda) H_\mu + \epsilon \odot ((1 - \lambda) H_\sigma + \lambda \mathbf{1})$  // mutate variational atom embeddings
      $z, \tilde{Z}$ = Encode($M_S$; $H_\text{var}$)  // mix mutated chemical and shape information
      $M'$ = Decode($M_0$, $z$, $\tilde{Z}$)  // generate mutated molecule $M'$
      Compute $\text{sim}_S(M', M_S)$
      if $\text{sim}_S(M', M_S) \geq \tau_S$ then  // add $M'$ to population only if $\text{sim}_S(M', M_S)$ is high
        Add $(M', H_\text{var})$ to $P$
      end if
    end for
  end for
  for $e = 1, \ldots, N_e$ do  // for each evolution
    Construct $P_\text{sorted}$ by sorting $P$ by $O(M)$  // sort population by objective score, high to low
    Initialize $T_M = \{\}$, $T_{H_\text{var}} = \{\}$
    for $(M, H_\text{var}) \in P_\text{sorted}$ do  // collect the top-$N_T$ scoring $(M', H_\text{var})$
      if ($\text{sim}_G(M, M_T) < \tau_G\ \forall\, M_T \in T_M$) and ($|T_M| < N_T$) then
        Add $M$ to $T_M$; add $H_\text{var}$ to $T_{H_\text{var}}$
      end if
    end for
    Initialize $T_C = \{\}$
    for $c = 1, \ldots, N_c$ do  // add crossovers to the set of top-scoring $H_\text{var}$
      Sample $H_i \in T_{H_\text{var}}$, $H_{j \neq i} \in T_{H_\text{var}}$
      $H_c$ = CROSS($H_i$, $H_j$)  // cross by randomly swapping half of the atom embeddings
      Add $H_c$ to $T_C$
    end for
    $T_{H_\text{var}} = T_{H_\text{var}} \cup T_C$
    for $H_\text{var} \in T_{H_\text{var}}$ do  // add mutated offspring to the population
      for $\lambda \in [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]$ do
        for $i = 1, \ldots, 10$ do
          Sample noise $\epsilon \in \mathbb{R}^{n_H \times d_h} \sim N(\mathbf{0}, \mathbf{1})$
          $H_\text{var} = (1 - \lambda) H_\text{var} + \epsilon$  // mutate variational atom embeddings
          $z, \tilde{Z}$ = Encode($M_S$; $H_\text{var}$)  // mix mutated chemical and shape information
          $M'$ = Decode($M_0$, $z$, $\tilde{Z}$)  // generate mutated molecule $M'$
          Compute $\text{sim}_S(M', M_S)$
          if $\text{sim}_S(M', M_S) \geq \tau_S$ then  // add $M'$ to $P$ only if $\text{sim}_S(M', M_S)$ is high
            Add $(M', H_\text{var})$ to $P$
          end if
        end for
      end for
    end for
  end for
  return $M^* = \arg\max_{M' \in P} O(M')$ subject to $\text{sim}_S(M', M_S) \geq 0.85$
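A compact NumPy sketch of the CROSS and mutation steps of Algorithm 1, operating on per-atom embedding arrays of shape (n_atoms, d_h); the interpolation-style mutation mirrors the population-initialization loop, while the in-loop mutation of Algorithm 1 simply adds unscaled noise.

```python
import numpy as np

def cross(h_i, h_j, rng):
    """CROSS from Algorithm 1: randomly swap half of the per-atom
    variational embeddings between two parents (both derive from the
    same M_S, so both have shape (n_atoms, d_h))."""
    mask = rng.random(h_i.shape[0]) < 0.5
    child = h_i.copy()
    child[mask] = h_j[mask]
    return child

def mutate(h_var, h_sigma, lam, rng):
    """Interpolation-style mutation used to build the initial population:
    shrink toward the prior and add Gaussian noise."""
    std = (1.0 - lam) * h_sigma + lam
    return (1.0 - lam) * h_var + rng.normal(size=h_var.shape) * std
```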
A.6 COMPARING simS TO THE ROCS SCORING FUNCTION

Our shape similarity function described in Equation 1 closely approximates the shape-only scoring function employed by ROCS when $\alpha = 0.81$. Figure 9 demonstrates the near-perfect correlation between our computed shape scores and those computed by ROCS for 50,000 shape comparisons, with a mean absolute error of 0.0016. Note that Equation 1 computes non-aligned shape similarity. We still employ ROCS to align the generated molecules $M'$ to the target molecule $M_S$ before computing their (aligned) shape similarity in our experiments. However, we do not require explicit alignment when training SQUID; we do not use the commercial ROCS program during training.

A.7 EXPLORING DIFFERENT VALUES OF α IN simS

Our analysis of shape similarity thus far has used Equation 1 with $\alpha = 0.81$ in order to recapitulate the shape similarity function used by ROCS, which is widely used in drug discovery. However, compared to randomly sampled molecules in the dataset, the molecules generated by SQUID qualitatively appear to do a significantly better job of fitting the target shape $S$ on an atom-by-atom basis, even if the computed shape similarities (with $\alpha = 0.81$) are comparable (see examples in Figure 3). We quantify this observation by increasing the value of $\alpha$ when computing $\text{sim}_S(M', M_S; \alpha)$ for generated molecules $M'$, as $\alpha$ is inversely related to the width of the isotropic 3D Gaussians used in the volume overlap calculations in Equation 1. Intuitively, increasing $\alpha$ will more strongly penalize $\text{sim}_S$ if the atoms of $M'$ and $M_S$ do not perfectly align.

Figure 10 plots the mean $\text{sim}_S(M, M_S; \alpha)$ for the most shape-similar molecule $M$ of $N_\text{max}$ sampled molecules $M'$ for increasing values of $\alpha$. Averages are calculated over 1000 target molecules $M_S$ from the test set, and we only consider generated molecules for which $\text{sim}_G(M', M_S) < 0.7$. Crucially, the gap between the mean $\text{sim}_S(M, M_S; \alpha)$ obtained by generating molecules with SQUID vs. randomly sampling molecules from the dataset significantly widens with increasing $\alpha$. This effect is especially apparent when using SQUID with $\lambda = 0.3$ and $N_\text{max} = 20$, although it can be observed with other generation strategies as well. Hence, SQUID does a much better job of generating (still chemically diverse) molecules that have significant atom-to-atom overlap with $M_S$.

A.8 HEURISTIC BONDING GEOMETRIES AND THEIR IMPACT ON GLOBAL SHAPE

In all molecules (dataset and generated) considered in this work, we fix acyclic bond distances to their empirical averages and set acyclic bond angles to heuristic values based on hybridization rules in order to reduce the degrees of freedom in 3D coordinate generation. Here, we describe how we fix these bonding geometries and explore whether this local 3D structure manipulation significantly alters the global molecular shape.

Fixing bonding geometries. We fix acyclic bond distances by computing the mean bond distance between pairs of atom types across all the RDKit-generated conformers in our training set. After collecting these empirical mean values, we manually set each acyclic bond distance to its respective mean value for each conformer in our datasets. We set acyclic bond angles using simple hybridization rules. Specifically, sp³-hybridized atoms will have bond angles of 109.5°, sp²-hybridized atoms will have bond angles of 120°, and sp-hybridized atoms will have bond angles of 180°. We manually fix the acyclic bond angles to these heuristic values for all conformers in our datasets. We use RDKit to determine the hybridization states of each atom. During generation, occasionally the hybridization of certain atoms (N, O) may change once they are bonded to new neighbors. For instance, an sp³ nitrogen can become sp² once bonded to an aromatic ring. We adjust bond angles on-the-fly in these edge cases.

Impact on global shape. Figure 11 plots the histogram of $\text{sim}_S(M_\text{fixed}, M_\text{relaxed})$ for 1000 test-set conformers $M_\text{fixed}$ whose bonding geometries have been fixed, and the original RDKit-generated conformers $M_\text{relaxed}$ with relaxed (true) bonding geometries. In the vast majority of cases, fixing the bonding geometries negligibly impacts the global shape of the 3D molecule ($\text{sim}_S(M_\text{fixed}, M_\text{relaxed}) \approx 1$). This is because the main factor influencing global molecular shape is rotatable bonds (i.e., flexible dihedrals), which are not altered by fixing bond distances and angles.
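A hedged RDKit sketch of this geometry-fixing step, together with the dihedral transfer used in the recovery procedure described next; `mean_lengths` is a placeholder for the empirical bond-length table, the traversal is naive (successive angle assignments around one center can interact), and none of this is the authors' released code.

```python
from rdkit import Chem
from rdkit.Chem import rdMolTransforms

ANGLE_BY_HYB = {Chem.HybridizationType.SP3: 109.5,
                Chem.HybridizationType.SP2: 120.0,
                Chem.HybridizationType.SP: 180.0}

def fix_acyclic_geometry(mol, mean_lengths, conf_id=0):
    """Set acyclic bond lengths to empirical means and acyclic bond
    angles to hybridization-based heuristics (App. A.8). mean_lengths:
    dict mapping sorted (atomic_num, atomic_num) pairs to Angstroms."""
    conf = mol.GetConformer(conf_id)
    for bond in mol.GetBonds():
        if bond.IsInRing():
            continue
        i, j = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()
        key = tuple(sorted((mol.GetAtomWithIdx(i).GetAtomicNum(),
                            mol.GetAtomWithIdx(j).GetAtomicNum())))
        if key in mean_lengths:
            rdMolTransforms.SetBondLength(conf, i, j, mean_lengths[key])
    for atom in mol.GetAtoms():
        # Only non-ring centers: their incident bonds cannot be ring bonds.
        if atom.IsInRing() or atom.GetHybridization() not in ANGLE_BY_HYB:
            continue
        nbrs = [n.GetIdx() for n in atom.GetNeighbors()]
        for a in range(len(nbrs)):
            for b in range(a + 1, len(nbrs)):
                rdMolTransforms.SetAngleDeg(
                    conf, nbrs[a], atom.GetIdx(), nbrs[b],
                    ANGLE_BY_HYB[atom.GetHybridization()])

def copy_rotatable_dihedrals(src_conf, dst_conf, torsions):
    """Transfer rotatable-bond dihedrals from a generated conformer to a
    relaxed one (the recovery procedure below); torsions is a list of
    4-tuples of atom indices defining each rotatable bond."""
    for a, b, c, d in torsions:
        angle = rdMolTransforms.GetDihedralDeg(src_conf, a, b, c, d)
        rdMolTransforms.SetDihedralDeg(dst_conf, a, b, c, d, angle)
```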
Recovering refined bonding geometries. Even though fixing bond distances and angles only marginally impacts molecular shape, we still may wish to recover refined bonding geometries of the generated 3D molecules without altering the generated 3D shape. We can accomplish this (to a first approximation) for generated molecules by creating a geometrically relaxed conformation of the generated molecular graph with RDKit, and then manually setting the dihedrals of the rotatable bonds in the relaxed conformer to match the corresponding dihedrals in the generated conformer. Importantly, if we perform this relaxation procedure for both the dataset molecules and the SQUID-generated molecules, the (relaxed) generated molecules still have significantly enriched shape similarity to the (relaxed) target shape compared to (relaxed) random molecules from the dataset (Fig. 12).

A.9 ABLATING EQUIVARIANCE

SQUID aligns the equivariant representations of the encoded target shape and the partially generated structures in order to generate 3D conformations that natively fit the target shape, without having to implicitly learn SE(3)-alignments (Challenge 2). We achieve this in Equation 7, where we mix the equivariant representations of $M_S$ and the partially generated structure $M'^{(c-1)}_l$. To empirically motivate this design choice, we ablate the equivariant alignment by setting $\tilde{Z} = \mathbf{0}$ in Eq. 7. We denote this ablated model as SQUID-NoEqui. Note that because we still pass the unablated invariant features $z$ to the decoder (Eq. 8), SQUID-NoEqui is still conditioned on the shape of $M_S$; the model simply no longer has access to any explicit information about the relative spatial orientation of $M'^{(c-1)}_l$ to $M_S$ (and thus must learn this spatial relationship from scratch).

As expected, ablating SQUID's equivariance significantly reduces SQUID's ability to generate chemically diverse molecules that fit the target shape. Figure 13 plots the distributions of $\text{sim}_S(M', M_S)$ for the best of $N_\text{max}$ generated molecules with $\text{sim}_G(M', M_S) < 0.7$ or $0.3$ when using SQUID or SQUID-NoEqui. Crucially, the mean shape similarity when sampling with ($\lambda = 1.0$, $N_\text{max} = 20$, $\text{sim}_G(M', M_S) < 0.7$) decreases from 0.828 (SQUID) to 0.805 (SQUID-NoEqui). When sampling with ($\lambda = 0.3$, $N_\text{max} = 20$, $\text{sim}_G(M', M_S) < 0.7$), the mean shape similarity also decreases, from 0.879 (SQUID) to 0.839 (SQUID-NoEqui). Relative to the mean shape similarity of 0.758 achieved by sampling random molecules from the dataset ($N_\text{max} = 20$, $\text{sim}_G(M', M_S) < 0.7$), this corresponds to a substantial 33% reduction in the shape-enrichment of SQUID-generated molecules.

Interestingly, sampling ($\lambda = 1.0$, $N_\text{max} = 20$, $\text{sim}_G(M', M_S) < 0.7$) with SQUID-NoEqui still yields shape-enriched molecules compared to analogously sampling random molecules from the dataset (mean shape similarity of 0.805 vs. 0.758). This is because even without the equivariant feature alignment, SQUID-NoEqui still conditions molecular generation on the (invariant) encoding of the target shape $S$, and hence biases generation towards molecules which better fit the target shape (after alignment with ROCS).

A.10 AUXILIARY TRAINING LOSSES

We employ two auxiliary losses when training the graph generator in order to encourage the generated graphs to better fit the encoded target shape. The first auxiliary loss penalizes the graph generator if it adds an incorrect atom/fragment to the focus that is of significantly different size than the correct (ground-truth) atom/fragment. We first compute a matrix $\Delta V_f \in \mathbb{R}^{|L_f| \times |L_f|}_{+}$ containing the (pairwise) volume differences between all atoms/fragments in the library $L_f$:

$$\Delta V^{(i,j)}_f = |v_{f_i} - v_{f_j}|, \qquad (15)$$

where $v_{f_i}$ is the volume of atom/fragment $f_i \in L_f$ (computed with RDKit).
We then compute the auxiliary loss $L_\text{next-shape}$ as:

$$L_\text{next-shape} = \frac{1}{|L_f|}\left(p_\text{next} \cdot \Delta V^{(g)}_f\right), \qquad (16)$$

where $g$ is the index of the correct (ground-truth) next atom/fragment $f_\text{next, true}$, $\Delta V^{(g)}_f$ is the $g$-th row of $\Delta V_f$, and $p_\text{next}$ are the predicted probabilities over the next atom/fragment types to be connected to the focus (see Eq. 10).

The second auxiliary loss penalizes the graph generator if it prematurely stops (local) generation, with larger penalties if the premature stop would result in larger portions of the (ground-truth) graph not being generated. When predicting (local) stop tokens during graph generation (with teacher forcing), we compute the number of atoms in the subgraph induced by the subtree whose root tree-node is the next atom/fragment to be added to the focus (in the current generation sequence). We then multiply the predicted probability for the local stop token by this number of "future" atoms that would not be generated if a premature stop token were generated. Hence, if the correct action is indeed to stop generation around the focus, the penalty will be zero. However, if the correct action is to add a large fragment to the current focus but the generator predicts a stop token, the penalty will be large. Formally, we compute:

$$L_{\varnothing\text{-shape}} = \begin{cases} p_\varnothing\, |G_{S_{T_\text{next}}}| & \text{if } p_{\varnothing,\text{true}} = 0 \\ 0 & \text{otherwise,} \end{cases} \qquad (17)$$

where $p_{\varnothing,\text{true}}$ is the ground-truth action for local stopping ($p_{\varnothing,\text{true}} = 0$ indicates that the correct action is to not stop local generation), and $G_{S_{T_\text{next}}}$ is the subgraph induced by the subtree whose root node is the next atom/fragment (to be generated) in the ground-truth molecular graph.

A.11 OVERVIEW OF VECTOR NEURONS (VN) OPERATIONS

In this work, we use Deng et al. (2021)'s VN-DGCNN to encode molecular point clouds into equivariant shape features. We also employ their general VN operations (VN-MLP, VN-Inv) during shape and chemical feature mixing. We refer readers to Deng et al. (2021) for a detailed description of these equivariant operations and models. Here, we briefly summarize some relevant VN operations for the reader's convenience.

VN-MLP. Vector neurons (VN) lift scalar neuron features to vector features in $\mathbb{R}^3$. Hence, instead of having features $x \in \mathbb{R}^q$, we have vector features $\tilde{X} \in \mathbb{R}^{q \times 3}$. While linear transformations are naturally equivariant to global rotations, since $W(\tilde{X}R) = (W\tilde{X})R$ for a rotation matrix $R \in \mathbb{R}^{3 \times 3}$, Deng et al. (2021) construct a set of non-linear equivariant operations $\tilde{f}$ such that $\tilde{f}(\tilde{X}R) = \tilde{f}(\tilde{X})R$, thereby enabling natively equivariant network design. VN-MLPs combine linear transformations with equivariant activations. In this work, we use VN-LeakyReLU, which Deng et al. (2021) define as:

$$\text{VN-LeakyReLU}(\tilde{X}; \alpha) = \alpha \tilde{X} + (1 - \alpha)\, \text{VN-ReLU}(\tilde{X}), \qquad (18)$$

where

$$\text{VN-ReLU}(\tilde{X}) = \begin{cases} \tilde{x}, & \text{if } \tilde{x} \cdot \frac{\tilde{k}}{\|\tilde{k}\|} \geq 0 \\[4pt] \tilde{x} - \left(\tilde{x} \cdot \frac{\tilde{k}}{\|\tilde{k}\|}\right) \frac{\tilde{k}}{\|\tilde{k}\|}, & \text{otherwise} \end{cases} \quad \forall\, \tilde{x} \in \tilde{X}, \qquad (19)$$

where $\tilde{k} = U\tilde{X}$ for a learnable weight matrix $U \in \mathbb{R}^{1 \times q}$, and where $\tilde{x} \in \mathbb{R}^3$. By composing series of linear transformations and equivariant activations, VN-MLPs map $\tilde{X} \in \mathbb{R}^{q \times 3}$ to $\tilde{X}' \in \mathbb{R}^{q' \times 3}$ such that $\tilde{X}'R = \text{VN-MLP}(\tilde{X}R)$.

VN-Inv. Deng et al. (2021) also define learnable operations that map equivariant features $\tilde{X} \in \mathbb{R}^{q \times 3}$ to invariant features $x \in \mathbb{R}^{3q}$. In general, VN-Inv constructs invariant features by multiplying equivariant features $\tilde{X}$ with other equivariant features $\tilde{Y} \in \mathbb{R}^{3 \times 3}$:

$$\hat{X} = \tilde{X} \tilde{Y}^\top. \qquad (20)$$

The invariant features $\hat{X} \in \mathbb{R}^{q \times 3}$ can then be reshaped into standard invariant features $x \in \mathbb{R}^{3q}$.
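A minimal PyTorch sketch of VN-LeakyReLU (Eqs. 18-19); this is our own re-implementation sketch, not Deng et al.'s released code, and the small epsilon guarding the norm is an added numerical convenience.

```python
import torch
import torch.nn as nn

class VNLeakyReLU(nn.Module):
    """Sketch of VN-LeakyReLU (Eqs. 18-19, after Deng et al., 2021).
    Input: equivariant features x of shape (..., q, 3)."""
    def __init__(self, q, alpha=0.2):
        super().__init__()
        self.alpha = alpha
        self.map_k = nn.Linear(q, 1, bias=False)  # learns direction k = U x

    def forward(self, x):
        k = self.map_k(x.transpose(-1, -2)).transpose(-1, -2)  # (..., 1, 3)
        k = k / (k.norm(dim=-1, keepdim=True) + 1e-8)
        dot = (x * k).sum(dim=-1, keepdim=True)                # (..., q, 1)
        x_relu = torch.where(dot >= 0, x, x - dot * k)         # Eq. 19
        return self.alpha * x + (1 - self.alpha) * x_relu      # Eq. 18
```

Because k is itself a learned linear combination of the equivariant channels, rotating the input rotates k as well, and the projection in Eq. 19 commutes with rotations.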
In our work, we slightly modify Deng et al. (2021)'s original formulation. Given a set of equivariant features $\tilde{X} = \{\tilde{X}^{(i)}\} \in \mathbb{R}^{n \times q \times 3}$, we define a VN-Inv as:

$$\text{VN-Inv}(\tilde{X}) = X, \qquad (21)$$

where $X = \{x^{(i)}\} \in \mathbb{R}^{n \times 6q}$ and:

$$x^{(i)} = \text{Flatten}(\tilde{V}^{(i)} \tilde{T}_i^\top), \qquad (22)$$

$$\tilde{V}^{(i)} = \begin{cases} (\tilde{X}^{(i)},\ \sum_i \tilde{X}^{(i)}) & \text{if } n > 1 \\ \tilde{X}^{(i)} & \text{otherwise,} \end{cases} \qquad (23)$$

$$\tilde{T}_i = \text{VN-MLP}(\tilde{V}^{(i)}), \qquad (24)$$

where $\tilde{T}_i \in \mathbb{R}^{3 \times 3}$, and $\tilde{V}^{(i)} \in \mathbb{R}^{2q \times 3}$ ($n > 1$) or $\tilde{V}^{(i)} \in \mathbb{R}^{q \times 3}$ ($n = 1$).

VN-DGCNN. Deng et al. (2021) introduce VN-DGCNN as an SO(3)-equivariant version of the Dynamic Graph Convolutional Neural Network (Wang et al., 2019). Given a point cloud $P \in \mathbb{R}^{n \times 3}$, VN-DGCNN uses (dynamic) equivariant edge convolutions to update equivariant per-point features:

$$\tilde{E}^{(t+1)}_{nm} = \text{VN-LeakyReLU}^{(t)}\left(\Theta^{(t)}(\tilde{X}^{(t)}_m - \tilde{X}^{(t)}_n) + \Phi^{(t)} \tilde{X}^{(t)}_n\right), \qquad (25)$$

$$\tilde{X}^{(t+1)}_n = \sum_{m \in \text{KNN}_f(n)} \tilde{E}^{(t+1)}_{nm}, \qquad (26)$$

where $\text{KNN}_f(n)$ are the $k$-nearest neighbors of point $n$ in feature space, $\Phi^{(t)}$ and $\Theta^{(t)}$ are weight matrices, and $\tilde{X}^{(t)}_n \in \mathbb{R}^{q \times 3}$ are the per-point equivariant features.

A.12 GRAPH NEURAL NETWORKS

In this work, we employ graph neural networks (GNNs) to encode:

• each atom/fragment in the library $L_f$
• the target molecule $M_S$
• each partial molecular structure $M'^{(c)}_l$ during sequential graph generation
• the query structures $M'^{(\psi_\text{foc})}_{l+1}$ when scoring rotatable bonds

Our GNNs are loosely based upon a simple version of the EGNN (Satorras et al., 2022b). Given a molecular graph $G$ with atoms as nodes and bonds as edges, we use graph convolutional layers defined by the following:

$$m^{t+1}_{ij} = \phi^t_m\left(h^t_i,\ h^t_j,\ \|r_i - r_j\|^2,\ m^t_{ij}\right), \qquad (27)$$

$$m^{t+1}_i = \sum_{j \in N(i)} m^{t+1}_{ij}, \qquad (28)$$

$$h^{(t=1)}_i = \phi^{(0)}_h\left(h^0_i,\ m^{(t=1)}_i\right), \qquad (29)$$

$$h^{t+1}_i = \phi^t_h\left(h^t_i,\ m^{t+1}_i\right) + h^t_i \quad (t > 0), \qquad (30)$$

where $h^t_i$ are the learned atom embeddings at each GNN layer, $m^t_{ij}$ are learned (directed) messages, $r_i \in \mathbb{R}^3$ are the coordinates of atom $i$, $N(i)$ is the set of 1-hop bonded neighbors of atom $i$, and each $\phi^t_m$, $\phi^t_h$ is an MLP. Note that $h^0_i$ are the initial atom features, and $m^0_{ij}$ are the initial bond features for the bond between atoms $i$ and $j$. In general, $m^t_{ij} \neq m^t_{ji}$ for $t > 0$, but here $m^0_{ij} = m^0_{ji}$. Note that since we only aggregate messages from directly bonded neighbors, $\|r_i - r_j\|$ only encodes bond distances and does not encode any information about specific 3D conformations. Hence, our GNNs effectively only encode 2D chemical identity, as opposed to 3D shape.
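A PyTorch sketch of the message-passing layer in Eqs. 27-30; the dense-adjacency formulation and the MLP widths are our own simplifications (the first-layer variant of Eq. 29 without the residual is noted in a comment).

```python
import torch
import torch.nn as nn

class SimpleEGNNLayer(nn.Module):
    """Sketch of Eqs. 27-30: messages depend only on bonded neighbors and
    squared bond distances, so the layer encodes 2D chemistry plus bond
    lengths, not full 3D conformation. Dense adjacency for brevity."""
    def __init__(self, d_h, d_m):
        super().__init__()
        self.phi_m = nn.Sequential(nn.Linear(2 * d_h + 1 + d_m, d_m), nn.SiLU())
        self.phi_h = nn.Sequential(nn.Linear(d_h + d_m, d_h), nn.SiLU())

    def forward(self, h, m, coords, adj):
        # h: (n, d_h) atom features; m: (n, n, d_m) directed messages;
        # coords: (n, 3); adj: (n, n) 0/1 bond adjacency matrix.
        n = h.shape[0]
        d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1, keepdim=True)
        pair = torch.cat([h[:, None, :].expand(n, n, -1),
                          h[None, :, :].expand(n, n, -1), d2, m], dim=-1)
        m_new = self.phi_m(pair) * adj[..., None]               # Eq. 27, masked
        m_agg = m_new.sum(dim=1)                                # Eq. 28
        # Eq. 30 (for t > 0); Eq. 29 would omit the residual at t = 0.
        h_new = self.phi_h(torch.cat([h, m_agg], dim=-1)) + h
        return h_new, m_new
```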
A.13 FRAGMENT LIBRARY

Our atom/fragment library $L_f$ includes 100 distinct fragments (Fig. 14) and 24 unique atom types. The 100 fragments were selected based on the top-100 most frequently occurring fragments in our training set. In this work, we specify fragments as ring-containing substructures that do not contain any acyclic single bonds. However, in principle fragments could be any (valid) chemical substructure. Note that we only use one (geometrically optimized) conformation per fragment, which is assumed to be rigid. Hence, in its current implementation, SQUID does not consider different ring conformations (e.g., boat vs. chair conformations of cyclohexane).

A.14 MODEL PARAMETERS

Parameter sharing. For both the graph generator and the rotatable bond scorer, the (variational) molecule encoder (in the Encoder, Fig. 2) and the partial molecule encoder (in the Decoder, Fig. 2) share the same fragment encoder ($L_f$-GNN), which is trained end-to-end with the rest of the model. Apart from the $L_f$-GNN, these encoders do not share any learnable parameters, despite having parallel architectures. The graph generator and the rotatable bond scorer are completely independent, and are trained separately.

Hyperparameters. Tables 6 and 7 tabulate the set of hyperparameters used for SQUID across all the experiments conducted in this paper. Table 8 summarizes training and generation parameters, but we refer the reader to App. A.15 and A.16 for a more detailed discussion of training and generation protocols. Because of the large hyperparameter search space and long training times, we did not perform extensive hyperparameter optimizations. We manually tuned the learning rates and schedulers to maintain training stability, and we maxed out batch sizes given memory constraints. We set $\beta_{\varnothing\text{-shape}} = 10$ and $\beta_\text{next-shape} = 10$ to make the magnitudes of $L_{\varnothing\text{-shape}}$ and $L_\text{next-shape}$ comparable to the other loss components for graph generation. We slowly increase $\beta_\text{KL}$ over the course of training from $10^{-5}$ to a maximum of $10^{-1}$, which we found to provide a reasonable balance between $L_\text{KL}$ and graph reconstruction.

A.15 ADDITIONAL TRAINING DETAILS

Dataset. We use molecules from MOSES (Polykovskiy et al., 2020) to train, validate, and test SQUID. Starting from the train/test sets provided by MOSES, we first generate an RDKit conformer for each molecule, and remove any molecules for which we cannot generate a conformer. Conformers are initially created with the ETKDG algorithm in RDKit, and then separately optimized for 200 iterations with the MMFF force field. We then fix the acyclic bond distances and bond angles for each conformer (App. A.8). Using the molecules from MOSES's train set, we then create the fragment library by extracting the top-100 most frequently occurring fragments (ring-containing substructures without acyclic bonds). We separately generate a 3D conformer for each distinct fragment, optimizing the fragment structures with MMFF for 1000 steps. Given these 100 fragments, we then remove all molecules from the train and test sets containing non-included fragments. From the filtered training set, we then extract 24 unique atom types, which we add to the atom/fragment library $L_f$. We remove any molecule in the test set that contains an atom type not included in these 24. Finally, we randomly split the (filtered) training set into separate training/validation splits. The training split contains 1,058,352 molecules, the validation split contains 264,589 molecules, and the test set contains 146,883 molecules. Each molecule has one conformer.

Collecting training data for graph generation and scoring. We individually supervise each step of autoregressive graph generation and use teacher forcing. We collect the ground-truth generation actions by representing each molecular graph as a tree whose root tree-node is either a terminal atom or a terminal fragment in the graph. A "terminal" atom is only bonded to one neighboring atom. A "terminal" fragment has only one acyclic (rotatable) bond to a neighboring atom/fragment. Starting from this terminal atom/fragment, we construct the molecule according to a breadth-first-search traversal of the generation tree (see Fig. 2); we break ties using RDKit's canonical atom ordering. We augment the data by enumerating all generation trees starting from each possible terminal atom/fragment in the molecule. For each rotatable bond in the generation trees, we collect regression targets for training the scorer by following the procedure outlined in App. A.2.

Batching. When training the graph generator, we batch together graph-generative actions which are part of the same generation sequence (e.g., generating $G'^{(c)}_l$ from $G'^{(c-1)}_l$).
Otherwise, generation sequences are treated independently. When training the rotatable bond scorer, we batch together different query dihedrals $\psi_\text{foc}$ of the same focal bond. Rather than scoring all 36 rotation angles in the same batch, we include the ground-truth rotation angle and randomly sample 9 of the 35 others to include in the batch. Within each batch (for both graph generation and scoring), all the encoded molecules $M_S$ are constrained to have the same number of atoms, and all the partial molecular structures $G'^{(c)}_l$ are constrained to have the same number of atoms. This restriction on batch composition is purely for convenience: the public implementation of VN-DGCNN from Deng et al. (2021) is designed to train on point clouds with the same number of points, and we construct point clouds by sampling a (fixed) $n_p$ points for each atom.

Training setup. We train the graph generator and the rotatable bond scorer separately. For the graph generator, we train for 2M iterations (batches), with a maximum batch size of 400 (generation sequences). We use the Adam optimizer with default parameters. We use an initial learning rate of $2.5 \times 10^{-4}$, which we exponentially decay by a factor of 0.9 every 50K iterations to a minimum of $5 \times 10^{-6}$. We weight the auxiliary losses by $\beta_\text{next-shape} = 10.0$ and $\beta_{\varnothing\text{-shape}} = 10.0$. We log-linearly increase $\beta_\text{KL}$ from $10^{-5}$ to $10^{-1}$ over the first 1M iterations, after which it remains constant at $10^{-1}$. For each generation sequence, we randomize the rotation angle of the bond connecting the focus to the rest of the partial graph (i.e., the focal dihedral), as this dihedral has yet to be scored. In order to make the graph generator more robust to imperfect rotatable bond scoring at generation time, during training we perturb the dihedrals of each rotatable bond in the partially generated structure $M'_l$ by $\delta\psi \sim N(\mu = 0°, \sigma = 15°)$ while fixing the coordinates of the focus. For the rotatable bond scorer, we train for 2M iterations (batches), with a maximum batch size of 32 (focal
1. What is the focus of the paper regarding molecular generation?
2. What are the strengths of the proposed approach, particularly in its task-specific model design?
3. What are the weaknesses of the paper, especially regarding its contributions to the machine learning community?
4. Do you have any concerns about the scoring of rotatable bonds or the periodicity of the rotatable bonds?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
6. Are there any questions regarding the generated molecules' stability and fitness to the required shape?
Summary Of The Paper
The paper designs a novel framework to generate molecules conditioned on the point cloud of a molecule. The authors put a lot of effort into building the model, which consists of VN-DGCNN, a GNN, a rotatable-bond scorer, and so on.

Strengths And Weaknesses
Strengths:
• The paper proposes a complex generative system for molecular generation with very detailed task-specific model design.
• In order to build molecules with 3D conformations, the proposed method introduces a novel rotatable-bond scorer to learn the rotation angle of each fragment.
• The proposed method addresses a critical problem in drug design, namely scaffold hopping.
• The experimental results show the benefits of the proposed method.

Weaknesses:
• The contribution to the machine learning community is limited. The whole model is based on a CVAE framework with two embedding networks, VN-DGCNN and a GNN, and one fragment-based sequential generative decoder. The rotatable-bond scorer is a novel contribution but incremental to the ML community.
• The rotation angles of the bonds are periodic. However, the output of the rotatable-bond scorer is produced by a sigmoid function, which is not a periodic function.
• Since the base framework follows a CVAE, is the proposed method able to reconstruct molecules in terms of the point cloud and 2D graph? It would be better to report the validity, uniqueness, novelty, and reconstruction rate to indicate whether the model learns the information of the molecules properly.
• I am curious: given a fixed shape, what is the diversity of the generated molecules? Are all generated conformers stable at the lowest energy, and do they still fit the required shape?

Clarity, Quality, Novelty And Reproducibility
The paper has very high clarity, quality, and reproducibility. However, the novelty for the machine learning community seems weak.
ICLR
Title
Equivariant Shape-Conditioned Generation of 3D Molecules for Ligand-Based Drug Design

Abstract
Shape-based virtual screening is widely used in ligand-based drug design to search chemical libraries for molecules with similar 3D shapes yet novel 2D graph structures compared to known ligands. 3D deep generative models can potentially automate this exploration of shape-conditioned 3D chemical space; however, no existing models can reliably generate geometrically realistic drug-like molecules in conformations with a specific shape. We introduce a new multimodal 3D generative model that enables shape-conditioned 3D molecular design by equivariantly encoding molecular shape and variationally encoding chemical identity. We ensure local geometric and chemical validity of generated molecules by using autoregressive fragment-based generation with heuristic bonding geometries, allowing the model to prioritize the scoring of rotatable bonds to best align the growing conformation to the target shape. We evaluate our 3D generative model in tasks relevant to drug design including shape-conditioned generation of chemically diverse molecular structures and shape-constrained molecular property optimization, demonstrating its utility over virtual screening of enumerated libraries.

1 INTRODUCTION

Generative models for de novo molecular generation have revolutionized computer-aided drug design (CADD) by enabling efficient exploration of chemical space, goal-directed molecular optimization (MO), and automated creation of virtual chemical libraries (Segler et al., 2018; Meyers et al., 2021; Huang et al., 2021; Wang et al., 2022; Du et al., 2022; Bilodeau et al., 2022). Recently, several 3D generative models have been proposed to directly generate low-energy or (bio)active molecular conformations using 3D convolutional networks (CNNs) (Ragoza et al., 2020), reinforcement learning (RL) (Simm et al., 2020a;b), autoregressive generators (Gebauer et al., 2022; Luo & Ji, 2022), or diffusion models (Hoogeboom et al., 2022). These methods have especially enjoyed accelerated development for structure-based drug design (SBDD), where models are trained to generate drug-like molecules in favorable binding poses inside an explicit protein pocket (Drotár et al., 2021; Luo et al., 2022; Liu et al., 2022; Ragoza et al., 2022). However, SBDD requires atomically-resolved structures of a protein target, assumes knowledge of binding sites, and often ignores dynamic pocket flexibility, rendering these methods less effective in many CADD settings.

Ligand-based drug design (LBDD) does not assume knowledge of protein structure. Instead, molecules are compared against previously identified "actives" on the basis of 3D pharmacophore or 3D shape similarity, under the principle that molecules with similar structures should share similar activity (Vázquez et al., 2020; Cleves & Jain, 2020). In particular, ROCS (Rapid Overlay of Chemical Structures) is commonly used as a shape-based virtual screening tool to identify molecules with similar shapes to a reference inhibitor and has shown promising results for scaffold-hopping tasks (Rush et al., 2005; Hawkins et al., 2007; Nicholls et al., 2010). However, virtual screening relies on enumeration of chemical libraries, fundamentally restricting its ability to probe new chemical space.
Here, we consider the novel task of generating chemically diverse 3D molecular structures conditioned on a molecular shape, thereby facilitating the shape-conditioned exploration of chemical space without the limitations of virtual screening (Fig. 1). Importantly, shape-conditioned 3D molecular generation presents unique challenges not encountered in typical 2D generative models:

Challenge 1. 3D shape-based LBDD involves pairwise comparisons between two arbitrary conformations of arbitrary molecules. Whereas traditional property-conditioned generative models or MO algorithms shift learned data distributions to optimize a single scalar property, a shape-conditioned generative model must generate molecules adopting any reasonable shape encoded by the model.

Challenge 2. Shape similarity metrics that compute volume overlaps between two molecules (e.g., ROCS) require the molecules to be aligned in 3D space. Unlike 2D similarity, the computed shape similarity between the two molecules will change if one of the structures is rotated. This subtly impacts the learning problem: if the model encodes the target 3D shape into an SE(3)-invariant representation, the model must learn how the generated molecule would fit the target shape under the implicit action of an SE(3)-alignment. Alternatively, if the model can natively generate an aligned structure, then the model can more easily learn to construct molecules that fit the target shape.

Challenge 3. A molecule's 2D graph topology and 3D shape are highly dependent; small changes in the graph can strikingly alter the shapes accessible to a molecule. It is thus unlikely that a generative model will reliably generate chemically diverse molecules with similar shapes to an encoded target without 1) simultaneous graph and coordinate generation; and 2) explicit shape-conditioning.

Challenge 4. The distribution of shapes a drug-like molecule can adopt is chiefly influenced by rotatable bonds, the foremost source of molecular flexibility. However, existing 3D generative models are mainly developed using tiny molecules (e.g., fewer than 10 heavy atoms), and cannot generate flexible drug-like molecules while maintaining chemical validity (satisfying valencies), geometric validity (non-distorted bond distances and angles; no steric clashes), and chemical diversity.

To surmount these challenges, we design a new generative model, SQUID¹, to enable the shape-conditioned generation of chemically diverse molecules in 3D. Our contributions are as follows:

• Given a 3D molecule with a target shape, we use equivariant point cloud networks to encode the shape into (rotationally) equivariant features. We then use graph neural networks (GNNs) to variationally encode chemical identity into invariant features. By mixing chemical features with equivariant shape features, we can generate diverse molecules in aligned poses that fit the shape.
• We develop a sequential fragment-based 3D generation procedure that fixes local bond lengths and angles to prioritize the scoring of rotatable bonds. By massively simplifying 3D coordinate generation, we generate drug-like molecules while maintaining chemical and geometric validity.
• We design a rotatable bond scoring network that learns how local bond rotations affect global shape, enabling our decoder to generate 3D conformations that best fit the target shape.
We evaluate the utility of SQUID over virtual screening in shape-conditioned 3D molecular design tasks that mimic ligand-based drug design objectives, including shape-conditioned generation of diverse 3D structures and shape-constrained molecular optimization. To inspire further research, we note that our tasks could also be approached with a hypothetical 3D generative model that disentangles latent variables controlling 2D chemical identity and 3D shape, thus enabling zero-shot generation of topologically distinct molecules with similar shapes to any encoded target.

¹ SQUID: Shape-Conditioned Equivariant Generator for Drug-Like Molecules

2 RELATED WORK

Fragment-based molecular generation. Seminal works in autoregressive molecular generation applied language models to generate 1D SMILES strings character-by-character (Gómez-Bombarelli et al., 2018; Segler et al., 2018), or GNNs to generate 2D molecular graphs atom-by-atom (Liu et al., 2018; Simonovsky & Komodakis, 2018; Li et al., 2018). Recent works construct molecules fragment-by-fragment to improve the chemical validity of intermediate graphs and to scale generation to larger molecules (Podda et al., 2020; Jin et al., 2019; 2020). Our fragment-based decoder is related to MoLeR (Maziarz et al., 2022), which iteratively generates molecules by selecting a new fragment (or atom) to add to the partial graph, choosing attachment sites on the new fragment, and predicting new bonds to the partial graph. Yet, MoLeR only generates 2D graphs; we generate 3D molecular structures. Beyond 2D generation, Flam-Shepherd et al. (2022) use an RL agent to generate 3D molecules by sampling and connecting molecular fragments. However, they sample from a small multiset of fragments, restricting the accessible chemical space. Powers et al. (2022) use fragments to generate 3D molecules inside a protein pocket, but only consider 7 distinct rings.

Generation of drug-like molecules in 3D. In this work, we generate novel drug-like 3D molecular structures in free space, e.g., not conformers given a known molecular graph (Ganea et al., 2021; Jing et al., 2022). Myriad models have been proposed to generate small 3D molecules, such as E(3)-equivariant normalizing flows and diffusion models (Satorras et al., 2022a; Hoogeboom et al., 2022), RL agents with an SE(3)-covariant action space (Simm et al., 2020b), and autoregressive generators that build molecules atom-by-atom with SE(3)-invariant internal coordinates (Luo & Ji, 2022; Gebauer et al., 2022). However, fewer 3D generative models can generate larger drug-like molecules for realistic chemical design tasks. Of these, Hoogeboom et al. (2022) and Arcidiacono & Koes (2021) fail to generate chemically valid molecules, while Ragoza et al. (2020) rely on post-processing and geometry relaxation to extract stable molecules from their generated atom density grids. Only Roney et al. (2021) and Li et al. (2021), who develop autoregressive generators that simultaneously predict graph structure and internal coordinates, have been shown to reliably generate valid drug-like molecules. We also couple graph generation with 3D coordinate prediction; however, we employ fragment-based generation with fixed local geometries to ensure local chemical and geometric validity. Further, we focus on shape-conditioned molecular design; none of these works can natively address the aforementioned challenges posed by shape-conditioned molecular generation.

Shape-conditioned molecular generation.
Other works partially address shape-conditioned 3D molecular generation. Skalic et al. (2019) and Imrie et al. (2021) train networks to generate 1D SMILES strings or 2D molecular graphs conditioned on CNN encodings of 3D pharmacophores. However, they do not generate 3D structures, and the CNNs do not respect Euclidean symmetries. Zheng et al. (2021) use supervised molecule-to-molecule translation on SMILES strings for scaffold-hopping tasks, but do not generate 3D structures. Papadopoulos et al. (2021) use REINVENT (Olivecrona et al., 2017) on SMILES strings to propose molecules whose conformers are shape-similar to a target, but they must re-optimize the agent for each target shape. Roney et al. (2021) fine-tune a 3D generative model on the hits of a ROCS virtual screen of > 10¹⁰ drug-like molecules to shift the learned distribution towards a target shape. Yet, this expensive screening approach must be repeated for each new target. Instead, we seek to achieve zero-shot generation of 3D molecules with similar shapes to any encoded shape, without requiring fine-tuning or post facto optimization.

Equivariant geometric deep learning on point clouds. Various equivariant networks have been designed to encode point clouds for updating coordinates in R³ (Satorras et al., 2022b), predicting tensorial properties (Thomas et al., 2018), or modeling 3D structures natively in Cartesian space (Fuchs et al., 2020). Especially noteworthy are architectures which lift scalar neuron features to vector features in R³ and employ simple operations to mix invariant and equivariant features without relying on expensive higher-order tensor products or Clebsch-Gordan coefficients (Deng et al., 2021; Jing et al., 2021). In this work, we employ Deng et al. (2021)'s Vector Neurons (VN)-based equivariant point cloud encoder VN-DGCNN to encode molecules into equivariant latent representations in order to generate molecules which are natively aligned to the target shape. Two recent works also employ VN operations for structure-based drug design and linker design (Peng et al., 2022; Huang et al., 2022). Huang et al. (2022) also build molecules in free space; however, they generate just a few atoms to connect existing fragments and do not condition on molecular shape.

3 METHODOLOGY

Problem definition. We model a conditional distribution P(M|S) over 3D molecules M = (G, 𝒢) with graph G and atomic coordinates 𝒢 = {r_a ∈ R³}, given a 3D molecular shape S. Specifically, we aim to sample molecules M′ ∼ P(M|S) with high shape similarity (sim_S(M′, M_S) ≈ 1) and low graph (chemical) similarity (sim_G(M′, M_S) < 1) to a target molecule M_S with shape S. This scheme differs from 1) typical 3D generative models that learn P(M) without modeling P(M|S), and from 2) shape-conditioned 1D/2D generators that attempt to model P(G|S), the distribution of molecular graphs that could adopt shape S, but do not actually generate specific 3D conformations.

We define graph (chemical) similarity sim_G ∈ [0, 1] between two molecules as the Tanimoto similarity computed by RDKit with default settings (2048-bit fingerprints). We define shape similarity sim*_S ∈ [0, 1] using Gaussian descriptions of molecular shape, modeling atoms a ∈ M_A and b ∈ M_B from molecules M_A and M_B as isotropic Gaussians in R³ (Grant & Pickup, 1995; Grant et al., 1996). We compute sim*_S using (2-body) volume overlaps between atom-centered Gaussians:

$$\mathrm{sim}^*_S(\mathcal{G}_A, \mathcal{G}_B) = \frac{V_{AB}}{V_{AA} + V_{BB} - V_{AB}}; \quad V_{AB} = \sum_{a \in A,\, b \in B} V_{ab}; \quad V_{ab} \propto \exp\left(-\frac{\alpha}{2}\,\lVert \mathbf{r}_a - \mathbf{r}_b \rVert^2\right), \tag{1}$$

where α controls the Gaussian width. Setting α = 0.81 approximates the shape similarity function used by the ROCS program (App. A.6). sim*_S is sensitive to SE(3) transformations of molecule M_A with respect to molecule M_B. Thus, we define sim_S(M_A, M_B) = max_{R,t} sim*_S(𝒢_A R + t, 𝒢_B) as the shape similarity when M_A is optimally aligned to M_B. We perform such alignments with ROCS.
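For reference, Eq. 1 can be computed directly from two aligned coordinate sets. A minimal NumPy sketch (the constant prefactor on V_ab is the same for all pairs and cancels in the ratio, so it is omitted):

```python
import numpy as np

def gaussian_shape_similarity(coords_a, coords_b, alpha=0.81):
    """Volume-overlap shape similarity between two aligned conformers (Eq. 1).

    coords_a, coords_b: (N, 3) arrays of heavy-atom coordinates.
    """
    def overlap(x, y):
        # Pairwise squared distances between the two atom sets.
        d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
        return np.exp(-0.5 * alpha * d2).sum()

    v_ab = overlap(coords_a, coords_b)
    v_aa = overlap(coords_a, coords_a)
    v_bb = overlap(coords_b, coords_b)
    return v_ab / (v_aa + v_bb - v_ab)
```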
Approach. At a high level, we model P(M|S) with an encoder-decoder architecture. Given a molecule M_S = (G_S, 𝒢_S) with shape S, we encode S (a point cloud) into equivariant features. We then variationally encode G_S into atomic features, conditioned on the shape features. We then mix these shape and atom features to pass global SE(3) {in,equi}variant latent codes to the decoder, which samples new molecules from P(M|S). We autoregressively generate molecules by factoring

$$P(M|S) = P(M_0|S)\,P(M_1|M_0, S)\cdots P(M|M_{n-1}, S),$$

where each M_l = (G_l, 𝒢_l) are partial molecules defined by a BFS traversal of a tree-representation of the molecular graph (Fig. 2). Tree-nodes denote either non-ring atoms or rigid (ring-containing) fragments, and tree-links denote acyclic (rotatable, double, or triple) bonds. We generate M_{l+1} by growing the graph G_{l+1} around a focus atom/fragment, and then predict 𝒢_{l+1} by scoring a query rotatable bond to best fit shape S.

Simplifying assumptions. (1) We ignore hydrogens and only consider heavy atoms, as is common in molecular generation. (2) We only consider molecules with fragments present in our fragment library to ensure that graph generation can be expressed as tree generation. (3) Rather than generating all coordinates, we use rigid fragments, fix bond distances, and set bond angles according to hybridization heuristics (App. A.8); this lets the model focus on scoring rotatable bonds to best fit the growing conformer to the encoded shape. (4) We seed generation with M_0 (the root tree-node), restricted to be a small (3-6 atoms) substructure from M_S; hence, we only model P(M|S, M_0).

3.1 ENCODER

Featurization. We construct a molecular graph G using atoms as nodes and bonds as edges. We featurize each node with the atomic mass; one-hot codes of atomic number, charge, and aromaticity; and one-hot codes of the number of single, double, aromatic, and triple bonds the atom forms (including bonds to implicit hydrogens). This helps us fix bond angles during generation (App. A.8). We featurize each edge with one-hot codes of bond order. We represent a shape S as a point cloud built by sampling n_p points from each of n_h atom-centered Gaussians with (adjustable) variance σ²_p.

Fragment encoder. We also featurize each node with a learned embedding f_i ∈ R^{d_f} of the atom/fragment type to which that atom belongs, making each node "fragment-aware" (similar to MoLeR). In principle, fragments could be any rigid substructure with ≥ 2 atoms. Here, we specify fragments as ring-containing substructures without acyclic single bonds (Fig. 14). We construct a library L_f of atom/fragment types by extracting the top-k (k = 100) most frequent fragments from the dataset and adding these, along with each distinct atom type, to L_f (App. A.13). We then encode each atom/fragment in L_f with a simple GNN (App. A.12) to yield the global atom/fragment embeddings:

$$\left\{ f_i = \sum_a h^{(a)}_{f_i},\ \{h^{(a)}_{f_i}\} = \mathrm{GNN}_{L_f}(G_{f_i})\ \ \forall f_i \in L_f \right\},$$

where h^{(a)}_{f_i} are per-atom features.

Shape encoder. Given M_S with n_h heavy atoms, we use VN-DGCNN (App. A.11) to encode the molecular point cloud P_S ∈ R^{(n_h n_p)×3} into a set of equivariant per-point vector features X̃_p ∈ R^{(n_h n_p)×q×3}.
We then locally mean-pool the n_p equivariant features per atom:

$$\tilde{X}_p = \text{VN-DGCNN}(P_S); \quad \tilde{X} = \text{LocalPool}(\tilde{X}_p), \tag{2}$$

where X̃ ∈ R^{n_h×q×3} are per-atom equivariant representations of the molecular shape. Because VN operations are SO(3)-equivariant, rotating the point cloud will rotate X̃: X̃R = LocalPool(VN-DGCNN(P_S R)). Although VN operations are strictly SO(3)-equivariant, we subtract the molecule's centroid from the atomic coordinates prior to encoding, making X̃ effectively SE(3)-equivariant. Throughout this work, we denote SO(3)-equivariant vector features with tildes.

Variational graph encoder. To model P(M|S), we first use a GNN (App. A.12) to encode G_S into learned atom embeddings H = {h^{(a)} ∀ a ∈ G_S}. We condition the GNN on per-atom invariant shape features X = {x^{(a)}} ∈ R^{n_h×6q}, which we form by passing X̃ through a VN-Inv (App. A.11):

$$H = \mathrm{GNN}((H_0, X); G_S); \quad X = \text{VN-Inv}(\tilde{X}), \tag{3}$$

where H_0 ∈ R^{n_h×(d_a+d_f)} are the set of initial atom features concatenated with the learned fragment embeddings, H ∈ R^{n_h×d_h}, and (·, ·) denotes concatenation in the feature dimension. For each atom in M_S, we then encode h^{(a)}_μ, h^{(a)}_{log σ²} = MLP(h^{(a)}) and sample h^{(a)}_var ∼ N(h^{(a)}_μ, h^{(a)}_σ):

$$H_{\text{var}} = \left\{ h^{(a)}_{\text{var}} = h^{(a)}_\mu + \epsilon^{(a)} \odot h^{(a)}_\sigma;\ \ h^{(a)}_\sigma = \exp\!\left(\tfrac{1}{2}\, h^{(a)}_{\log \sigma^2}\right)\ \ \forall a \in G_S \right\}, \tag{4}$$

where ε^{(a)} ∼ N(0, 1) ∈ R^{d_h}, H_var ∈ R^{n_h×d_h}, and ⊙ denotes elementwise multiplication. Here, the second argument of N(·, ·) is the standard deviation vector of the diagonal covariance matrix.

Mixing shape and variational features. The variational atom features H_var are insensitive to rotations of S. However, we desire the decoder to construct molecules in poses that are natively aligned to S (Challenge 2). We achieve this by conditioning the decoder on an equivariant latent representation of P(M|S) that mixes both shape and chemical information. Specifically, we mix H_var with X̃ by encoding each h^{(a)}_var ∈ H_var into linear transformations, which are applied atom-wise to X̃. We then pass the mixed equivariant features through a separate VN-MLP (App. A.11):

$$\tilde{X}_{H_{\text{var}}} = \left\{ \text{VN-MLP}\!\left(W^{(a)}_H \tilde{X}^{(a)},\, \tilde{X}^{(a)}\right);\ \ W^{(a)}_H = \text{Reshape}\!\left(\text{MLP}(h^{(a)}_{\text{var}})\right)\ \ \forall a \in G_S \right\}, \tag{5}$$

where W^{(a)}_H ∈ R^{q′×q}, X̃^{(a)} ∈ R^{q×3}, and X̃_{H_var} ∈ R^{n_h×d_z×3}. This maintains equivariance since W^{(a)}_H are rotationally invariant and W^{(a)}_H(X̃^{(a)}R) = (W^{(a)}_H X̃^{(a)})R for a rotation R. Finally, we sum-pool the per-atom features in X̃_{H_var} into a global equivariant representation Z̃ ∈ R^{d_z×3}. We also embed a global invariant representation z ∈ R^{d_z} by applying a VN-Inv to X̃_{H_var}, concatenating the output with H_var, passing through an MLP, and sum-pooling the resultant per-atom features:

$$\tilde{Z} = \sum_a \tilde{X}^{(a)}_{H_{\text{var}}}; \quad z = \sum_a \text{MLP}\!\left(x^{(a)}_{H_{\text{var}}}, h^{(a)}_{\text{var}}\right); \quad x^{(a)}_{H_{\text{var}}} = \text{VN-Inv}\!\left(\tilde{X}^{(a)}_{H_{\text{var}}}\right). \tag{6}$$
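The mixing operation of Eq. 5 reduces to predicting a per-atom linear map from the invariant embedding and applying it to the equivariant features. A minimal PyTorch sketch (dimension names are placeholders, and the trailing VN-MLP of Eq. 5 is omitted here):

```python
import torch
import torch.nn as nn

class MixShapeAndChemistry(nn.Module):
    """Sketch of Eq. 5: invariant atom embeddings h_var parameterize linear
    maps W_H that act on equivariant shape features X. Not the released code."""
    def __init__(self, d_h: int, q: int, q_prime: int):
        super().__init__()
        self.to_weights = nn.Linear(d_h, q_prime * q)  # h_var -> W_H
        self.q, self.q_prime = q, q_prime

    def forward(self, h_var: torch.Tensor, x_eq: torch.Tensor) -> torch.Tensor:
        # h_var: (n_atoms, d_h) invariant; x_eq: (n_atoms, q, 3) equivariant.
        w = self.to_weights(h_var).view(-1, self.q_prime, self.q)
        # W_H (X R) = (W_H X) R, so the mixed features remain equivariant.
        mixed = torch.bmm(w, x_eq)  # (n_atoms, q_prime, 3)
        # Eq. 5 concatenates the mixed and raw features before a VN-MLP.
        return torch.cat([mixed, x_eq], dim=1)
```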
3.2 DECODER

Given M_S, we sample new molecules M′ ∼ P(M|S, M_0) by encoding P_S into equivariant shape features X̃, variationally sampling h^{(a)}_var for each atom in M_S, mixing H_var with X̃, and passing the resultant (Z̃, z) to the decoder. We seed generation with a small structure M_0 (extracted from M_S), and build M′ by sequentially generating larger structures M′_{l+1} in a tree-like manner (Fig. 2). Specifically, we grow new atoms/fragments around a "focus" atom/fragment in M′_l, which is popped from a BFS queue. To generate M′_{l+1} from M′_l (e.g., grow the tree from the focus), we factor

$$P(M_{l+1}|M_l, S) = P(G_{l+1}|M_l, S)\,P(\mathcal{G}_{l+1}|G_{l+1}, M_l, S).$$

Given (Z̃, z), we sample the new graph G′_{l+1} by iteratively attaching (a variable) C new atoms/fragments (children tree-nodes) around the focus, yielding G′^{(c)}_l for c = 1, ..., C, where G′^{(C)}_l = G′_{l+1} and G′^{(0)}_l = G′_l. We then generate coordinates 𝒢′_{l+1} by scoring the (rotatable) bond between the focus and its parent tree-node. New bonds from the focus to its children are left unscored in M′_{l+1} until the children become "in focus".

Partial molecule encoder. Before bonding each new atom/fragment to the focus (or scoring bonds), we encode the partial molecule M′^{(c−1)}_l with the same scheme as for M_S (using a parallel encoder; Fig. 2), except we do not variationally embed H′.² Instead, we process H′ analogously to H_var. Further, in addition to globally pooling the per-atom embeddings to obtain Z̃′ = Σ_a X̃′^{(a)}_H and z′ = Σ_a x′^{(a)}_H, we also selectively sum-pool the embeddings of the atom(s) in focus, yielding Z̃′_foc = Σ_{a∈focus} X̃′^{(a)}_H and z′_foc = Σ_{a∈focus} x′^{(a)}_H. We then align the equivariant representations of M′^{(c−1)}_l and M_S by concatenating Z̃, Z̃′, Z̃ − Z̃′, and Z̃′_foc and passing these through a VN-MLP:

$$\tilde{Z}_{\text{dec}} = \text{VN-MLP}(\tilde{Z}, \tilde{Z}', \tilde{Z} - \tilde{Z}', \tilde{Z}'_{\text{foc}}). \tag{7}$$

Note that Z̃_dec ∈ R^{q×3} is equivariant to rotations of the overall system (M′^{(c−1)}_l, M_S). Finally, we form a global invariant feature z_dec ∈ R^{d_dec} to condition graph (or coordinate) generation:

$$z_{\text{dec}} = (\text{VN-Inv}(\tilde{Z}_{\text{dec}}), z, z', z - z', z'_{\text{foc}}). \tag{8}$$

Graph generation. We factor P(G_{l+1}|M_l, S) into a sequence of generation steps by which we iteratively connect children atoms/fragments to the focus until the network generates a (local) stop token. Fig. 2 sketches a generation sequence by which a new atom/fragment is attached to the focus, yielding G′^{(c)}_l from G′^{(c−1)}_l. Given z_dec, the model first predicts whether to stop (local) generation via p_∅ = sigmoid(MLP_∅(z_dec)) ∈ (0, 1). If p_∅ ≥ τ_∅ (a threshold, App. A.16), we stop and proceed to bond scoring. Otherwise, we select which atom a_foc on the focus (if multiple) to grow from:

$$p_{\text{focus}} = \mathrm{softmax}\left(\{\text{MLP}_{\text{focus}}(z_{\text{dec}}, x'^{(a)}_H)\ \forall a \in \text{focus}\}\right). \tag{9}$$

The decoder then predicts which atom/fragment f_next ∈ L_f to connect to the focus next:

$$p_{\text{next}} = \mathrm{softmax}\left(\{\text{MLP}_{\text{next}}(z_{\text{dec}}, x'^{(a_{\text{foc}})}_H, f_{f_i})\ \forall f_i \in L_f\}\right). \tag{10}$$

If the selected f_next is a fragment, we predict the attachment site a_site on the fragment G_{f_next}:

$$p_{\text{site}} = \mathrm{softmax}\left(\{\text{MLP}_{\text{site}}(z_{\text{dec}}, x'^{(a_{\text{foc}})}_H, f_{\text{next}}, h^{(a)}_{f_{\text{next}}})\ \forall a \in G_{f_{\text{next}}}\}\right), \tag{11}$$

where h^{(a)}_{f_next} are the encoded atom features for G_{f_next}. Lastly, we predict the bond order (1°, 2°, 3°) via p_bond = softmax(MLP_bond(z_dec, x′^{(a_foc)}_H, f_next, h^{(a_site)}_{f_next})). We repeat this sequence of steps until p_∅ ≥ τ_∅, yielding G_{l+1}. At each step, we greedily select the action after masking actions that violate known chemical valence rules. After each sequence, we bond a new atom or fragment to the focus, giving G′^{(c)}_l. If an atom, the atom's position relative to the focus is fixed by heuristic bonding geometries (App. A.8). If a fragment, the position of the attachment site is fixed, but the dihedral of the new bond is yet unknown. Thus, in subsequent generation steps we only encode the attachment site and mask the remaining atoms in the new fragment until that fragment is "in focus" (Fig. 2). This means that prior to bond scoring, the rotation angle of the focus is random. To account for this when training (with teacher forcing), we randomize the focal dihedral when encoding each M′^{(c−1)}_l.

² We have dropped the (c) notation for clarity. However, each Z_dec is specific to each (M′^{(c−1)}_l, M_S) system.
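The greedy, valence-masked action selection used at each step above can be sketched as follows. This is a minimal illustration (the `valid` mask stands in for RDKit-based valence checks; the names are ours, not the released implementation's):

```python
import torch

def greedy_valid_action(logits: torch.Tensor, valid: torch.Tensor) -> int:
    """Greedily pick the best action after masking chemically invalid ones.

    logits: (n_actions,) unnormalized scores, e.g. from MLP_next over L_f.
    valid:  (n_actions,) boolean mask; False marks actions that would
            violate chemical valence rules.
    """
    masked = logits.masked_fill(~valid, float("-inf"))
    return int(torch.argmax(masked))  # argmax of logits == argmax of softmax

def local_generation_step(p_stop: float, tau_stop: float,
                          next_logits: torch.Tensor, valid: torch.Tensor):
    """One local decision: stop growing around the focus, or attach greedily."""
    if p_stop >= tau_stop:
        return None  # local stop token: proceed to rotatable-bond scoring
    return greedy_valid_action(next_logits, valid)
```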
Scoring rotatable bonds. After sampling G′_{l+1} ∼ P(G_{l+1}|M′_l, S), we generate 𝒢′_{l+1} by scoring the rotation angle ψ′_{l+1} of the bond connecting the focus to its parent node in the generation tree (Fig. 2). Since we ultimately seek to maximize sim_S(M′, M_S), we exploit the fact that our model generates shape-aligned structures to predict

$$\max_{\psi'_{l+2}, \psi'_{l+3}, \dots} \mathrm{sim}^*_S(\mathcal{G}'(\psi_{\text{foc}}), \mathcal{G}_S)$$

for various query dihedrals ψ′_{l+1} = ψ_foc of the focus rotatable bond in a supervised regression setting. Intuitively, the scorer is trained to predict how the choice of ψ_foc affects the maximum possible shape similarity of the final molecule M′ to the target M_S under an optimal policy. App. A.2 details how regression targets are computed. During generation, we sweep over each query ψ_foc ∈ [−π, π), encode each resultant structure M′^{(ψ_foc)}_{l+1} into z^{(ψ_foc)}_{dec, scorer}³, and select the ψ_foc that maximizes the predicted score:

$$\psi'_{l+1} = \arg\max_{\psi_{\text{foc}}} \mathrm{sigmoid}\!\left(\text{MLP}_{\text{scorer}}(z^{(\psi_{\text{foc}})}_{\text{dec, scorer}})\right). \tag{12}$$

At generation time, we also score chirality by enumerating stereoisomers 𝒢^χ_foc ∈ G′_foc of the focus and selecting the (𝒢^χ_foc, ψ_foc) pair that maximizes Eq. 12 (App. A.2).

Training. We supervise each step of graph generation with a multi-component loss function:

$$L_{\text{graph-gen}} = L_\varnothing + L_{\text{focus}} + L_{\text{next}} + L_{\text{site}} + L_{\text{bond}} + \beta_{\text{KL}} L_{\text{KL}} + \beta_{\text{next-shape}} L_{\text{next-shape}} + \beta_{\varnothing\text{-shape}} L_{\varnothing\text{-shape}}. \tag{13}$$

L_∅, L_focus, L_next, and L_bond are standard cross-entropy losses. L_site = −log(Σ_a p^{(a)}_site 𝕀[c_a > 0]) is a modified cross-entropy loss that accounts for symmetric attachment sites in the fragments G_{f_i} ∈ L_f, where p^{(a)}_site are the predicted attachment-site probabilities and c_a are multi-hot class probabilities. L_KL is the KL-divergence between the learned N(h_μ, h_σ) and the prior N(0, 1). We also employ two auxiliary losses L_next-shape and L_∅-shape in order to 1) help the generator distinguish between incorrect shape-similar (near-miss) vs. shape-dissimilar fragments, and 2) encourage the generator to generate structures that fill the entire target shape (App. A.10). We train the rotatable bond scorer separately from the generator with an MSE regression loss. See App. A.15 for training details.

³ We train the scorer independently from the graph generator, but with a parallel architecture. Hence, z_dec ≠ z_dec, scorer. The main architectural difference between the two models (graph generator and scorer) is that we do not variationally encode H_scorer into H_var,scorer, as we find it does not impact empirical performance.
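At generation time, the sweep in Eq. 12 enumerates evenly spaced query dihedrals and keeps the best-scoring one. A minimal sketch with RDKit's dihedral setter (here `score_fn` is any stand-in callable mapping a conformer to a score, since the trained scorer network is not reproduced):

```python
import numpy as np
from rdkit.Chem import rdMolTransforms

def best_dihedral(mol, atom_ids, score_fn, n_angles: int = 36):
    """Sweep one rotatable bond over evenly spaced angles and keep the best.

    atom_ids: 4-tuple (i, j, k, l) defining the dihedral around bond j-k.
    score_fn: callable(mol) -> float, standing in for the learned scorer.
    """
    conf = mol.GetConformer()
    best_angle, best_score = None, -np.inf
    for angle in np.linspace(-180.0, 180.0, n_angles, endpoint=False):
        rdMolTransforms.SetDihedralDeg(conf, *atom_ids, angle)
        s = score_fn(mol)
        if s > best_score:
            best_angle, best_score = angle, s
    rdMolTransforms.SetDihedralDeg(conf, *atom_ids, best_angle)
    return best_angle, best_score
```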
4 EXPERIMENTS

Dataset. We train SQUID with drug-like molecules (up to n_h = 27) from MOSES (Polykovskiy et al., 2020) using their train/test sets. L_f includes 100 fragments extracted from the dataset and 24 atom types. We remove molecules that contain excluded fragments. For remaining molecules, we generate a 3D conformer with RDKit, set acyclic bond distances to their empirical means, and fix acyclic bond angles using heuristic rules. While this 3D manipulation neglects distorted bonding geometries in real molecules, the global shapes are marginally impacted, and we may recover refined geometries without seriously altering the shape (App. A.8). The final dataset contains 1.3M 3D molecules, partitioned into 80/20 train/validation splits. The test set contains 147K 3D molecules.

In the following experiments, we only consider molecules M_S for which we can extract a small (3-6 atoms) 3D substructure M_0 containing a terminal atom, which we use to seed generation. In principle, M_0 could include larger structures from M_S, e.g., for scaffold-constrained tasks. Here, we use the smallest substructures to ensure that the shape-conditioned generation tasks are not trivial.

Shape-conditioned generation of chemically diverse molecules. "Scaffold-hopping"—designing molecules with high 3D shape similarity but novel 2D graph topology compared to known inhibitors—is pursued in LBDD to develop chemical lead series, optimize drug activity, or evade intellectual property restrictions (Hu et al., 2017). We imitate this task by evaluating SQUID's ability to generate molecules M′ with high sim_S(M′, M_S) but low sim_G(M′, M_S). Specifically, for 1000 molecules M_S with target shapes S in the test set, we use SQUID to generate 50 molecules per M_S. To generate chemically diverse species, we linearly interpolate between the posterior N(h_μ, h_σ) and the prior N(0, 1), sampling each h_var ∼ N((1 − λ)h_μ, (1 − λ)h_σ + λ1) using either λ = 0.3 or λ = 1.0 (prior); a code sketch of this interpolation follows at the end of this subsection. We then filter the generated molecules to have sim_G(M′, M_S) < 0.7, or < 0.3 to only evaluate molecules with substantial chemical differences compared to M_S. Of the filtered molecules, we randomly choose N_max samples and select the sample with highest sim_S(M′, M_S).

Figure 3A plots distributions of sim_S(M′, M_S) between the selected molecules and their respective target shapes, using different sampling (N_max = 1, 20) and filtering (sim_G(M′, M_S) < 0.7, 0.3) schemes. We compare against analogously sampling random 3D molecules from the training set. Overall, SQUID generates diverse 3D molecules that are quantitatively enriched in shape similarity compared to molecules sampled from the dataset, particularly for N_max = 20. Qualitatively, the molecules generated by SQUID have significantly more atoms which directly overlap with the atoms of M_S, even in cases where the computed shape similarity is comparable between SQUID-generated molecules and molecules sampled from the dataset (Fig. 3C). We quantitatively explore this observation in App. A.7. We also find that using λ = 0.3 yields greater sim_S(M′, M_S) than λ = 1.0, in part because using λ = 0.3 yields less chemically diverse molecules (Fig. 3B; Challenge 3). Even so, sampling N_max = 20 molecules from the prior with sim_G(M′, M_S) < 0.3 still yields more shape-similar molecules than sampling N_max = 500 molecules from the dataset. We emphasize that 99% of samples from the prior are novel, 95% are unique, and 100% are chemically valid (App. A.4). Moreover, 87% of generated structures do not have any steric clashes (App. A.4), indicating that SQUID generates realistic 3D geometries of the flexible drug-like molecules.

Ablating equivariance. SQUID's success in 3D shape-conditioned molecular generation is partly attributable to SQUID aligning the generated structures to the target shape in equivariant feature space (Eq. 7), which enables SQUID to generate 3D structures that fit the target shape without having to implicitly learn how to align two structures in R³ (Challenge 2). We explicitly validate this design choice by setting Z̃ = 0 in Eq. 7, which prevents the decoder from accessing the 3D orientation of M_S during training/generation. As expected, ablating SQUID's equivariance reduces the enrichment in shape similarity (relative to the dataset baseline) by as much as 33% (App. A.9).
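The posterior-prior interpolation h_var ∼ N((1 − λ)h_μ, (1 − λ)h_σ + λ1) referenced above can be written compactly; a PyTorch sketch (names are ours):

```python
import torch

def sample_interpolated(h_mu: torch.Tensor, h_sigma: torch.Tensor,
                        lam: float) -> torch.Tensor:
    """Sample h_var ~ N((1 - lam) * mu, (1 - lam) * sigma + lam * 1).

    lam = 0.0 recovers the posterior; lam = 1.0 recovers the prior N(0, 1).
    """
    eps = torch.randn_like(h_mu)
    return (1 - lam) * h_mu + eps * ((1 - lam) * h_sigma + lam)
```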
Shape-constrained molecular optimization. Scaffold-hopping is often goal-directed; e.g., aiming to reduce toxicity or improve bioactivity of a hit compound without altering its 3D shape. We mimic this shape-constrained MO setting by applying SQUID to optimize objectives from GuacaMol (Brown et al., 2019) while preserving high shape similarity (sim_S(M, M_S) ≥ 0.85) to various "hit" 3D molecules M_S from the test set. This task considerably differs from typical MO tasks, which optimize objectives without constraining 3D shape and without generating 3D structures.

To adapt SQUID to shape-constrained MO, we implement a genetic algorithm (App. A.5) that iteratively mutates the variational atom embeddings H_var of encoded seed molecules ("hits") M_S in order to generate 3D molecules M* with improved objective scores, but which still fit the shape of M_S. Table 1 reports the optimized top-1 scores across 6 objectives and 8 seed molecules M_S (per objective, sampled from the test set), constrained such that sim_S(M*, M_S) ≥ 0.85. We compare against the score of M_S, as well as the (shape-constrained) top-1 score obtained by virtual screening (VS) our training dataset (> 1M 3D molecules). Of the 8 seeds M_S per objective, 3 were selected from top-scoring molecules to serve as hypothetical "hits", 3 were selected from top-scoring large molecules (≥ 26 heavy atoms), and 2 were randomly selected from all large molecules.

In 40/48 tasks, SQUID improves the objective score of the seed M_S while maintaining sim_S(M*, M_S) ≥ 0.85. Qualitatively, SQUID optimizes the objectives through chemical alterations such as adding/deleting individual atoms, switching bonding patterns, or replacing entire substructures, all while generating 3D structures that fit the target shape (App. A.5). In 29/40 of successful cases, SQUID (limited to 31K samples) surpasses the baseline of virtual screening 1M molecules, demonstrating the ability to efficiently explore new shape-constrained chemical space.

5 CONCLUSION

We designed a novel 3D generative model, SQUID, to enable shape-conditioned exploration of chemically diverse molecular space. SQUID generates realistic 3D geometries of larger molecules that are chemically valid, and uniquely exploits equivariant operations to construct conformations that fit a target 3D shape. We envision our model, alongside future work, will advance creative shape-based drug design tasks such as 3D scaffold hopping and shape-constrained 3D ligand design.

REPRODUCIBILITY STATEMENT

We have taken care to facilitate the reproducibility of this work by detailing the precise architecture of SQUID throughout the main text; we also provide extensive details on training protocols, model parameters, and further evaluations in the Appendices. Our source code can be found at https://github.com/keiradams/SQUID. Beyond the model implementation, our code includes links to access our datasets, as well as scripts to process the training dataset, train the model, and evaluate our trained models across the shape-conditioned generation and shape-constrained optimization tasks described in this paper.

ETHICS STATEMENT

Advancing the shape-conditioned 3D generative modeling of drug-like molecules has the potential to accelerate pharmaceutical drug design, showing particular promise for drug discovery campaigns involving scaffold hopping, hit expansion, or the discovery of novel ligand analogues. However, such advancements could also be exploited for nefarious pharmaceutical research and harmful biological applications.

ACKNOWLEDGMENTS

This research was supported by the Office of Naval Research under grant number N00014-21-1-2195.
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 2141064. The authors acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing HPC resources that have contributed to the research results reported within this paper. The authors thank Rocío Mercado, Sam Goldman, Wenhao Gao, and Lagnajit Pattanaik for providing helpful suggestions regarding the content and presentation of this paper.

A APPENDIX

CONTENTS
A.1 Overview of definitions, terms, and notations
A.2 Scoring rotatable bonds and stereochemistry
A.3 Random examples of generated 3D molecules
A.4 Generation statistics
A.5 Shape-constrained molecular optimization
    A.5.1 Genetic algorithm
    A.5.2 Visualization of optimized molecules
A.6 Comparing simS to ROCS scoring function
A.7 Exploring different values of α in simS
A.8 Heuristic bonding geometries and their impact on global shape
A.9 Ablating equivariance
A.10 Auxiliary training losses
A.11 Overview of Vector Neurons (VN) operations
A.12 Graph neural networks
A.13 Fragment library
A.14 Model parameters
A.15 Additional training details
A.16 Additional generation details
A.17 Relaxation of generated geometries
A.18 Comparison to LigDream (Skalic et al., 2019)

A.1 OVERVIEW OF DEFINITIONS, TERMS, AND NOTATIONS

A.2 SCORING ROTATABLE BONDS AND STEREOCHEMISTRY

Recall that our goal is to train the scorer to predict max_{ψ′_{l+2}, ψ′_{l+3}, ...} sim*_S(𝒢′(ψ_foc), 𝒢_S) for various query dihedrals ψ′_{l+1} = ψ_foc. That is, we wish to predict the maximum possible shape similarity of the final molecule M′ to M_S when fixing ψ′_{l+1} = ψ_foc and optimally rotating all the yet-to-be-scored (or generated) rotatable bond dihedrals ψ′_{l+2}, ψ′_{l+3}, ... so as to maximize sim*_S(𝒢′(ψ_foc), 𝒢_S).

Training. We train the scorer independently from the graph generator (with a parallel architecture) using a mean squared error loss between the predicted scores ŝ^{(ψ_foc)}_{dec, scorer} = sigmoid(MLP(z^{(ψ_foc)}_{dec, scorer})) and the regression targets s^{(ψ_foc)} for N_s different query dihedrals ψ_foc ∈ [−π, π):

$$L_{\text{scorer}} = \frac{1}{N_s} \sum_{i=1}^{N_s} \left( s^{(\psi^{(i)}_{\text{foc}})} - \hat{s}^{(\psi^{(i)}_{\text{foc}})}_{\text{dec, scorer}} \right)^2 \tag{14}$$

Computing regression targets. When training with teacher forcing (M′_l = M_{S_l}, 𝒢′ = 𝒢_S), we compute regression targets
$$s^{(\psi_{\text{foc}})} \approx \max_{\psi_{l+2}, \psi_{l+3}, \dots} \mathrm{sim}^*_S(\mathcal{G}'(\psi_{\text{foc}}), \mathcal{G}_S)$$

by setting the focal dihedral ψ_{l+1} = ψ_foc, sampling N_ψ conformations of the "future" graph G_{T_foc} induced by the subtree T_foc whose root (sub)tree-node is the focus, and computing

$$s^{(\psi_{\text{foc}})} = \max_{i=0,\dots,N_\psi} \mathrm{sim}^*_S\!\left(\mathcal{G}^{(i)}_{T_{\text{foc}}}, \mathcal{G}_{S_{T_{\text{foc}}}};\ \alpha = 2.0\right).$$

Since we fix bonding geometries, we need only sample N_ψ sets of dihedrals of the rotatable bonds in 𝒢_{S_{T_foc}} to sample N_ψ conformers, making this conformer enumeration very fast. Note that rather than using α = 0.81 in these regression targets, we use α = 2.0 to make the scorer more sensitive to shape differences (App. A.7). When computing regression targets, we use N_ψ < 1800 and select 36 (evenly spaced) ψ_focus ∈ [−π, π) per rotatable bond. Figure 4 visualizes how regression targets are computed. App. A.15 contains further training specifics.

Scoring stereochemistry. At generation time, we also enumerate all possible stereoisomers of the focus (except cis/trans bonds) and score each stereoisomer separately, ultimately selecting the (stereoisomer, ψ_foc) pair that maximizes the predicted score. Figure 5 illustrates how we enumerate stereoisomers. Note that although we use the learned scoring function to score stereoisomerism at generation time, we do not explicitly train the scorer to score different stereoisomers.

Masking severe steric clashes. At generation time, we do not score any query dihedral ψ_foc that causes a severe steric clash (< 1 Å) with the existing partially generated structure (unless all query dihedrals cause a severe clash).

A.3 RANDOM EXAMPLES OF GENERATED 3D MOLECULES

Figures 6 and 7 show additional random examples of molecules generated by SQUID when sampling N_max = 1, 20 molecules with sim_G(M′, M_S) < 0.7 from the prior (λ = 1.0) or λ = 0.3 and selecting the sample with the highest sim_S(M′, M_S). Note that the visualized poses of the generated conformers are those which are directly generated by SQUID; the generated conformers have not been explicitly aligned to M_S (e.g., using ROCS). Even so, the conformers are (for the most part) aligned to M_S, since SQUID's equivariance enables the model to generate natively aligned structures.

It is apparent in these examples that using larger N_max yields molecules with significantly improved shape similarity to M_S, both qualitatively and quantitatively. This is in part caused by: 1) stochasticity in the variationally sampled atom embeddings H_var; 2) stochasticity in the input molecular point clouds, which are sampled from atom-centered isotropic Gaussians in R³; 3) sampling sets of variational atom embeddings that may not be entirely self-consistent (e.g., if we sample only 1 atom embedding that implicitly encodes a ring structure); and 4) the choice of τ_∅, the threshold for stopping local generation. While a small τ_∅ (we use τ_∅ = 0.01) helps prevent the model from adding too many atoms or fragments around a single focus, a small τ_∅ can also lead to early (local) stoppage, yielding molecules that do not completely fill the target shape. By sampling more molecules (using larger N_max), we have more chances to avoid these adverse random effects. Further work will attempt to improve the robustness of the encoding scheme and generation procedure in order to increase SQUID's overall sample efficiency.
A.4 GENERATION STATISTICS

Table 4 reports the percentage of molecules that are chemically valid, novel, and unique when sampling 50 molecules from the prior (λ = 1.0) for 1000 encoded molecules M_S (e.g., target shapes) from the test set, yielding a total of 50K generated molecules. We define chemical validity as passing RDKit sanitization. Since we directly generate the molecular graph and mask actions which violate chemical valency, 100% of generated molecules are valid. We define novelty as the percentage of generated molecules whose molecular graphs are not present in the training data. We define uniqueness as the percentage of generated molecular graphs (of the 50K total) that are only generated once. For novelty and uniqueness calculations, we consider different stereoisomers to have the same molecular graph. We also report the percentage of generated 3D structures that have an apparent steric clash, defined to be a non-bonded interatomic distance below 2 Å.

When sampling from the prior (λ = 1.0), the average internal chemical similarity of the generated molecules is 0.26 ± 0.04. When sampling with λ = 0.3, the average internal chemical similarity is 0.32 ± 0.07. We define internal chemical similarity to be the average pairwise chemical similarity (Tanimoto fingerprint similarity) between molecules that are generated for the same target shape.

Table 5 reports the graph reconstruction accuracy when sampling 3D molecules from the posterior (λ = 0.0), for 1000 target molecules M_S from the test set. We report the top-k graph reconstruction accuracy (ignoring stereochemical differences) when sampling k = 1 molecule per encoded M_S, and when sampling k = 20 molecules per encoded M_S. Since we have intentionally trained SQUID inside a shape-conditioned variational autoencoder framework in order to generate chemically diverse molecules with similar 3D shapes, the significance of graph reconstruction accuracy is debatable in our setting. However, it is worth noting that the top-1 reconstruction accuracy is 16.3%, while the top-20 reconstruction accuracy is much higher (57.2%). This large difference is likely attributable to both stochasticity in the variational atom embeddings and stochasticity in the input 3D point clouds.

A.5 SHAPE-CONSTRAINED MOLECULAR OPTIMIZATION

A.5.1 GENETIC ALGORITHM

We adapt SQUID to shape-constrained molecular optimization by implementing a genetic algorithm on the variational atom embeddings H_var. Algorithm 1 details the exact optimization procedure. In summary, given the seed molecule M_S with a target 3D shape and an initial substructure M_0 (which is contained by all generated molecules for a given M_S), we first generate an initial population of generated molecules M′ by repeatedly sampling H_var for various interpolation factors λ, mixing these H_var with the encoded shape features of M_S, and decoding new 3D molecules. We only add a generated molecule to the population if sim_S(M′, M_S) ≥ τ_S (we use τ_S = 0.75), so that the GA does not overly explore regions of chemical space that have no chance of satisfying the ultimate constraint sim_S(M′, M_S) ≥ 0.85. After generating the initial population, we iteratively 1) select the top-scoring samples in the population, 2) cross the top-scoring H_var in crossover events, 3) mutate the top and crossed H_var by adding random noise, and 4) generate new molecules M′ for each mutated H_var. The final optimized molecule M* is the top-scoring generated molecule that satisfies the shape-similarity constraint sim_S(M′, M_S) ≥ 0.85.
Algorithm 1: Genetic algorithm for shape-constrained optimization with SQUID

Given: M_S with n_H heavy atoms, M_0, objective oracle O
Params: τ_S, τ_G, N_e, N_T, N_c  ▷ Defaults: τ_S = 0.75, τ_G = 0.95, N_e = 20, N_T = 20, N_c = 10

H_μ, H_σ = Encode(M_S)  ▷ Encode target molecule
Initialize population P = {(M_S, H_μ)}
for λ ∈ [0.0, 0.2, 0.4, 0.6, 0.8, 1.0] do  ▷ Create initial population of (M′, H_var)
    for i = 1, ..., 100 do
        Sample noise ϵ ∈ R^(n_H×d_h) ∼ N(0, 1)
        H_var = (1 − λ)H_μ + ϵ ⊙ ((1 − λ)H_σ + λ1)  ▷ Mutate variational atom embeddings
        z, Z̃ = Encode(M_S; H_var)  ▷ Mix mutated chemical and shape information
        M′ = Decode(M_0, z, Z̃)  ▷ Generate mutated molecule M′
        Compute sim_S(M′, M_S)
        if sim_S(M′, M_S) ≥ τ_S then  ▷ Add M′ to population only if sim_S(M′, M_S) is high
            Add (M′, H_var) to P
        end if
    end for
end for
for e = 1, ..., N_e do  ▷ For each evolution
    Construct P_sorted by sorting P by O(M)  ▷ Sort population by objective score, high to low
    Initialize T_M = {}, T_Hvar = {}
    for (M, H_var) ∈ P_sorted do  ▷ Collect top-N_T scoring (M′, H_var)
        if (sim_G(M, M_T) < τ_G ∀ M_T ∈ T_M) and (|T_M| < N_T) then
            Add M to T_M; Add H_var to T_Hvar
        end if
    end for
    Initialize T_C = {}
    for c = 1, ..., N_c do  ▷ Add crossovers to set of top-scoring H_var
        Sample H_i ∈ T_Hvar, H_{j≠i} ∈ T_Hvar
        H_c = CROSS(H_i, H_j)  ▷ Cross by randomly swapping half of the atom embeddings
        Add H_c to T_C
    end for
    T_Hvar = T_Hvar ∪ T_C
    for H_var ∈ T_Hvar do  ▷ Add mutated offspring to population
        for λ ∈ [0.0, 0.2, 0.4, 0.6, 0.8, 1.0] do
            for i = 1, ..., 10 do
                Sample noise ϵ ∈ R^(n_H×d_h) ∼ N(0, 1)
                H_var = (1 − λ)H_var + ϵ  ▷ Mutate variational atom embeddings
                z, Z̃ = Encode(M_S; H_var)  ▷ Mix mutated chemical and shape information
                M′ = Decode(M_0, z, Z̃)  ▷ Generate mutated molecule M′
                Compute sim_S(M′, M_S)
                if sim_S(M′, M_S) ≥ τ_S then  ▷ Add M′ to P only if sim_S(M′, M_S) is high
                    Add (M′, H_var) to P
                end if
            end for
        end for
    end for
end for
return M* = argmax_{M′∈P} O(M′) subject to sim_S(M′, M_S) ≥ 0.85

A.5.2 VISUALIZATION OF OPTIMIZED MOLECULES

Figure 8 visualizes the structures of the SQUID-optimized molecules M* and their respective seed molecules M_S (e.g., the starting "hit" molecules with target shapes) for each of the optimization tasks which led to an improvement in the objective score. We also overlay the generated 3D conformations of M* on those of M_S, and report the objective scores for each M* and M_S.

A.6 COMPARING simS TO ROCS SCORING FUNCTION

Our shape similarity function described in Equation 1 closely approximates the shape-only scoring function employed by ROCS when α = 0.81. Figure 9 demonstrates the near-perfect correlation between our computed shape scores and those computed by ROCS for 50,000 shape comparisons, with a mean absolute error of 0.0016. Note that Equation 1 computes non-aligned shape similarity. We still employ ROCS to align the generated molecules M′ to the target molecule M_S before computing their (aligned) shape similarity in our experiments. However, we do not require explicit alignment when training SQUID; we do not use the commercial ROCS program during training.

A.7 EXPLORING DIFFERENT VALUES OF α IN simS

Our analysis of shape similarity thus far has used Equation 1 with α = 0.81 in order to recapitulate the shape similarity function used by ROCS, which is widely used in drug discovery. However, compared to randomly sampled molecules in the dataset, the molecules generated by SQUID qualitatively appear to do a significantly better job at fitting the target shape S on an atom-by-atom basis, even if the computed shape similarities (with α = 0.81) are comparable (see examples in Figure 3).
We quantify this observation by increasing the value of α when computing sim_S(M′, M_S; α) for generated molecules M′, as α is inversely related to the width of the isotropic 3D Gaussians used in the volume overlap calculations in Equation 1. Intuitively, increasing α will more heavily penalize sim_S if the atoms of M′ and M_S do not perfectly align. Figure 10 plots the mean sim_S(M, M_S; α) for the most shape-similar molecule M of N_max sampled molecules M′ for increasing values of α. Averages are calculated over 1000 target molecules M_S from the test set, and we only consider generated molecules for which sim_G(M′, M_S) < 0.7. Crucially, the gap between the mean sim_S(M, M_S; α) obtained by generating molecules with SQUID vs. randomly sampling molecules from the dataset significantly widens with increasing α. This effect is especially apparent when using SQUID with λ = 0.3 and N_max = 20, although it can be observed with other generation strategies as well. Hence, SQUID does a much better job at generating (still chemically diverse) molecules that have significant atom-to-atom overlap with M_S.

A.8 HEURISTIC BONDING GEOMETRIES AND THEIR IMPACT ON GLOBAL SHAPE

In all molecules (dataset and generated) considered in this work, we fix acyclic bond distances to their empirical averages and set acyclic bond angles to heuristic values based on hybridization rules in order to reduce the degrees of freedom in 3D coordinate generation. Here, we describe how we fix these bonding geometries and explore whether this local 3D structure manipulation significantly alters the global molecular shape.

Fixing bonding geometries. We fix acyclic bond distances by computing the mean bond distance between pairs of atom types across all the RDKit-generated conformers in our training set. After collecting these empirical mean values, we manually set each acyclic bond distance to its respective mean value for each conformer in our datasets. We set acyclic bond angles using simple hybridization rules. Specifically, sp3-hybridized atoms will have bond angles of 109.5°, sp2-hybridized atoms will have bond angles of 120°, and sp-hybridized atoms will have bond angles of 180°. We manually fix the acyclic bond angles to these heuristic values for all conformers in our datasets. We use RDKit to determine the hybridization states of each atom. During generation, occasionally the hybridization of certain atoms (N, O) may change once they are bonded to new neighbors. For instance, an sp3 nitrogen can become sp2 once bonded to an aromatic ring. We adjust bond angles on-the-fly in these edge cases. (A sketch of this geometry-fixing step follows below.)

Impact on global shape. Figure 11 plots the histogram of sim_S(M_fixed, M_relaxed) for 1000 test set conformers M_fixed whose bonding geometries have been fixed, and the original RDKit-generated conformers M_relaxed with relaxed (true) bonding geometries. In the vast majority of cases, fixing the bonding geometries negligibly impacts the global shape of the 3D molecule (sim_S(M_fixed, M_relaxed) ≈ 1). This is because the main factor influencing global molecular shape is rotatable bonds (e.g., flexible dihedrals), which are not altered by fixing bond distances and angles.
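The geometry-fixing step described above can be sketched with RDKit's coordinate-editing utilities. The lookup tables here are illustrative placeholders, not our empirical values, and the angle-setting loop is elided:

```python
from rdkit.Chem import rdMolTransforms

def fix_acyclic_geometry(mol, bond_lengths, angle_by_hybridization):
    """Set acyclic bond lengths to fixed values (sketch).

    bond_lengths: dict mapping sorted atom-symbol pairs, e.g.
        {("C", "N"): 1.47}, to mean distances in Angstroms (placeholder values).
    angle_by_hybridization: e.g. {"SP3": 109.5, "SP2": 120.0, "SP": 180.0}.
    """
    conf = mol.GetConformer()
    for bond in mol.GetBonds():
        if bond.IsInRing():
            continue  # only acyclic bond geometries are fixed
        i, j = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()
        key = tuple(sorted((mol.GetAtomWithIdx(i).GetSymbol(),
                            mol.GetAtomWithIdx(j).GetSymbol())))
        if key in bond_lengths:
            rdMolTransforms.SetBondLength(conf, i, j, bond_lengths[key])
    # Angles would be set analogously per (i, j, k) triple of bonded atoms via
    # rdMolTransforms.SetAngleDeg(conf, i, j, k, angle_by_hybridization[...]).
    return mol
```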
Recovering refined bonding geometries. Even though fixing bond distances and angles only marginally impacts molecular shape, we still may wish to recover refined bonding geometries of the generated 3D molecules without altering the generated 3D shape. We can accomplish this (to a first approximation) for generated molecules by creating a geometrically relaxed conformation of the generated molecular graph with RDKit, and then manually setting the dihedrals of the rotatable bonds in the relaxed conformer to match the corresponding dihedrals in the generated conformers. Importantly, if we perform this relaxation procedure for both the dataset molecules and the SQUID-generated molecules, the (relaxed) generated molecules still have significantly enriched shape similarity to the (relaxed) target shape compared to (relaxed) random molecules from the dataset (Fig. 12).

A.9 ABLATING EQUIVARIANCE

SQUID aligns the equivariant representations of the encoded target shape and the partially generated structures in order to generate 3D conformations that natively fit the target shape, without having to implicitly learn SE(3)-alignments (Challenge 2). We achieve this in Equation 7, where we mix the equivariant representations of M_S and the partially generated structure M′^{(c−1)}_l. To empirically motivate this design choice, we ablate the equivariant alignment by setting Z̃ = 0 in Eq. 7. We denote this ablated model as SQUID-NoEqui. Note that because we still pass the unablated invariant features z to the decoder (Eq. 8), SQUID-NoEqui is still conditioned on the shape of M_S — the model simply no longer has access to any explicit information about the relative spatial orientation of M′^{(c−1)}_l to M_S (and thus must learn this spatial relationship from scratch).

As expected, ablating SQUID's equivariance significantly reduces SQUID's ability to generate chemically diverse molecules that fit the target shape. Figure 13 plots the distributions of sim_S(M′, M_S) for the best of N_max generated molecules with sim_G(M′, M_S) < 0.7 or 0.3 when using SQUID or SQUID-NoEqui. Crucially, the mean shape similarity when sampling with (λ = 1.0, N_max = 20, sim_G(M′, M_S) < 0.7) decreases from 0.828 (SQUID) to 0.805 (SQUID-NoEqui). When sampling with (λ = 0.3, N_max = 20, sim_G(M′, M_S) < 0.7), the mean shape similarity also decreases from 0.879 (SQUID) to 0.839 (SQUID-NoEqui). Relative to the mean shape similarity of 0.758 achieved by sampling random molecules from the dataset (N_max = 20, sim_G(M′, M_S) < 0.7), this corresponds to a substantial 33% reduction in the shape-enrichment of SQUID-generated molecules.

Interestingly, sampling (λ = 1.0, N_max = 20, sim_G(M′, M_S) < 0.7) with SQUID-NoEqui still yields shape-enriched molecules compared to analogously sampling random molecules from the dataset (mean shape similarity of 0.805 vs. 0.758). This is because even without the equivariant feature alignment, SQUID-NoEqui still conditions molecular generation on the (invariant) encoding of the target shape S, and hence biases generation towards molecules which better fit the target shape (after alignment with ROCS).

A.10 AUXILIARY TRAINING LOSSES

We employ two auxiliary losses when training the graph generator in order to encourage the generated graphs to better fit the encoded target shape. The first auxiliary loss penalizes the graph generator if it adds an incorrect atom/fragment to the focus that is of significantly different size than the correct (ground truth) atom/fragment. We first compute a matrix ΔV_f ∈ R_+^{|L_f|×|L_f|} containing the (pairwise) volume differences between all atoms/fragments in the library L_f:

$$\Delta V^{(i,j)}_f = |v_{f_i} - v_{f_j}|, \tag{15}$$

where v_{f_i} is the volume of atom/fragment f_i ∈ L_f (computed with RDKit).
We then compute the auxiliary loss $\mathcal{L}_{\text{next-shape}}$ as:

$$\mathcal{L}_{\text{next-shape}} = \frac{1}{|L_f|}\left(\mathbf{p}_{\text{next}} \cdot \Delta V_f^{(g)}\right) \tag{16}$$

where $g$ is the index of the correct (ground-truth) next atom/fragment $f_{\text{next, true}}$, $\Delta V_f^{(g)}$ is the $g$th row of $\Delta V_f$, and $\mathbf{p}_{\text{next}}$ are the predicted probabilities over the next atom/fragment types to be connected to the focus (see Eq. 10).

The second auxiliary loss penalizes the graph generator if it prematurely stops (local) generation, with larger penalties if the premature stop would result in larger portions of the (ground-truth) graph not being generated. When predicting (local) stop tokens during graph generation (with teacher forcing), we compute the number of atoms in the subgraph induced by the subtree whose root tree-node is the next atom/fragment to be added to the focus (in the current generation sequence). We then multiply the predicted probability for the local stop token by this number of "future" atoms that would not be generated if a premature stop token were generated. Hence, if the correct action is indeed to stop generation around the focus, the penalty will be zero. However, if the correct action is to add a large fragment to the current focus but the generator predicts a stop token, the penalty will be large. Formally, we compute:

$$\mathcal{L}_{\varnothing\text{-shape}} = \begin{cases} p_\varnothing \, |G_{S_{T_{\text{next}}}}| & \text{if } p_{\varnothing,\text{true}} = 0 \\ 0 & \text{otherwise} \end{cases} \tag{17}$$

where $p_{\varnothing,\text{true}}$ is the ground-truth action for local stopping ($p_{\varnothing,\text{true}} = 0$ indicates that the correct action is to not stop local generation), and $G_{S_{T_{\text{next}}}}$ is the subgraph induced by the subtree whose root node is the next atom/fragment (to be generated) in the ground-truth molecular graph.

A.11 OVERVIEW OF VECTOR NEURONS (VN) OPERATIONS

In this work, we use Deng et al. (2021)'s VN-DGCNN to encode molecular point clouds into equivariant shape features. We also employ their general VN operations (VN-MLP, VN-Inv) during shape and chemical feature mixing. We refer readers to Deng et al. (2021) for a detailed description of these equivariant operations and models. Here, we briefly summarize some relevant VN operations for the reader's convenience.

VN-MLP. Vector neurons (VN) lift scalar neuron features to vector features in R³. Hence, instead of having features x ∈ R^q, we have vector features X̃ ∈ R^{q×3}. While linear transformations are naturally equivariant to global rotations, since W(X̃R) = (WX̃)R for a rotation matrix R ∈ R^{3×3}, Deng et al. (2021) construct a set of non-linear equivariant operations f̃ such that f̃(X̃R) = f̃(X̃)R, thereby enabling natively equivariant network design. VN-MLPs combine linear transformations with equivariant activations. In this work, we use VN-LeakyReLU, which Deng et al. (2021) define as:

$$\text{VN-LeakyReLU}(\tilde{X}; \alpha) = \alpha\tilde{X} + (1 - \alpha)\,\text{VN-ReLU}(\tilde{X}) \tag{18}$$

where

$$\text{VN-ReLU}(\tilde{x}) = \begin{cases} \tilde{x} & \text{if } \tilde{x} \cdot \frac{\tilde{k}}{\|\tilde{k}\|} \geq 0 \\ \tilde{x} - \left(\tilde{x} \cdot \frac{\tilde{k}}{\|\tilde{k}\|}\right)\frac{\tilde{k}}{\|\tilde{k}\|} & \text{otherwise} \end{cases} \quad \forall\, \tilde{x} \in \tilde{X} \tag{19}$$

where $\tilde{k} = U\tilde{X}$ for a learnable weight matrix $U \in \mathbb{R}^{1 \times q}$, and $\tilde{x} \in \mathbb{R}^3$. By composing series of linear transformations and equivariant activations, VN-MLPs map X̃ ∈ R^{q×3} to X̃′ ∈ R^{q′×3} such that X̃′R = VN-MLP(X̃R).

VN-Inv. Deng et al. (2021) also define learnable operations that map equivariant features X̃ ∈ R^{q×3} to invariant features x ∈ R^{3q}. In general, VN-Inv constructs invariant features by multiplying equivariant features X̃ with other equivariant features Ỹ ∈ R^{3×3}:

$$\hat{X} = \tilde{X}\tilde{Y}^\top \tag{20}$$

The invariant features X̂ ∈ R^{q×3} can then be reshaped into standard invariant features x ∈ R^{3q}.
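Before describing our modification, the following is a minimal PyTorch sketch of the VN-LeakyReLU nonlinearity above (Eqs. 18-19). The feature layout, module name, and the small numerical stabilizer are our assumptions; this is a transcription of the equations, not Deng et al. (2021)'s released code.

```python
# Sketch of VN-LeakyReLU acting on equivariant features X of shape (..., q, 3).
import torch
import torch.nn as nn

class VNLeakyReLU(nn.Module):
    def __init__(self, q, alpha=0.2):
        super().__init__()
        self.U = nn.Linear(q, 1, bias=False)  # learns the direction k = U X
        self.alpha = alpha

    def forward(self, X):
        # k: one learned 3D direction per feature set, shape (..., 1, 3)
        k = self.U(X.transpose(-1, -2)).transpose(-1, -2)
        k_hat = k / (k.norm(dim=-1, keepdim=True) + 1e-8)
        # Component of each vector feature along k_hat, shape (..., q, 1)
        dot = (X * k_hat).sum(dim=-1, keepdim=True)
        # Eq. 19: keep vectors with non-negative projection, else project out
        x_relu = torch.where(dot >= 0, X, X - dot * k_hat)
        # Eq. 18: leaky mixture of identity and VN-ReLU
        return self.alpha * X + (1 - self.alpha) * x_relu
```

Because both branches act by channel-mixing and projections along an equivariant direction, rotating the last axis of X rotates the output identically, which is exactly the equivariance property f̃(X̃R) = f̃(X̃)R.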
In our work, we slightly modify Deng et al. (2021)'s original formulation. Given a set of equivariant features X̃ = {X̃^(i)} ∈ R^{n×q×3}, we define a VN-Inv as:

$$\text{VN-Inv}(\tilde{X}) = X \tag{21}$$

where X = {x^(i)} ∈ R^{n×6q} and:

$$\mathbf{x}^{(i)} = \text{Flatten}(\tilde{V}^{(i)}\tilde{T}_i^\top) \tag{22}$$

$$\tilde{V}^{(i)} = \begin{cases} (\tilde{X}^{(i)}, \textstyle\sum_i \tilde{X}^{(i)}) & \text{if } n > 1 \\ \tilde{X}^{(i)} & \text{otherwise} \end{cases} \tag{23}$$

$$\tilde{T}_i = \text{VN-MLP}(\tilde{V}^{(i)}) \tag{24}$$

where $\tilde{T}_i \in \mathbb{R}^{3 \times 3}$, and $\tilde{V}^{(i)} \in \mathbb{R}^{2q \times 3}$ ($n > 1$) or $\tilde{V}^{(i)} \in \mathbb{R}^{q \times 3}$ ($n = 1$).

VN-DGCNN. Deng et al. (2021) introduce VN-DGCNN as an SO(3)-equivariant version of the Dynamic Graph Convolutional Neural Network (Wang et al., 2019). Given a point cloud P ∈ R^{n×3}, VN-DGCNN uses (dynamic) equivariant edge convolutions to update equivariant per-point features:

$$\tilde{E}_{nm}^{(t+1)} = \text{VN-LeakyReLU}^{(t)}\!\left(\Theta^{(t)}(\tilde{X}_m^{(t)} - \tilde{X}_n^{(t)}) + \Phi^{(t)}\tilde{X}_n^{(t)}\right) \tag{25}$$

$$\tilde{X}_n^{(t+1)} = \sum_{m \in \text{KNN}_f(n)} \tilde{E}_{nm}^{(t+1)} \tag{26}$$

where KNN_f(n) are the k-nearest neighbors of point n in feature space, Θ^(t) and Φ^(t) are weight matrices, and X̃_n^(t) ∈ R^{q×3} are the per-point equivariant features.

A.12 GRAPH NEURAL NETWORKS

In this work, we employ graph neural networks (GNNs) to encode:

• each atom/fragment in the library L_f
• the target molecule M_S
• each partial molecular structure M′_l^(c) during sequential graph generation
• the query structures M′_{l+1}^(ψ_foc) when scoring rotatable bonds

Our GNNs are loosely based upon a simple version of the EGNN (Satorras et al., 2022b); a sketch of one such layer follows the equations below. Given a molecular graph G with atoms as nodes and bonds as edges, we use graph convolutional layers defined by the following:

$$\mathbf{m}_{ij}^{t+1} = \phi_m^t\!\left(\mathbf{h}_i^t, \mathbf{h}_j^t, \|\mathbf{r}_i - \mathbf{r}_j\|^2, \mathbf{m}_{ij}^t\right) \tag{27}$$

$$\mathbf{m}_i^{t+1} = \sum_{j \in N(i)} \mathbf{m}_{ij}^{t+1} \tag{28}$$

$$\mathbf{h}_i^{(t=1)} = \phi_h^{(0)}(\mathbf{h}_i^0, \mathbf{m}_i^{(t=1)}) \tag{29}$$

$$\mathbf{h}_i^{t+1} = \phi_h^t(\mathbf{h}_i^t, \mathbf{m}_i^{t+1}) + \mathbf{h}_i^t \quad (t > 0) \tag{30}$$

where h_i^t are the learned atom embeddings at each GNN layer, m_ij^t are learned (directed) messages, r_i ∈ R³ are the coordinates of atom i, N(i) is the set of 1-hop bonded neighbors of atom i, and each φ_m^t, φ_h^t is an MLP. Note that h_i^0 are the initial atom features, and m_ij^0 are the initial bond features for the bond between atoms i and j. In general, m_ij^t ≠ m_ji^t for t > 0, but here m_ij^0 = m_ji^0. Note that since we only aggregate messages from directly bonded neighbors, ||r_i − r_j|| only encodes bond distances and does not encode any information about specific 3D conformations. Hence, our GNNs effectively only encode 2D chemical identity, as opposed to 3D shape.
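As referenced above, the following is a minimal PyTorch sketch of one such message-passing layer (Eqs. 27-30). The MLP depths, activation choice, and the assumption that the first layer's inputs are already embedded to dimension d_h are ours, not the released implementation.

```python
import torch
import torch.nn as nn

class EGNNStyleLayer(nn.Module):
    """One message-passing layer following Eqs. 27-30 (dims are assumptions)."""
    def __init__(self, d_h, d_m):
        super().__init__()
        self.phi_m = nn.Sequential(nn.Linear(2 * d_h + 1 + d_m, d_m),
                                   nn.SiLU(), nn.Linear(d_m, d_m))
        self.phi_h = nn.Sequential(nn.Linear(d_h + d_m, d_h),
                                   nn.SiLU(), nn.Linear(d_h, d_h))

    def forward(self, h, m, r, edge_index, first_layer=False):
        # h: (n, d_h) atom embeddings; m: (e, d_m) directed messages;
        # r: (n, 3) coordinates; edge_index: (2, e) (receiver i, sender j).
        i, j = edge_index
        dist2 = ((r[i] - r[j]) ** 2).sum(-1, keepdim=True)   # bond lengths only
        m_new = self.phi_m(torch.cat([h[i], h[j], dist2, m], dim=-1))  # Eq. 27
        agg = torch.zeros(h.size(0), m_new.size(1), device=h.device)
        agg = agg.index_add(0, i, m_new)                                # Eq. 28
        h_new = self.phi_h(torch.cat([h, agg], dim=-1))
        # Eq. 29 (no residual at the first layer) / Eq. 30 (residual after)
        return (h_new if first_layer else h_new + h), m_new
```

Since only bonded distances enter Eq. 27, stacking these layers indeed yields embeddings of 2D chemical identity, as noted above.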
A.13 FRAGMENT LIBRARY

Our atom/fragment library L_f includes 100 distinct fragments (Fig. 14) and 24 unique atom types. The 100 fragments were selected as the top-100 most frequently occurring fragments in our training set. In this work, we specify fragments as ring-containing substructures that do not contain any acyclic single bonds; in principle, however, fragments could be any (valid) chemical substructure. Note that we use only one (geometrically optimized) conformation per fragment, which is assumed to be rigid. Hence, in its current implementation, SQUID does not consider different ring conformations (e.g., boat vs. chair conformations of cyclohexane).

A.14 MODEL PARAMETERS

Parameter sharing. For both the graph generator and the rotatable bond scorer, the (variational) molecule encoder (in the Encoder, Fig. 2) and the partial molecule encoder (in the Decoder, Fig. 2) share the same fragment encoder (L_f-GNN), which is trained end-to-end with the rest of the model. Apart from the L_f-GNN, these encoders do not share any learnable parameters, despite having parallel architectures. The graph generator and the rotatable bond scorer are completely independent and are trained separately.

Hyperparameters. Tables 6 and 7 tabulate the set of hyperparameters used for SQUID across all the experiments conducted in this paper. Table 8 summarizes training and generation parameters, but we refer the reader to App. A.15 and A.16 for more detailed discussion of training and generation protocols. Because of the large hyperparameter search space and long training times, we did not perform extensive hyperparameter optimizations. We manually tuned the learning rates and schedulers to maintain training stability, and we maxed out batch sizes given memory constraints. We set β∅-shape = 10 and βnext-shape = 10 to make the magnitudes of L∅-shape and Lnext-shape comparable to the other loss components for graph generation. We slowly increase βKL over the course of training from 10⁻⁵ to a maximum of 10⁻¹, which we found to provide a reasonable balance between LKL and graph reconstruction.

A.15 ADDITIONAL TRAINING DETAILS

Dataset. We use molecules from MOSES (Polykovskiy et al., 2020) to train, validate, and test SQUID. Starting from the train/test sets provided by MOSES, we first generate an RDKit conformer for each molecule and remove any molecules for which we cannot generate a conformer. Conformers are initially created with the ETKDG algorithm in RDKit, and then separately optimized for 200 iterations with the MMFF force field. We then fix the acyclic bond distances and bond angles for each conformer (App. A.8). Using the molecules from MOSES's train set, we then create the fragment library by extracting the top-100 most frequently occurring fragments (ring-containing substructures without acyclic bonds). We separately generate a 3D conformer for each distinct fragment, optimizing the fragment structures with MMFF for 1000 steps. Given these 100 fragments, we then remove all molecules from the train and test sets containing non-included fragments. From the filtered training set, we then extract 24 unique atom types, which we add to the atom/fragment library L_f. We remove any molecule in the test set that contains an atom type not included in these 24. Finally, we randomly split the (filtered) training set into separate training/validation splits. The training split contains 1,058,352 molecules, the validation split contains 264,589 molecules, and the test set contains 146,883 molecules. Each molecule has one conformer.

Collecting training data for graph generation and scoring. We individually supervise each step of autoregressive graph generation and use teacher forcing. We collect the ground-truth generation actions by representing each molecular graph as a tree whose root tree-node is either a terminal atom or a terminal fragment in the graph. A "terminal" atom is bonded to only one neighboring atom; a "terminal" fragment has only one acyclic (rotatable) bond to a neighboring atom/fragment. Starting from this terminal atom/fragment, we construct the molecule according to a breadth-first-search traversal of the generation tree (see Fig. 2; a schematic sketch follows below), breaking ties using RDKit's canonical atom ordering. We augment the data by enumerating all generation trees starting from each possible terminal atom/fragment in the molecule. For each rotatable bond in the generation trees, we collect regression targets for training the scorer by following the procedure outlined in App. A.2.
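A schematic sketch of this breadth-first ordering of ground-truth actions is shown below. The tree representation and action tuples are simplified stand-ins for the real bookkeeping (attachment sites, bond orders, and canonical tie-breaking are omitted).

```python
# Hypothetical sketch of collecting BFS-ordered generation actions from a
# molecule's tree representation (App. A.15); not SQUID's actual data pipeline.
from collections import deque

def bfs_generation_actions(tree, root):
    """tree: dict mapping tree-node -> list of (child_node, bond) links."""
    actions, queue, visited = [], deque([root]), {root}
    while queue:
        focus = queue.popleft()                    # next atom/fragment in focus
        for child, bond in tree.get(focus, []):
            if child in visited:
                continue
            actions.append(("attach", focus, child, bond))  # supervised step
            visited.add(child)
            queue.append(child)
        actions.append(("stop", focus))            # local stop-token target
    return actions
```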
Batching. When training the graph generator, we batch together graph-generative actions that are part of the same generation sequence (e.g., generating G′_l^(c) from G′_l^(c−1)). Otherwise, generation sequences are treated independently. When training the rotatable bond scorer, we batch together different query dihedrals ψ_foc of the same focal bond. Rather than scoring all 36 rotation angles in the same batch, we include the ground-truth rotation angle and randomly sample 9 of the 35 others to include in the batch. Within each batch (for both graph generation and scoring), all the encoded molecules M_S are constrained to have the same number of atoms, and all the partial molecular structures G′_l^(c) are constrained to have the same number of atoms. This restriction on batch composition is purely for convenience: the public implementation of VN-DGCNN from Deng et al. (2021) is designed to train on point clouds with the same number of points, and we construct point clouds by sampling a fixed n_p points for each atom.

Training setup. We train the graph generator and the rotatable bond scorer separately. For the graph generator, we train for 2M iterations (batches), with a maximum batch size of 400 (generation sequences). We use the Adam optimizer with default parameters and an initial learning rate of 2.5 × 10⁻⁴, which we exponentially decay by a factor of 0.9 every 50K iterations to a minimum of 5 × 10⁻⁶. We weight the auxiliary losses by βnext-shape = 10.0 and β∅-shape = 10.0. We log-linearly increase βKL from 10⁻⁵ to 10⁻¹ over the first 1M iterations, after which it remains constant at 10⁻¹. For each generation sequence, we randomize the rotation angle of the bond connecting the focus to the rest of the partial graph (e.g., the focal dihedral), as this dihedral has yet to be scored. In order to make the graph generator more robust to imperfect rotatable bond scoring at generation time, during training we perturb the dihedrals of each rotatable bond in the partially generated structure M′_l by δψ ∼ N(μ = 0°, σ = 15°) while fixing the coordinates of the focus. For the rotatable bond scorer, we train for 2M iterations (batches), with a maximum batch size of 32 (focal bonds).
1. What is the main contribution of the paper regarding generating sets of molecules that match an input 3D "shape"?
2. What are the strengths and weaknesses of the proposed method, particularly in its technical aspects and presentation?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or omissions in the paper regarding the restriction of chemical space and the combinatorial chemical landscape?
5. Can the method learn to enter rings and keep generating ring atoms until they close, or increase the size of fragments dramatically to include all possible single, double, and triple rings?
6. How would the time for generation of a molecule scale if one increased the size of the fragment library by a factor of 2, 5, 10, 100?
7. Does the model drop the dependence on the initial fragment by learning the distribution of such initial fragments in the training set and learning to condition their initial placements on the shape?
8. Is there a missing flexibility in the current wave of neural-network-based acceleration of chemoinformatics tasks, and are there any generic ways to go around them to help build flexible building blocks?
Summary Of The Paper
This paper describes a method that can generate sets of molecules that approximately match an input 3D "shape". The method employs an autoregressive encoder-decoder architecture that learns a representation of the input shape (VN-DGCNN) and of the partial graph, and decodes an updated molecular graph with an additional atom or fragment at each step (inspired by MoLeR). The method uses a number of useful heuristics for bonds and angles to limit learning to the rotatable dihedral angles of the small-molecule graphs. The authors demonstrate their model on a number of shape-conditioned tasks.

Strengths And Weaknesses
The paper is a solid contribution to a reasonably recent area of exploration, the conditional generation of small molecules with 3D coordinates. The technical aspects of the method are well thought out and described in a clear fashion, though I list a couple of remaining questions and omissions below. The authors demonstrate their method on a reasonable number of synthetic examples and document its level of success.

One weakness in the presentation of this paper is the lack of a clear explanation of the rationale and implications of restricting chemical space to the combinatorial chemical landscape defined by the chosen set of 100 fragments and their non-ring linkers. I understand the need to demonstrate a method quickly, and perhaps this was the main motivation; however, while reading this work I kept wondering whether the cost of a larger search tree perhaps becomes prohibitive and renders the method impractical. If that's not the case, would it be possible to simply generate molecules atom-by-atom, similar to previous 3D generation models, and let the model learn to enter rings and keep generating ring atoms until they close, and so on? Alternatively, could one increase the size of fragments dramatically to include all possible single, double, and triple rings in any of the currently available patents? Would such a change degrade the reconstruction quality? Approximately how would the time for generation of a molecule (currently listed as 2--3 seconds of walltime) scale if one increased the size of the fragment library by a factor of 2, 5, 10, 100?

In terms of coordinates, the model is only learning to generate the rotatable dihedrals, which sounds like a rational way to evaluate a shape-matching model. However, I wonder how much strain these dihedrals undergo in the final proposed conformation. Instead of using the procedure to generate Figure 12, could the authors find the local minimum of the MMFF (the force field they used for generating the input conformations) and show the statistics of the shape similarity for these relaxed configurations?

Could the model drop the dependence on the initial fragment by learning the distribution of such initial fragments in the training set and learning to condition their initial placements on the shape? I'd imagine that this problem is much easier than learning how to construct a reasonable search tree, and if it is harder than I imagine, then perhaps the authors could use an existing conditional pose generator, a deep (or shallow) docking tool, or something else that is efficient as a starting point. In that way, there would be no need to disclose any information about the test molecule that generated the shape for the final shape-matching task.
A confusing mistake in the paper is the improper description of the work in the Pocket2Mol paper, which also conditions on 3D inputs (the pocket) and generates complete molecules (not only fragments). I strongly recommend that the authors improve their otherwise clear review of the related literature at the time of their submission, and amend the relevant clause in the second sentence of the abstract.

Finally, a question that doesn't need to be addressed in the paper; however, I would appreciate any thoughts that the authors might have on the subject below, and perhaps their answer could improve the paper after all. The method that the authors use to align the molecules, ROCS, also has a GPU implementation that is reasonably scalable and efficient, and could thus render other, less specialized algorithms competitive (e.g., a traditional genetic algorithm that mixes graphs to iteratively optimize the shape scores in an ensemble, or a naive iterative trainable 3D molecule generator that scores and filters partial molecule graphs). Additionally, ROCS can also align and score molecules using both shape and color (chemical features); I expect that a generalization that addresses color in addition to shape would be relevant in a practical drug discovery setting. It would be trivial to modify an ensemble optimization algorithm to score via shape+color ROCS, whereas typical ML methods would require training from scratch. Is there a missing flexibility in the current wave of neural-network-based acceleration of chemoinformatics tasks, and are there any generic ways to go around it to help build flexible building blocks (similar to what VNs did for building SO(3)-equivariant networks, but now for small-molecule drug discovery)?

Clarity, Quality, Novelty And Reproducibility
The quality, clarity, and originality of this work is high. The authors have provided code and links to the data to ensure reproducibility (though I haven't tested that assumption myself).
ICLR
Title
Equivariant Shape-Conditioned Generation of 3D Molecules for Ligand-Based Drug Design

Abstract
Shape-based virtual screening is widely used in ligand-based drug design to search chemical libraries for molecules with similar 3D shapes yet novel 2D graph structures compared to known ligands. 3D deep generative models can potentially automate this exploration of shape-conditioned 3D chemical space; however, no existing models can reliably generate geometrically realistic drug-like molecules in conformations with a specific shape. We introduce a new multimodal 3D generative model that enables shape-conditioned 3D molecular design by equivariantly encoding molecular shape and variationally encoding chemical identity. We ensure local geometric and chemical validity of generated molecules by using autoregressive fragment-based generation with heuristic bonding geometries, allowing the model to prioritize the scoring of rotatable bonds to best align the growing conformation to the target shape. We evaluate our 3D generative model in tasks relevant to drug design, including shape-conditioned generation of chemically diverse molecular structures and shape-constrained molecular property optimization, demonstrating its utility over virtual screening of enumerated libraries.

1 INTRODUCTION

Generative models for de novo molecular generation have revolutionized computer-aided drug design (CADD) by enabling efficient exploration of chemical space, goal-directed molecular optimization (MO), and automated creation of virtual chemical libraries (Segler et al., 2018; Meyers et al., 2021; Huang et al., 2021; Wang et al., 2022; Du et al., 2022; Bilodeau et al., 2022). Recently, several 3D generative models have been proposed to directly generate low-energy or (bio)active molecular conformations using 3D convolutional networks (CNNs) (Ragoza et al., 2020), reinforcement learning (RL) (Simm et al., 2020a;b), autoregressive generators (Gebauer et al., 2022; Luo & Ji, 2022), or diffusion models (Hoogeboom et al., 2022). These methods have especially enjoyed accelerated development for structure-based drug design (SBDD), where models are trained to generate drug-like molecules in favorable binding poses inside an explicit protein pocket (Drotár et al., 2021; Luo et al., 2022; Liu et al., 2022; Ragoza et al., 2022). However, SBDD requires atomically-resolved structures of a protein target, assumes knowledge of binding sites, and often ignores dynamic pocket flexibility, rendering these methods less effective in many CADD settings.

Ligand-based drug design (LBDD) does not assume knowledge of protein structure. Instead, molecules are compared against previously identified "actives" on the basis of 3D pharmacophore or 3D shape similarity, under the principle that molecules with similar structures should share similar activity (Vázquez et al., 2020; Cleves & Jain, 2020). In particular, ROCS (Rapid Overlay of Chemical Structures) is commonly used as a shape-based virtual screening tool to identify molecules with similar shapes to a reference inhibitor, and has shown promising results for scaffold-hopping tasks (Rush et al., 2005; Hawkins et al., 2007; Nicholls et al., 2010). However, virtual screening relies on enumeration of chemical libraries, fundamentally restricting its ability to probe new chemical space.
Here, we consider the novel task of generating chemically diverse 3D molecular structures conditioned on a molecular shape, thereby facilitating the shape-conditioned exploration of chemical space without the limitations of virtual screening (Fig. 1). Importantly, shape-conditioned 3D molecular generation presents unique challenges not encountered in typical 2D generative models:

Challenge 1. 3D shape-based LBDD involves pairwise comparisons between two arbitrary conformations of arbitrary molecules. Whereas traditional property-conditioned generative models or MO algorithms shift learned data distributions to optimize a single scalar property, a shape-conditioned generative model must generate molecules adopting any reasonable shape encoded by the model.

Challenge 2. Shape similarity metrics that compute volume overlaps between two molecules (e.g., ROCS) require the molecules to be aligned in 3D space. Unlike 2D similarity, the computed shape similarity between two molecules will change if one of the structures is rotated. This subtly impacts the learning problem: if the model encodes the target 3D shape into an SE(3)-invariant representation, the model must learn how the generated molecule would fit the target shape under the implicit action of an SE(3)-alignment. Alternatively, if the model can natively generate an aligned structure, then the model can more easily learn to construct molecules that fit the target shape.

Challenge 3. A molecule's 2D graph topology and 3D shape are highly dependent; small changes in the graph can strikingly alter the shapes accessible to a molecule. It is thus unlikely that a generative model will reliably generate chemically diverse molecules with similar shapes to an encoded target without 1) simultaneous graph and coordinate generation; and 2) explicit shape-conditioning.

Challenge 4. The distribution of shapes a drug-like molecule can adopt is chiefly influenced by rotatable bonds, the foremost source of molecular flexibility. However, existing 3D generative models are mainly developed using tiny molecules (e.g., fewer than 10 heavy atoms), and cannot generate flexible drug-like molecules while maintaining chemical validity (satisfying valencies), geometric validity (non-distorted bond distances and angles; no steric clashes), and chemical diversity.

To surmount these challenges, we design a new generative model, SQUID¹, to enable the shape-conditioned generation of chemically diverse molecules in 3D. Our contributions are as follows:

• Given a 3D molecule with a target shape, we use equivariant point cloud networks to encode the shape into (rotationally) equivariant features. We then use graph neural networks (GNNs) to variationally encode chemical identity into invariant features. By mixing chemical features with equivariant shape features, we can generate diverse molecules in aligned poses that fit the shape.
• We develop a sequential fragment-based 3D generation procedure that fixes local bond lengths and angles to prioritize the scoring of rotatable bonds. By massively simplifying 3D coordinate generation, we generate drug-like molecules while maintaining chemical and geometric validity.
• We design a rotatable bond scoring network that learns how local bond rotations affect global shape, enabling our decoder to generate 3D conformations that best fit the target shape.
We evaluate the utility of SQUID over virtual screening in shape-conditioned 3D molecular design tasks that mimic ligand-based drug design objectives, including shape-conditioned generation of diverse 3D structures and shape-constrained molecular optimization. To inspire further research, we note that our tasks could also be approached with a hypothetical 3D generative model that disentangles latent variables controlling 2D chemical identity and 3D shape, thus enabling zero-shot generation of topologically distinct molecules with similar shapes to any encoded target.

¹SQUID: Shape-Conditioned Equivariant Generator for Drug-Like Molecules

2 RELATED WORK

Fragment-based molecular generation. Seminal works in autoregressive molecular generation applied language models to generate 1D SMILES strings character-by-character (Gómez-Bombarelli et al., 2018; Segler et al., 2018), or GNNs to generate 2D molecular graphs atom-by-atom (Liu et al., 2018; Simonovsky & Komodakis, 2018; Li et al., 2018). Recent works construct molecules fragment-by-fragment to improve the chemical validity of intermediate graphs and to scale generation to larger molecules (Podda et al., 2020; Jin et al., 2019; 2020). Our fragment-based decoder is related to MoLeR (Maziarz et al., 2022), which iteratively generates molecules by selecting a new fragment (or atom) to add to the partial graph, choosing attachment sites on the new fragment, and predicting new bonds to the partial graph. Yet, MoLeR only generates 2D graphs; we generate 3D molecular structures. Beyond 2D generation, Flam-Shepherd et al. (2022) use an RL agent to generate 3D molecules by sampling and connecting molecular fragments. However, they sample from a small multiset of fragments, restricting the accessible chemical space. Powers et al. (2022) use fragments to generate 3D molecules inside a protein pocket, but only consider 7 distinct rings.

Generation of drug-like molecules in 3D. In this work, we generate novel drug-like 3D molecular structures in free space, i.e., not conformers given a known molecular graph (Ganea et al., 2021; Jing et al., 2022). Myriad models have been proposed to generate small 3D molecules, such as E(3)-equivariant normalizing flows and diffusion models (Satorras et al., 2022a; Hoogeboom et al., 2022), RL agents with an SE(3)-covariant action space (Simm et al., 2020b), and autoregressive generators that build molecules atom-by-atom with SE(3)-invariant internal coordinates (Luo & Ji, 2022; Gebauer et al., 2022). However, fewer 3D generative models can generate larger drug-like molecules for realistic chemical design tasks. Of these, Hoogeboom et al. (2022) and Arcidiacono & Koes (2021) fail to generate chemically valid molecules, while Ragoza et al. (2020) rely on post-processing and geometry relaxation to extract stable molecules from their generated atom density grids. Only Roney et al. (2021) and Li et al. (2021), who develop autoregressive generators that simultaneously predict graph structure and internal coordinates, have been shown to reliably generate valid drug-like molecules. We also couple graph generation with 3D coordinate prediction; however, we employ fragment-based generation with fixed local geometries to ensure local chemical and geometric validity. Further, we focus on shape-conditioned molecular design; none of these works can natively address the aforementioned challenges posed by shape-conditioned molecular generation.
Shape-conditioned molecular generation. Other works partially address shape-conditioned 3D molecular generation. Skalic et al. (2019) and Imrie et al. (2021) train networks to generate 1D SMILES strings or 2D molecular graphs conditioned on CNN encodings of 3D pharmacophores. However, they do not generate 3D structures, and the CNNs do not respect Euclidean symmetries. Zheng et al. (2021) use supervised molecule-to-molecule translation on SMILES strings for scaffold-hopping tasks, but do not generate 3D structures. Papadopoulos et al. (2021) use REINVENT (Olivecrona et al., 2017) on SMILES strings to propose molecules whose conformers are shape-similar to a target, but they must re-optimize the agent for each target shape. Roney et al. (2021) fine-tune a 3D generative model on the hits of a ROCS virtual screen of > 10¹⁰ drug-like molecules to shift the learned distribution towards a target shape. Yet, this expensive screening approach must be repeated for each new target. Instead, we seek to achieve zero-shot generation of 3D molecules with similar shapes to any encoded shape, without requiring fine-tuning or post facto optimization.

Equivariant geometric deep learning on point clouds. Various equivariant networks have been designed to encode point clouds for updating coordinates in R³ (Satorras et al., 2022b), predicting tensorial properties (Thomas et al., 2018), or modeling 3D structures natively in Cartesian space (Fuchs et al., 2020). Especially noteworthy are architectures which lift scalar neuron features to vector features in R³ and employ simple operations to mix invariant and equivariant features without relying on expensive higher-order tensor products or Clebsch-Gordan coefficients (Deng et al., 2021; Jing et al., 2021). In this work, we employ Deng et al. (2021)'s Vector Neurons (VN)-based equivariant point cloud encoder, VN-DGCNN, to encode molecules into equivariant latent representations in order to generate molecules which are natively aligned to the target shape. Two recent works also employ VN operations for structure-based drug design and linker design (Peng et al., 2022; Huang et al., 2022). Huang et al. (2022) also build molecules in free space; however, they generate just a few atoms to connect existing fragments and do not condition on molecular shape.

3 METHODOLOGY

Problem definition. We model a conditional distribution P(M|S) over 3D molecules M = (G, 𝒢) with graph G and atomic coordinates 𝒢 = {r_a ∈ R³}, given a 3D molecular shape S. Specifically, we aim to sample molecules M′ ∼ P(M|S) with high shape similarity (sim_S(M′, M_S) ≈ 1) and low graph (chemical) similarity (sim_G(M′, M_S) < 1) to a target molecule M_S with shape S. This scheme differs from 1) typical 3D generative models that learn P(M) without modeling P(M|S), and from 2) shape-conditioned 1D/2D generators that attempt to model P(G|S), the distribution of molecular graphs that could adopt shape S, but do not actually generate specific 3D conformations.

We define graph (chemical) similarity sim_G ∈ [0, 1] between two molecules as the Tanimoto similarity computed by RDKit with default settings (2048-bit fingerprints). We define shape similarity sim*_S ∈ [0, 1] using Gaussian descriptions of molecular shape, modeling atoms a ∈ M_A and b ∈ M_B from molecules M_A and M_B as isotropic Gaussians in R³ (Grant & Pickup, 1995; Grant et al., 1996). We compute sim*_S using (2-body) volume overlaps between atom-centered Gaussians:

$$\mathrm{sim}^*_S(\mathcal{G}_A, \mathcal{G}_B) = \frac{V_{AB}}{V_{AA} + V_{BB} - V_{AB}}; \quad V_{AB} = \sum_{a \in A,\, b \in B} V_{ab}; \quad V_{ab} \propto \exp\!\left(-\frac{\alpha}{2}\|\mathbf{r}_a - \mathbf{r}_b\|^2\right), \tag{1}$$

where α controls the Gaussian width.
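Equation 1 is straightforward to transcribe in code. The following NumPy sketch drops the proportionality constant, which cancels in the similarity ratio, and, like sim*_S itself, assumes the two structures are already in a common frame (no alignment is performed):

```python
# Direct transcription of Eq. 1: Gaussian volume-overlap shape similarity.
import numpy as np

def gaussian_overlap(A, B, alpha=0.81):
    """A: (n, 3), B: (m, 3) atom coordinates -> scalar V_AB (up to a constant)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * alpha * d2).sum()

def sim_s_star(A, B, alpha=0.81):
    v_ab = gaussian_overlap(A, B, alpha)
    v_aa = gaussian_overlap(A, A, alpha)
    v_bb = gaussian_overlap(B, B, alpha)
    return v_ab / (v_aa + v_bb - v_ab)
```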
Setting α = 0.81 approximates the shape similarity function used by the ROCS program (App. A.6). sim*_S is sensitive to SE(3) transformations of molecule M_A with respect to molecule M_B. Thus, we define sim_S(M_A, M_B) = max_{R,t} sim*_S(𝒢_A R + t, 𝒢_B) as the shape similarity when M_A is optimally aligned to M_B. We perform such alignments with ROCS.

Approach. At a high level, we model P(M|S) with an encoder-decoder architecture. Given a molecule M_S = (G_S, 𝒢_S) with shape S, we encode S (a point cloud) into equivariant features. We then variationally encode G_S into atomic features, conditioned on the shape features. We then mix these shape and atom features to pass global SE(3) {in,equi}variant latent codes to the decoder, which samples new molecules from P(M|S). We autoregressively generate molecules by factoring P(M|S) = P(M_0|S) P(M_1|M_0, S) ⋯ P(M|M_{n−1}, S), where each M_l = (G_l, 𝒢_l) is a partial molecule defined by a BFS traversal of a tree representation of the molecular graph (Fig. 2). Tree-nodes denote either non-ring atoms or rigid (ring-containing) fragments, and tree-links denote acyclic (rotatable, double, or triple) bonds. We generate M_{l+1} by growing the graph G_{l+1} around a focus atom/fragment, and then predict 𝒢_{l+1} by scoring a query rotatable bond to best fit shape S.

Simplifying assumptions. (1) We ignore hydrogens and only consider heavy atoms, as is common in molecular generation. (2) We only consider molecules with fragments present in our fragment library, to ensure that graph generation can be expressed as tree generation. (3) Rather than generating all coordinates, we use rigid fragments, fix bond distances, and set bond angles according to hybridization heuristics (App. A.8); this lets the model focus on scoring rotatable bonds to best fit the growing conformer to the encoded shape. (4) We seed generation with M_0 (the root tree-node), restricted to be a small (3-6 atom) substructure from M_S; hence, we only model P(M|S, M_0).

3.1 ENCODER

Featurization. We construct a molecular graph G using atoms as nodes and bonds as edges. We featurize each node with the atomic mass; one-hot codes of atomic number, charge, and aromaticity; and one-hot codes of the number of single, double, aromatic, and triple bonds the atom forms (including bonds to implicit hydrogens). This helps us fix bond angles during generation (App. A.8). We featurize each edge with one-hot codes of bond order. We represent a shape S as a point cloud built by sampling n_p points from each of n_h atom-centered Gaussians with (adjustable) variance σ_p² (see the sketch below).

Fragment encoder. We also featurize each node with a learned embedding f_i ∈ R^{d_f} of the atom/fragment type to which that atom belongs, making each node "fragment-aware" (similar to MoLeR). In principle, fragments could be any rigid substructure with ≥ 2 atoms; here, we specify fragments as ring-containing substructures without acyclic single bonds (Fig. 14). We construct a library L_f of atom/fragment types by extracting the top-k (k = 100) most frequent fragments from the dataset and adding these, along with each distinct atom type, to L_f (App. A.13). We then encode each atom/fragment in L_f with a simple GNN (App. A.12) to yield the global atom/fragment embeddings: {f_i = Σ_a h_{f_i}^(a), {h_{f_i}^(a)} = GNN_{L_f}(G_{f_i}) ∀ f_i ∈ L_f}, where h_{f_i}^(a) are per-atom features.

Shape encoder. Given M_S with n_h heavy atoms, we use VN-DGCNN (App. A.11) to encode the molecular point cloud P_S ∈ R^{(n_h n_p)×3} into a set of equivariant per-point vector features X̃_p ∈ R^{(n_h n_p)×q×3}.
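Before turning to the pooling step, the point-cloud input referenced in the featurization paragraph above is simple to make concrete. A small NumPy sketch follows; treating σ_p as supplied directly as a standard deviation, and centering on the atoms' centroid, are our assumptions:

```python
# Sketch of building the shape point cloud: n_p points per heavy atom, drawn
# from an isotropic Gaussian centered on each atom.
import numpy as np

def molecule_point_cloud(coords, n_p=5, sigma_p=0.5, seed=0):
    """coords: (n_h, 3) heavy-atom positions -> (n_h * n_p, 3) point cloud."""
    rng = np.random.default_rng(seed)
    coords = coords - coords.mean(axis=0)      # subtract the molecular centroid
    points = coords[:, None, :] + rng.normal(
        scale=sigma_p, size=(coords.shape[0], n_p, 3))
    return points.reshape(-1, 3)
```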
We then locally mean-pool the n_p equivariant features per atom:

$$\tilde{X}_p = \text{VN-DGCNN}(P_S); \quad \tilde{X} = \text{LocalPool}(\tilde{X}_p), \tag{2}$$

where X̃ ∈ R^{n_h×q×3} are per-atom equivariant representations of the molecular shape. Because VN operations are SO(3)-equivariant, rotating the point cloud will rotate X̃: X̃R = LocalPool(VN-DGCNN(P_S R)). Although VN operations are strictly SO(3)-equivariant, we subtract the molecule's centroid from the atomic coordinates prior to encoding, making X̃ effectively SE(3)-equivariant. Throughout this work, we denote SO(3)-equivariant vector features with tildes.

Variational graph encoder. To model P(M|S), we first use a GNN (App. A.12) to encode G_S into learned atom embeddings H = {h^(a) ∀ a ∈ G_S}. We condition the GNN on per-atom invariant shape features X = {x^(a)} ∈ R^{n_h×6q}, which we form by passing X̃ through a VN-Inv (App. A.11):

$$H = \text{GNN}((H_0, X);\, G_S); \quad X = \text{VN-Inv}(\tilde{X}), \tag{3}$$

where H_0 ∈ R^{n_h×(d_a+d_f)} are the initial atom features concatenated with the learned fragment embeddings, H ∈ R^{n_h×d_h}, and (·, ·) denotes concatenation in the feature dimension. For each atom in M_S, we then encode h_μ^(a), h_{log σ²}^(a) = MLP(h^(a)) and sample h_var^(a) ∼ N(h_μ^(a), h_σ^(a)):

$$H_{\text{var}} = \left\{\mathbf{h}_{\text{var}}^{(a)} = \mathbf{h}_\mu^{(a)} + \boldsymbol{\epsilon}^{(a)} \odot \mathbf{h}_\sigma^{(a)};\ \mathbf{h}_\sigma^{(a)} = \exp\!\left(\tfrac{1}{2}\mathbf{h}_{\log \sigma^2}^{(a)}\right)\ \forall\, a \in G_S\right\}, \tag{4}$$

where ε^(a) ∼ N(0, 1) ∈ R^{d_h}, H_var ∈ R^{n_h×d_h}, and ⊙ denotes elementwise multiplication. Here, the second argument of N(·, ·) is the standard deviation vector of the diagonal covariance matrix.

Mixing shape and variational features. The variational atom features H_var are insensitive to rotations of S. However, we desire the decoder to construct molecules in poses that are natively aligned to S (Challenge 2). We achieve this by conditioning the decoder on an equivariant latent representation of P(M|S) that mixes both shape and chemical information. Specifically, we mix H_var with X̃ by encoding each h_var^(a) ∈ H_var into a linear transformation, which is applied atom-wise to X̃. We then pass the mixed equivariant features through a separate VN-MLP (App. A.11):

$$\tilde{X}_{H_{\text{var}}} = \left\{\text{VN-MLP}(W_H^{(a)}\tilde{X}^{(a)}, \tilde{X}^{(a)});\ W_H^{(a)} = \text{Reshape}(\text{MLP}(\mathbf{h}_{\text{var}}^{(a)}))\ \forall\, a \in G_S\right\}, \tag{5}$$

where W_H^(a) ∈ R^{q′×q}, X̃^(a) ∈ R^{q×3}, and X̃_{H_var} ∈ R^{n_h×d_z×3}. This maintains equivariance since the W_H^(a) are rotationally invariant and W_H^(a)(X̃^(a)R) = (W_H^(a)X̃^(a))R for a rotation R. Finally, we sum-pool the per-atom features in X̃_{H_var} into a global equivariant representation Z̃ ∈ R^{d_z×3}. We also embed a global invariant representation z ∈ R^{d_z} by applying a VN-Inv to X̃_{H_var}, concatenating the output with H_var, passing through an MLP, and sum-pooling the resultant per-atom features:

$$\tilde{Z} = \sum_a \tilde{X}_{H_{\text{var}}}^{(a)}; \quad \mathbf{z} = \sum_a \text{MLP}(\mathbf{x}_{H_{\text{var}}}^{(a)}, \mathbf{h}_{\text{var}}^{(a)}); \quad \mathbf{x}_{H_{\text{var}}}^{(a)} = \text{VN-Inv}(\tilde{X}_{H_{\text{var}}}^{(a)}). \tag{6}$$

3.2 DECODER

Given M_S, we sample new molecules M′ ∼ P(M|S, M_0) by encoding P_S into equivariant shape features X̃, variationally sampling h_var^(a) for each atom in M_S, mixing H_var with X̃, and passing the resultant (Z̃, z) to the decoder. We seed generation with a small structure M_0 (extracted from M_S), and build M′ by sequentially generating larger structures M′_{l+1} in a tree-like manner (Fig. 2). Specifically, we grow new atoms/fragments around a "focus" atom/fragment in M′_l, which is popped from a BFS queue. To generate M′_{l+1} from M′_l (e.g., grow the tree from the focus), we factor P(M_{l+1}|M_l, S) = P(G_{l+1}|M_l, S) P(𝒢_{l+1}|G_{l+1}, M_l, S).
Given (Z̃, z), we sample the new graph G′_{l+1} by iteratively attaching a variable number C of new atoms/fragments (children tree-nodes) around the focus, yielding G′_l^(c) for c = 1, ..., C, where G′_l^(C) = G′_{l+1} and G′_l^(0) = G′_l. We then generate coordinates 𝒢′_{l+1} by scoring the (rotatable) bond between the focus and its parent tree-node. New bonds from the focus to its children are left unscored in M′_{l+1} until the children become "in focus".

Partial molecule encoder. Before bonding each new atom/fragment to the focus (or scoring bonds), we encode the partial molecule M′_l^(c−1) with the same scheme as for M_S (using a parallel encoder; Fig. 2), except we do not variationally embed H′.² Instead, we process H′ analogously to H_var. Further, in addition to globally pooling the per-atom embeddings to obtain Z̃′ = Σ_a X̃′_H^(a) and z′ = Σ_a x′_H^(a), we also selectively sum-pool the embeddings of the atom(s) in focus, yielding Z̃′_foc = Σ_{a∈focus} X̃′_H^(a) and z′_foc = Σ_{a∈focus} x′_H^(a). We then align the equivariant representations of M′_l^(c−1) and M_S by concatenating Z̃, Z̃′, Z̃ − Z̃′, and Z̃′_foc and passing these through a VN-MLP:

$$\tilde{Z}_{\text{dec}} = \text{VN-MLP}(\tilde{Z}, \tilde{Z}', \tilde{Z} - \tilde{Z}', \tilde{Z}'_{\text{foc}}). \tag{7}$$

Note that Z̃_dec ∈ R^{q×3} is equivariant to rotations of the overall system (M′_l^(c−1), M_S). Finally, we form a global invariant feature z_dec ∈ R^{d_dec} to condition graph (or coordinate) generation:

$$\mathbf{z}_{\text{dec}} = (\text{VN-Inv}(\tilde{Z}_{\text{dec}}), \mathbf{z}, \mathbf{z}', \mathbf{z} - \mathbf{z}', \mathbf{z}'_{\text{foc}}). \tag{8}$$

Graph generation. We factor P(G_{l+1}|M_l, S) into a sequence of generation steps by which we iteratively connect children atoms/fragments to the focus until the network generates a (local) stop token. Fig. 2 sketches a generation sequence by which a new atom/fragment is attached to the focus, yielding G′_l^(c) from G′_l^(c−1). Given z_dec, the model first predicts whether to stop (local) generation via p_∅ = sigmoid(MLP_∅(z_dec)) ∈ (0, 1). If p_∅ ≥ τ_∅ (a threshold; App. A.16), we stop and proceed to bond scoring. Otherwise, we select which atom a_foc on the focus (if multiple) to grow from:

$$\mathbf{p}_{\text{focus}} = \text{softmax}(\{\text{MLP}_{\text{focus}}(\mathbf{z}_{\text{dec}}, \mathbf{x}_H'^{(a)})\ \forall\, a \in \text{focus}\}). \tag{9}$$

The decoder then predicts which atom/fragment f_next ∈ L_f to connect to the focus next:

$$\mathbf{p}_{\text{next}} = \text{softmax}(\{\text{MLP}_{\text{next}}(\mathbf{z}_{\text{dec}}, \mathbf{x}_H'^{(a_{\text{foc}})}, \mathbf{f}_{f_i})\ \forall\, f_i \in L_f\}). \tag{10}$$

²We have dropped the (c) notation for clarity. However, each z_dec is specific to each (M′_l^(c−1), M_S) system.

If the selected f_next is a fragment, we predict the attachment site a_site on the fragment G_{f_next}:

$$\mathbf{p}_{\text{site}} = \text{softmax}(\{\text{MLP}_{\text{site}}(\mathbf{z}_{\text{dec}}, \mathbf{x}_H'^{(a_{\text{foc}})}, \mathbf{f}_{\text{next}}, \mathbf{h}_{f_{\text{next}}}^{(a)})\ \forall\, a \in G_{f_{\text{next}}}\}), \tag{11}$$

where h_{f_next}^(a) are the encoded atom features for G_{f_next}. Lastly, we predict the bond order (single, double, or triple) via p_bond = softmax(MLP_bond(z_dec, x′_H^(a_foc), f_next, h_{f_next}^(a_site))). We repeat this sequence of steps until p_∅ ≥ τ_∅, yielding G_{l+1}. At each step, we greedily select the action after masking actions that violate known chemical valence rules (a minimal sketch of this selection follows below). After each sequence, we bond a new atom or fragment to the focus, giving G′_l^(c). If an atom, the atom's position relative to the focus is fixed by heuristic bonding geometries (App. A.8). If a fragment, the position of the attachment site is fixed, but the dihedral of the new bond is yet unknown. Thus, in subsequent generation steps we only encode the attachment site and mask the remaining atoms in the new fragment until that fragment is "in focus" (Fig. 2). This means that prior to bond scoring, the rotation angle of the focus is random. To account for this when training (with teacher forcing), we randomize the focal dihedral when encoding each M′_l^(c−1).
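As referenced above, each generation step reduces to greedily picking the highest-probability action after masking chemically invalid choices. A minimal, generic sketch of that selection step (the function name and mask construction are ours, not SQUID's API):

```python
import torch

def greedy_masked_choice(logits, valid_mask):
    """Greedy action selection after masking valence-violating actions.

    logits: (k,) raw scores over k candidate actions.
    valid_mask: (k,) boolean tensor, False where the action breaks valence.
    """
    masked = logits.masked_fill(~valid_mask, float("-inf"))
    probs = torch.softmax(masked, dim=-1)   # normalized over valid actions only
    return int(torch.argmax(probs))
```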
Scoring rotatable bonds. After sampling G′_{l+1} ∼ P(G_{l+1}|M′_l, S), we generate 𝒢′_{l+1} by scoring the rotation angle ψ′_{l+1} of the bond connecting the focus to its parent node in the generation tree (Fig. 2). Since we ultimately seek to maximize sim_S(M′, M_S), we exploit the fact that our model generates shape-aligned structures to predict max_{ψ′_{l+2}, ψ′_{l+3}, ...} sim*_S(𝒢′^(ψ_foc), 𝒢_S) for various query dihedrals ψ′_{l+1} = ψ_foc of the focus rotatable bond, in a supervised regression setting. Intuitively, the scorer is trained to predict how the choice of ψ_foc affects the maximum possible shape similarity of the final molecule M′ to the target M_S under an optimal policy. App. A.2 details how regression targets are computed. During generation, we sweep over each query ψ_foc ∈ [−π, π), encode each resultant structure M′_{l+1}^(ψ_foc) into z_dec,scorer^(ψ_foc),³ and select the ψ_foc that maximizes the predicted score:

$$\psi'_{l+1} = \operatorname*{argmax}_{\psi_{\text{foc}}}\ \text{sigmoid}(\text{MLP}_{\text{scorer}}(\mathbf{z}_{\text{dec, scorer}}^{(\psi_{\text{foc}})})). \tag{12}$$

At generation time, we also score chirality by enumerating stereoisomers 𝒢_foc^χ of the focus and selecting the (𝒢_foc^χ, ψ_foc) pair that maximizes Eq. 12 (App. A.2).

Training. We supervise each step of graph generation with a multi-component loss function:

$$\mathcal{L}_{\text{graph-gen}} = \mathcal{L}_\varnothing + \mathcal{L}_{\text{focus}} + \mathcal{L}_{\text{next}} + \mathcal{L}_{\text{site}} + \mathcal{L}_{\text{bond}} + \beta_{\text{KL}}\mathcal{L}_{\text{KL}} + \beta_{\text{next-shape}}\mathcal{L}_{\text{next-shape}} + \beta_{\varnothing\text{-shape}}\mathcal{L}_{\varnothing\text{-shape}}. \tag{13}$$

L_∅, L_focus, L_next, and L_bond are standard cross-entropy losses. L_site = −log(Σ_a p_site^(a) 𝕀[c_a > 0]) is a modified cross-entropy loss that accounts for symmetric attachment sites in the fragments G_{f_i} ∈ L_f, where p_site^(a) are the predicted attachment-site probabilities and c_a are multi-hot class probabilities. L_KL is the KL divergence between the learned N(h_μ, h_σ) and the prior N(0, 1). We also employ two auxiliary losses, L_next-shape and L_∅-shape, in order to 1) help the generator distinguish between incorrect shape-similar (near-miss) vs. shape-dissimilar fragments, and 2) encourage the generator to generate structures that fill the entire target shape (App. A.10). We train the rotatable bond scorer separately from the generator with an MSE regression loss. See App. A.15 for training details.

4 EXPERIMENTS

Dataset. We train SQUID with drug-like molecules (up to n_h = 27) from MOSES (Polykovskiy et al., 2020) using their train/test sets. L_f includes 100 fragments extracted from the dataset and 24 atom types. We remove molecules that contain excluded fragments. For the remaining molecules, we generate a 3D conformer with RDKit, set acyclic bond distances to their empirical means, and fix acyclic bond angles using heuristic rules. While this 3D manipulation neglects distorted bonding geometries in real molecules, the global shapes are marginally impacted, and we may recover refined geometries without seriously altering the shape (App. A.8). The final dataset contains 1.3M 3D molecules, partitioned into 80/20 train/validation splits. The test set contains 147K 3D molecules.

³We train the scorer independently from the graph generator, but with a parallel architecture. Hence, z_dec ≠ z_dec,scorer. The main architectural difference between the two models (graph generator and scorer) is that we do not variationally encode H_scorer into H_var,scorer, as we find it does not impact empirical performance.

In the following experiments, we only consider molecules M_S for which we can extract a small (3-6 atom) 3D substructure M_0 containing a terminal atom, which we use to seed generation. In principle, M_0 could include larger structures from M_S, e.g., for scaffold-constrained tasks.
Here, we use the smallest substructures to ensure that the shape-conditioned generation tasks are not trivial.

Shape-conditioned generation of chemically diverse molecules. "Scaffold hopping", designing molecules with high 3D shape similarity but novel 2D graph topology compared to known inhibitors, is pursued in LBDD to develop chemical lead series, optimize drug activity, or evade intellectual property restrictions (Hu et al., 2017). We imitate this task by evaluating SQUID's ability to generate molecules M′ with high sim_S(M′, M_S) but low sim_G(M′, M_S). Specifically, for 1000 molecules M_S with target shapes S in the test set, we use SQUID to generate 50 molecules per M_S. To generate chemically diverse species, we linearly interpolate between the posterior N(h_μ, h_σ) and the prior N(0, 1), sampling each h_var ∼ N((1−λ)h_μ, (1−λ)h_σ + λ1) using either λ = 0.3 or λ = 1.0 (prior). We then filter the generated molecules to have sim_G(M′, M_S) < 0.7, or < 0.3 to only evaluate molecules with substantial chemical differences compared to M_S. Of the filtered molecules, we randomly choose N_max samples and select the sample with the highest sim_S(M′, M_S).

Figure 3A plots distributions of sim_S(M′, M_S) between the selected molecules and their respective target shapes, using different sampling (N_max = 1, 20) and filtering (sim_G(M′, M_S) < 0.7, 0.3) schemes. We compare against analogously sampling random 3D molecules from the training set. Overall, SQUID generates diverse 3D molecules that are quantitatively enriched in shape similarity compared to molecules sampled from the dataset, particularly for N_max = 20. Qualitatively, the molecules generated by SQUID have significantly more atoms which directly overlap with the atoms of M_S, even in cases where the computed shape similarity is comparable between SQUID-generated molecules and molecules sampled from the dataset (Fig. 3C). We quantitatively explore this observation in App. A.7. We also find that using λ = 0.3 yields greater sim_S(M′, M_S) than λ = 1.0, in part because λ = 0.3 yields less chemically diverse molecules (Fig. 3B; Challenge 3). Even so, sampling N_max = 20 molecules from the prior with sim_G(M′, M_S) < 0.3 still yields more shape-similar molecules than sampling N_max = 500 molecules from the dataset. We emphasize that 99% of samples from the prior are novel, 95% are unique, and 100% are chemically valid (App. A.4). Moreover, 87% of generated structures do not have any steric clashes (App. A.4), indicating that SQUID generates realistic 3D geometries of flexible drug-like molecules.

Ablating equivariance. SQUID's success in 3D shape-conditioned molecular generation is partly attributable to SQUID aligning the generated structures to the target shape in equivariant feature space (Eq. 7), which enables SQUID to generate 3D structures that fit the target shape without having to implicitly learn how to align two structures in R³ (Challenge 2). We explicitly validate this design choice by setting Z̃ = 0 in Eq. 7, which prevents the decoder from accessing the 3D orientation of M_S during training/generation. As expected, ablating SQUID's equivariance reduces the enrichment in shape similarity (relative to the dataset baseline) by as much as 33% (App. A.9).

Shape-constrained molecular optimization. Scaffold hopping is often goal-directed, e.g., aiming to reduce toxicity or improve the bioactivity of a hit compound without altering its 3D shape.
We mimic this shape-constrained MO setting by applying SQUID to optimize objectives from GuacaMol (Brown et al., 2019) while preserving high shape similarity (sim_S(M, M_S) ≥ 0.85) to various "hit" 3D molecules M_S from the test set. This task considerably differs from typical MO tasks, which optimize objectives without constraining 3D shape and without generating 3D structures. To adapt SQUID to shape-constrained MO, we implement a genetic algorithm (App. A.5) that iteratively mutates the variational atom embeddings H_var of encoded seed molecules ("hits") M_S in order to generate 3D molecules M* with improved objective scores, but which still fit the shape of M_S.

Table 1 reports the optimized top-1 scores across 6 objectives and 8 seed molecules M_S (per objective, sampled from the test set), constrained such that sim_S(M*, M_S) ≥ 0.85. We compare against the score of M_S, as well as the (shape-constrained) top-1 score obtained by virtually screening (VS) our training dataset (> 1M 3D molecules). Of the 8 seeds M_S per objective, 3 were selected from top-scoring molecules to serve as hypothetical "hits", 3 were selected from top-scoring large molecules (≥ 26 heavy atoms), and 2 were randomly selected from all large molecules. In 40/48 tasks, SQUID improves the objective score of the seed M_S while maintaining sim_S(M*, M_S) ≥ 0.85. Qualitatively, SQUID optimizes the objectives through chemical alterations such as adding/deleting individual atoms, switching bonding patterns, or replacing entire substructures, all while generating 3D structures that fit the target shape (App. A.5). In 29/40 of the successful cases, SQUID (limited to 31K samples) surpasses the baseline of virtually screening 1M molecules, demonstrating the ability to efficiently explore new shape-constrained chemical space.

5 CONCLUSION

We designed a novel 3D generative model, SQUID, to enable shape-conditioned exploration of chemically diverse molecular space. SQUID generates realistic 3D geometries of larger molecules that are chemically valid, and uniquely exploits equivariant operations to construct conformations that fit a target 3D shape. We envision our model, alongside future work, advancing creative shape-based drug design tasks such as 3D scaffold hopping and shape-constrained 3D ligand design.

REPRODUCIBILITY STATEMENT

We have taken care to facilitate the reproducibility of this work by detailing the precise architecture of SQUID throughout the main text; we also provide extensive details on training protocols, model parameters, and further evaluations in the Appendices. Our source code can be found at https://github.com/keiradams/SQUID. Beyond the model implementation, our code includes links to access our datasets, as well as scripts to process the training dataset, train the model, and evaluate our trained models across the shape-conditioned generation and shape-constrained optimization tasks described in this paper.

ETHICS STATEMENT

Advancing the shape-conditioned 3D generative modeling of drug-like molecules has the potential to accelerate pharmaceutical drug design, showing particular promise for drug discovery campaigns involving scaffold hopping, hit expansion, or the discovery of novel ligand analogues. However, such advancements could also be exploited for nefarious pharmaceutical research and harmful biological applications.

ACKNOWLEDGMENTS

This research was supported by the Office of Naval Research under grant number N00014-21-12195.
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 2141064. The authors acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing HPC resources that have contributed to the research results reported within this paper. The authors thank Rocío Mercado, Sam Goldman, Wenhao Gao, and Lagnajit Pattanaik for providing helpful suggestions regarding the content and presentation of this paper.

A APPENDIX CONTENTS

A.1 Overview of definitions, terms, and notations
A.2 Scoring rotatable bonds and stereochemistry
A.3 Random examples of generated 3D molecules
A.4 Generation statistics
A.5 Shape-constrained molecular optimization
A.5.1 Genetic algorithm
A.5.2 Visualization of optimized molecules
A.6 Comparing simS to the ROCS scoring function
A.7 Exploring different values of α in simS
A.8 Heuristic bonding geometries and their impact on global shape
A.9 Ablating equivariance
A.10 Auxiliary training losses
A.11 Overview of Vector Neurons (VN) operations
A.12 Graph neural networks
A.13 Fragment library
A.14 Model parameters
A.15 Additional training details
A.16 Additional generation details
A.17 Relaxation of generated geometries
A.18 Comparison to LigDream (Skalic et al., 2019)

A.1 OVERVIEW OF DEFINITIONS, TERMS, AND NOTATIONS

A.2 SCORING ROTATABLE BONDS AND STEREOCHEMISTRY

Recall that our goal is to train the scorer to predict max_{ψ′_{l+2}, ψ′_{l+3}, ...} sim*_S(𝒢′^(ψ_foc), 𝒢_S) for various query dihedrals ψ′_{l+1} = ψ_foc. That is, we wish to predict the maximum possible shape similarity of the final molecule M′ to M_S when fixing ψ′_{l+1} = ψ_foc and optimally rotating all the yet-to-be-scored (or generated) rotatable bond dihedrals ψ′_{l+2}, ψ′_{l+3}, ... so as to maximize sim*_S(𝒢′^(ψ_foc), 𝒢_S).

Training. We train the scorer independently from the graph generator (with a parallel architecture) using a mean squared error loss between the predicted scores ŝ_dec,scorer^(ψ_foc) = sigmoid(MLP(z_dec,scorer^(ψ_foc))) and the regression targets s^(ψ_foc) for N_s different query dihedrals ψ_foc ∈ [−π, π):

$$\mathcal{L}_{\text{scorer}} = \frac{1}{N_s}\sum_{i=1}^{N_s}\left(s^{(\psi_{\text{foc}}^{(i)})} - \hat{s}_{\text{dec, scorer}}^{(\psi_{\text{foc}}^{(i)})}\right)^2 \tag{14}$$
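In code, this objective is just a sigmoid regression over the encoded query dihedrals. A minimal PyTorch sketch (tensor shapes and the helper name are assumptions):

```python
# Sketch of the scorer's MSE objective (Eq. 14).
import torch
import torch.nn.functional as F

def scorer_loss(z_dec_scorer, targets, mlp):
    """z_dec_scorer: (N_s, d) encodings of N_s query dihedrals;
    targets: (N_s,) max achievable shape similarity per query angle."""
    pred = torch.sigmoid(mlp(z_dec_scorer)).squeeze(-1)
    return F.mse_loss(pred, targets)
```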
Since we fix bonding geometries, we need only sample $N_\psi$ sets of dihedrals of the rotatable bonds in $\mathcal{G}_{S_{T_{\mathrm{foc}}}}$ to sample $N_\psi$ conformers, making this conformer enumeration very fast. Note that rather than using α = 0.81 in these regression targets, we use α = 2.0 to make the scorer more sensitive to shape differences (App. A.7). When computing regression targets, we use Nψ < 1800 and select 36 (evenly spaced) ψfoc ∈ [−π, π) per rotatable bond. Figure 4 visualizes how regression targets are computed. App. A.15 contains further training specifics.

Scoring stereochemistry. At generation time, we also enumerate all possible stereoisomers of the focus (except cis/trans bonds) and score each stereoisomer separately, ultimately selecting the (stereoisomer, ψfoc) pair that maximizes the predicted score. Figure 5 illustrates how we enumerate stereoisomers. Note that although we use the learned scoring function to score stereoisomerism at generation time, we do not explicitly train the scorer to score different stereoisomers.

Masking severe steric clashes. At generation time, we do not score any query dihedral ψfoc that causes a severe steric clash (< 1 Å) with the existing partially generated structure (unless all query dihedrals cause a severe clash).

A.3 RANDOM EXAMPLES OF GENERATED 3D MOLECULES

Figures 6 and 7 show additional random examples of molecules generated by SQUID when sampling Nmax = 1 or 20 molecules with simG(M′,MS) < 0.7 from the prior (λ = 1.0) or with λ = 0.3, and selecting the sample with the highest simS(M′,MS). Note that the visualized poses of the generated conformers are those which are directly generated by SQUID; the generated conformers have not been explicitly aligned to MS (e.g., using ROCS). Even so, the conformers are (for the most part) aligned to MS, since SQUID’s equivariance enables the model to generate natively aligned structures. It is apparent in these examples that using larger Nmax yields molecules with significantly improved shape similarity to MS, both qualitatively and quantitatively. This is in part caused by: 1) stochasticity in the variationally sampled atom embeddings Hvar; 2) stochasticity in the input molecular point clouds, which are sampled from atom-centered isotropic Gaussians in R3; 3) sampling sets of variational atom embeddings that may not be entirely self-consistent (e.g., if we sample only one atom embedding that implicitly encodes a ring structure); and 4) the choice of τ∅, the threshold for stopping local generation. While a small τ∅ (we use τ∅ = 0.01) helps prevent the model from adding too many atoms or fragments around a single focus, a small τ∅ can also lead to early (local) stoppage, yielding molecules that do not completely fill the target shape. By sampling more molecules (using larger Nmax), we have more chances to avoid these adverse random effects. Further work will attempt to improve the robustness of the encoding scheme and generation procedure in order to increase SQUID’s overall sample efficiency.
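A minimal sketch (our illustration) of the best-of-Nmax sampling scheme described above: draw Nmax candidates, discard those that are too chemically similar to the target, and keep the most shape-similar survivor. `sample_molecule`, `sim_g`, and `sim_s` are hypothetical stand-ins for the trained generator, simG, and simS.

def best_of_nmax(seed_mol, sample_molecule, sim_g, sim_s,
                 n_max=20, g_cut=0.7):
    """Sample n_max molecules, filter by graph similarity, pick the
    candidate with the highest shape similarity to the seed."""
    candidates = [sample_molecule(seed_mol) for _ in range(n_max)]
    diverse = [m for m in candidates if sim_g(m, seed_mol) < g_cut]
    if not diverse:
        return None
    return max(diverse, key=lambda m: sim_s(m, seed_mol))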
A.4 GENERATION STATISTICS

Table 4 reports the percentage of molecules that are chemically valid, novel, and unique when sampling 50 molecules from the prior (λ = 1.0) for 1000 encoded molecules MS (i.e., target shapes) from the test set, yielding a total of 50K generated molecules. We define chemical validity as passing RDKit sanitization. Since we directly generate the molecular graph and mask actions which violate chemical valency, 100% of generated molecules are valid. We define novelty as the percentage of generated molecules whose molecular graphs are not present in the training data. We define uniqueness as the percentage of generated molecular graphs (of the 50K total) that are only generated once. For novelty and uniqueness calculations, we consider different stereoisomers to have the same molecular graph. We also report the percentage of generated 3D structures that have an apparent steric clash, defined to be a non-bonded interatomic distance below 2 Å. When sampling from the prior (λ = 1.0), the average internal chemical similarity of the generated molecules is 0.26 ± 0.04. When sampling with λ = 0.3, the average internal chemical similarity is 0.32 ± 0.07. We define internal chemical similarity to be the average pairwise chemical similarity (Tanimoto fingerprint similarity) between molecules that are generated for the same target shape.

Table 5 reports the graph reconstruction accuracy when sampling 3D molecules from the posterior (λ = 0.0) for 1000 target molecules MS from the test set. We report the top-k graph reconstruction accuracy (ignoring stereochemical differences) when sampling k = 1 molecule per encoded MS, and when sampling k = 20 molecules per encoded MS. Since we have intentionally trained SQUID inside a shape-conditioned variational autoencoder framework in order to generate chemically diverse molecules with similar 3D shapes, the significance of graph reconstruction accuracy is debatable in our setting. However, it is worth noting that the top-1 reconstruction accuracy is 16.3%, while the top-20 reconstruction accuracy is much higher (57.2%). This large difference is likely attributable to both stochasticity in the variational atom embeddings and stochasticity in the input 3D point clouds.

A.5 SHAPE-CONSTRAINED MOLECULAR OPTIMIZATION

A.5.1 GENETIC ALGORITHM

We adapt SQUID to shape-constrained molecular optimization by implementing a genetic algorithm on the variational atom embeddings Hvar. Algorithm 1 details the exact optimization procedure. In summary, given the seed molecule MS with a target 3D shape and an initial substructure M0 (which is contained by all generated molecules for a given MS), we first generate an initial population of molecules M′ by repeatedly sampling Hvar for various interpolation factors λ, mixing these Hvar with the encoded shape features of MS, and decoding new 3D molecules. We only add a generated molecule to the population if simS(M′,MS) ≥ τS (we use τS = 0.75), so that the GA does not overly explore regions of chemical space that have no chance of satisfying the ultimate constraint simS(M′,MS) ≥ 0.85. After generating the initial population, we iteratively 1) select the top-scoring samples in the population, 2) cross the top-scoring Hvar in crossover events, 3) mutate the top and crossed Hvar by adding random noise, and 4) generate new molecules M′ for each mutated Hvar. The final optimized molecule M∗ is the top-scoring generated molecule that satisfies the shape-similarity constraint simS(M′,MS) ≥ 0.85.
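A minimal sketch (our illustration, not the authors' evaluation script) of the novelty and uniqueness metrics defined above, keyed on canonical SMILES with stereochemistry ignored. `train_smiles` is assumed to be a set of canonical, non-isomeric SMILES for the training data.

from collections import Counter
from rdkit import Chem

def generation_metrics(generated_mols, train_smiles):
    """Fraction of generated graphs not in training data (novelty) and
    fraction generated exactly once (uniqueness)."""
    keys = [Chem.MolToSmiles(m, isomericSmiles=False) for m in generated_mols]
    novelty = sum(k not in train_smiles for k in keys) / len(keys)
    counts = Counter(keys)
    uniqueness = sum(counts[k] == 1 for k in keys) / len(keys)
    return novelty, uniqueness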
A.5.2 VISUALIZATION OF OPTIMIZED MOLECULES

Figure 8 visualizes the structures of the SQUID-optimized molecules M∗ and their respective seed molecules MS (e.g., the starting “hit” molecules with target shapes) for each of the optimization tasks which led to an improvement in the objective score. We also overlay the generated 3D conformations of M∗ on those of MS, and report the objective scores for each M∗ and MS.

Algorithm 1: Genetic algorithm for shape-constrained optimization with SQUID

Given: MS with nH heavy atoms, M0, objective oracle O
Params: τS, τG, Ne, NT, Nc  ▷ Defaults: τS = 0.75, τG = 0.95, Ne = 20, NT = 20, Nc = 10
Hµ, Hσ = Encode(MS)  ▷ Encode target molecule
Initialize population P = {(MS, Hµ)}
for λ ∈ [0.0, 0.2, 0.4, 0.6, 0.8, 1.0] do  ▷ Create initial population of (M′, Hvar)
    for i = 1, ..., 100 do
        Sample noise ϵ ∈ R^(nH×dh) ∼ N(0, 1)
        Hvar = (1 − λ)Hµ + ϵ ⊙ ((1 − λ)Hσ + λ1)  ▷ Mutate variational atom embeddings
        z, Z̃ = Encode(MS; Hvar)  ▷ Mix mutated chemical and shape information
        M′ = Decode(M0, z, Z̃)  ▷ Generate mutated molecule M′
        Compute simS(M′, MS)
        if simS(M′, MS) ≥ τS then  ▷ Add M′ to population only if simS(M′, MS) is high
            Add (M′, Hvar) to P
        end if
    end for
end for
for e = 1, ..., Ne do  ▷ For each evolution
    Construct Psorted by sorting P by O(M)  ▷ Sort population by objective score, high to low
    Initialize TM = {}, THvar = {}
    for (M, Hvar) ∈ Psorted do  ▷ Collect top-NT scoring (M′, Hvar)
        if (simG(M, MT) < τG ∀ MT ∈ TM) and (|TM| < NT) then
            Add M to TM
            Add Hvar to THvar
        end if
    end for
    Initialize TC = {}
    for c = 1, ..., Nc do  ▷ Add crossovers to set of top-scoring Hvar
        Sample Hi ∈ THvar, Hj≠i ∈ THvar
        Hc = CROSS(Hi, Hj)  ▷ Cross by randomly swapping half of the atom embeddings
        Add Hc to TC
    end for
    THvar = THvar ∪ TC
    for Hvar ∈ THvar do  ▷ Add mutated offspring to population
        for λ ∈ [0.0, 0.2, 0.4, 0.6, 0.8, 1.0] do
            for i = 1, ..., 10 do
                Sample noise ϵ ∈ R^(nH×dh) ∼ N(0, 1)
                Hvar = (1 − λ)Hvar + ϵ  ▷ Mutate variational atom embeddings
                z, Z̃ = Encode(MS; Hvar)  ▷ Mix mutated chemical and shape information
                M′ = Decode(M0, z, Z̃)  ▷ Generate mutated molecule M′
                Compute simS(M′, MS)
                if simS(M′, MS) ≥ τS then  ▷ Add M′ to P only if simS(M′, MS) is high
                    Add (M′, Hvar) to P
                end if
            end for
        end for
    end for
end for
return M∗ = argmax_{M′ ∈ P} O(M′) subject to simS(M′, MS) ≥ 0.85

A.6 COMPARING simS TO ROCS SCORING FUNCTION

Our shape similarity function described in Equation 1 closely approximates the shape-only scoring function employed by ROCS when α = 0.81. Figure 9 demonstrates the near-perfect correlation between our computed shape scores and those computed by ROCS for 50,000 shape comparisons, with a mean absolute error of 0.0016. Note that Equation 1 computes non-aligned shape similarity. We still employ ROCS to align the generated molecules M′ to the target molecule MS before computing their (aligned) shape similarity in our experiments. However, we do not require explicit alignment when training SQUID; we do not use the commercial ROCS program during training.

A.7 EXPLORING DIFFERENT VALUES OF α IN simS

Our analysis of shape similarity thus far has used Equation 1 with α = 0.81 in order to recapitulate the shape similarity function used by ROCS, which is widely used in drug discovery. However, compared to randomly sampled molecules in the dataset, the molecules generated by SQUID qualitatively appear to do a significantly better job at fitting the target shape S on an atom-by-atom basis, even if the computed shape similarities (with α = 0.81) are comparable (see examples in Figure 3).
We quantify this observation by increasing the value of α when computing simS(M′,MS; α) for generated molecules M′, as α is inversely related to the width of the isotropic 3D Gaussians used in the volume overlap calculations in Equation 1. Intuitively, increasing α will penalize simS more heavily if the atoms of M′ and MS do not perfectly align. Figure 10 plots the mean simS(M,MS; α) for the most shape-similar molecule M among the Nmax sampled molecules M′, for increasing values of α. Averages are calculated over 1000 target molecules MS from the test set, and we only consider generated molecules for which simG(M′,MS) < 0.7. Crucially, the gap between the mean simS(M,MS; α) obtained by generating molecules with SQUID vs. randomly sampling molecules from the dataset significantly widens with increasing α. This effect is especially apparent when using SQUID with λ = 0.3 and Nmax = 20, although it can be observed with other generation strategies as well. Hence, SQUID does a much better job at generating (still chemically diverse) molecules that have significant atom-to-atom overlap with MS.

A.8 HEURISTIC BONDING GEOMETRIES AND THEIR IMPACT ON GLOBAL SHAPE

In all molecules (dataset and generated) considered in this work, we fix acyclic bond distances to their empirical averages and set acyclic bond angles to heuristic values based on hybridization rules in order to reduce the degrees of freedom in 3D coordinate generation. Here, we describe how we fix these bonding geometries and explore whether this local 3D structure manipulation significantly alters the global molecular shape.

Fixing bonding geometries. We fix acyclic bond distances by computing the mean bond distance between pairs of atom types across all the RDKit-generated conformers in our training set. After collecting these empirical mean values, we manually set each acyclic bond distance to its respective mean value for each conformer in our datasets. We set acyclic bond angles using simple hybridization rules. Specifically, sp3-hybridized atoms will have bond angles of 109.5°, sp2-hybridized atoms will have bond angles of 120°, and sp-hybridized atoms will have bond angles of 180°. We manually fix the acyclic bond angles to these heuristic values for all conformers in our datasets. We use RDKit to determine the hybridization states of each atom. During generation, occasionally the hybridization of certain atoms (N, O) may change once they are bonded to new neighbors. For instance, an sp3 nitrogen can become sp2 once bonded to an aromatic ring. We adjust bond angles on-the-fly in these edge cases.

Impact on global shape. Figure 11 plots the histogram of simS(Mfixed,Mrelaxed) for 1000 test set conformers Mfixed whose bonding geometries have been fixed, and the original RDKit-generated conformers Mrelaxed with relaxed (true) bonding geometries. In the vast majority of cases, fixing the bonding geometries negligibly impacts the global shape of the 3D molecule (simS(Mfixed,Mrelaxed) ≈ 1). This is because the main factor influencing global molecular shape is rotatable bonds (e.g., flexible dihedrals), which are not altered by fixing bond distances and angles.
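A minimal sketch (our illustration, not the authors' code) of mapping RDKit hybridization states to the heuristic bond angles described in "Fixing bonding geometries" above. The lookup table and function name are our own; the on-the-fly edge-case handling for N and O is omitted.

from rdkit import Chem

HEURISTIC_ANGLES = {
    Chem.HybridizationType.SP3: 109.5,  # tetrahedral
    Chem.HybridizationType.SP2: 120.0,  # trigonal planar
    Chem.HybridizationType.SP: 180.0,   # linear
}

def heuristic_bond_angle(mol, atom_idx):
    """Return the heuristic bond angle (degrees) for the central atom,
    or None if its hybridization is not covered by the simple rules."""
    atom = mol.GetAtomWithIdx(atom_idx)
    return HEURISTIC_ANGLES.get(atom.GetHybridization())

mol = Chem.MolFromSmiles("CCO")
print(heuristic_bond_angle(mol, 1))  # central sp3 carbon -> 109.5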
Recovering refined bonding geometries. Even though fixing bond distances and angles only marginally impacts molecular shape, we still may wish to recover refined bonding geometries of the generated 3D molecules without altering the generated 3D shape. We can accomplish this (to a first approximation) for generated molecules by creating a geometrically relaxed conformation of the generated molecular graph with RDKit, and then manually setting the dihedrals of the rotatable bonds in the relaxed conformer to match the corresponding dihedrals in the generated conformers. Importantly, if we perform this relaxation procedure for both the dataset molecules and the SQUID-generated molecules, the (relaxed) generated molecules still have significantly enriched shape similarity to the (relaxed) target shape compared to (relaxed) random molecules from the dataset (Fig. 12).

A.9 ABLATING EQUIVARIANCE

SQUID aligns the equivariant representations of the encoded target shape and the partially generated structures in order to generate 3D conformations that natively fit the target shape, without having to implicitly learn SE(3)-alignments (Challenge 2). We achieve this in Equation 7, where we mix the equivariant representations of MS and the partially generated structure $M'^{(c-1)}_l$. To empirically motivate this design choice, we ablate the equivariant alignment by setting Z̃ = 0 in Eq. 7. We denote this ablated model as SQUID-NoEqui. Note that because we still pass the unablated invariant features z to the decoder (Eq. 8), SQUID-NoEqui is still conditioned on the shape of MS — the model simply no longer has access to any explicit information about the relative spatial orientation of $M'^{(c-1)}_l$ to MS (and thus must learn this spatial relationship from scratch).

As expected, ablating SQUID’s equivariance significantly reduces SQUID’s ability to generate chemically diverse molecules that fit the target shape. Figure 13 plots the distributions of simS(M′,MS) for the best of Nmax generated molecules with simG(M′,MS) < 0.7 or 0.3 when using SQUID or SQUID-NoEqui. Crucially, the mean shape similarity when sampling with (λ = 1.0, Nmax = 20, simG(M′,MS) < 0.7) decreases from 0.828 (SQUID) to 0.805 (SQUID-NoEqui). When sampling with (λ = 0.3, Nmax = 20, simG(M′,MS) < 0.7), the mean shape similarity also decreases, from 0.879 (SQUID) to 0.839 (SQUID-NoEqui). Relative to the mean shape similarity of 0.758 achieved by sampling random molecules from the dataset (Nmax = 20, simG(M′,MS) < 0.7), this corresponds to a substantial 33% reduction in the shape-enrichment of SQUID-generated molecules.

Interestingly, sampling (λ = 1.0, Nmax = 20, simG(M′,MS) < 0.7) with SQUID-NoEqui still yields shape-enriched molecules compared to analogously sampling random molecules from the dataset (mean shape similarity of 0.805 vs. 0.758). This is because even without the equivariant feature alignment, SQUID-NoEqui still conditions molecular generation on the (invariant) encoding of the target shape S, and hence biases generation towards molecules which better fit the target shape (after alignment with ROCS).

A.10 AUXILIARY TRAINING LOSSES

We employ two auxiliary losses when training the graph generator in order to encourage the generated graphs to better fit the encoded target shape. The first auxiliary loss penalizes the graph generator if it adds an incorrect atom/fragment to the focus that is of significantly different size than the correct (ground truth) atom/fragment. We first compute a matrix $\Delta V_f \in \mathbb{R}_+^{|L_f| \times |L_f|}$ containing the (pairwise) volume difference between all atoms/fragments in the library $L_f$:

$$\Delta V_f^{(i,j)} = |v_{f_i} - v_{f_j}| \qquad (15)$$

where $v_{f_i}$ is the volume of atom/fragment $f_i \in L_f$ (computed with RDKit).
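A minimal sketch (our illustration) of building the pairwise volume-difference matrix in Eq. 15 over a toy fragment library. The SMILES list is hypothetical; AllChem.ComputeMolVolume requires an embedded 3D conformer, and we assume embedding succeeds for these simple inputs.

import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

def fragment_volume(smiles):
    """Embed a 3D conformer and compute its molecular volume with RDKit."""
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    AllChem.EmbedMolecule(mol, AllChem.ETKDG())
    return AllChem.ComputeMolVolume(mol)

library = ["C", "O", "c1ccccc1"]                     # toy stand-in for L_f
vols = np.array([fragment_volume(s) for s in library])
delta_V = np.abs(vols[:, None] - vols[None, :])      # Delta V_f^(i,j) = |v_i - v_j|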
We then compute the auxiliary loss $\mathcal{L}_{\text{next-shape}}$ as:

$$\mathcal{L}_{\text{next-shape}} = \frac{1}{|L_f|} \left( \mathbf{p}_{\text{next}} \cdot \Delta V_f^{(g)} \right) \qquad (16)$$

where $g$ is the index of the correct (ground truth) next atom/fragment $f_{\text{next,true}}$, $\Delta V_f^{(g)}$ is the $g$-th row of $\Delta V_f$, and $\mathbf{p}_{\text{next}}$ are the predicted probabilities over the next atom/fragment types to be connected to the focus (see Eq. 10).

The second auxiliary loss penalizes the graph generator if it prematurely stops (local) generation, with larger penalties if the premature stop would result in larger portions of the (ground truth) graph not being generated. When predicting (local) stop tokens during graph generation (with teacher forcing), we compute the number of atoms in the subgraph induced by the subtree whose root tree-node is the next atom/fragment to be added to the focus (in the current generation sequence). We then multiply the predicted probability for the local stop token by this number of “future” atoms that would not be generated if a premature stop token were generated. Hence, if the correct action is to indeed stop generation around the focus, the penalty will be zero. However, if the correct action is to add a large fragment to the current focus but the generator predicts a stop token, the penalty will be large. Formally, we compute:

$$\mathcal{L}_{\varnothing\text{-shape}} = \begin{cases} p_\varnothing \, |G_{S_{T_{\text{next}}}}| & \text{if } p_{\varnothing,\text{true}} = 0 \\ 0 & \text{otherwise} \end{cases} \qquad (17)$$

where $p_{\varnothing,\text{true}}$ is the ground truth action for local stopping ($p_{\varnothing,\text{true}} = 0$ indicates that the correct action is to not stop local generation), and $G_{S_{T_{\text{next}}}}$ is the subgraph induced by the subtree whose root node is the next atom/fragment (to be generated) in the ground-truth molecular graph.

A.11 OVERVIEW OF VECTOR NEURONS (VN) OPERATIONS

In this work, we use Deng et al. (2021)’s VN-DGCNN to encode molecular point clouds into equivariant shape features. We also employ their general VN operations (VN-MLP, VN-Inv) during shape and chemical feature mixing. We refer readers to Deng et al. (2021) for a detailed description of these equivariant operations and models. Here, we briefly summarize some relevant VN operations for the reader’s convenience.

VN-MLP. Vector neurons (VN) lift scalar neuron features to vector features in R3. Hence, instead of having features $x \in \mathbb{R}^q$, we have vector features $\tilde{X} \in \mathbb{R}^{q \times 3}$. While linear transformations are naturally equivariant to global rotations R since $W(\tilde{X}R) = (W\tilde{X})R$ for some rotation matrix $R \in \mathbb{R}^{3 \times 3}$, Deng et al. (2021) construct a set of non-linear equivariant operations $\tilde{f}$ such that $\tilde{f}(\tilde{X}R) = \tilde{f}(\tilde{X})R$, thereby enabling natively equivariant network design. VN-MLPs combine linear transformations with equivariant activations. In this work, we use VN-LeakyReLU, which Deng et al. (2021) define as:

$$\text{VN-LeakyReLU}(\tilde{X}; \alpha) = \alpha\tilde{X} + (1 - \alpha)\,\text{VN-ReLU}(\tilde{X}) \qquad (18)$$

where

$$\text{VN-ReLU}(\tilde{X}) = \begin{cases} \tilde{x}, & \text{if } \tilde{x} \cdot \frac{\tilde{k}}{||\tilde{k}||} \geq 0 \\ \tilde{x} - \left( \tilde{x} \cdot \frac{\tilde{k}}{||\tilde{k}||} \right) \frac{\tilde{k}}{||\tilde{k}||}, & \text{otherwise} \end{cases} \quad \forall\, \tilde{x} \in \tilde{X} \qquad (19)$$

where $\tilde{k} = U\tilde{X}$ for a learnable weight matrix $U \in \mathbb{R}^{1 \times q}$, and where $\tilde{x} \in \mathbb{R}^3$. By composing series of linear transformations and equivariant activations, VN-MLPs map $\tilde{X} \in \mathbb{R}^{q \times 3}$ to $\tilde{X}' \in \mathbb{R}^{q' \times 3}$ such that $\tilde{X}'R = \text{VN-MLP}(\tilde{X}R)$.

VN-Inv. Deng et al. (2021) also define learnable operations that map equivariant features $\tilde{X} \in \mathbb{R}^{q \times 3}$ to invariant features $x \in \mathbb{R}^{3q}$. In general, VN-Inv constructs invariant features by multiplying equivariant features $\tilde{X}$ with other equivariant features $\tilde{Y} \in \mathbb{R}^{3 \times 3}$:

$$\hat{X} = \tilde{X}\tilde{Y}^\top \qquad (20)$$

The invariant features $\hat{X} \in \mathbb{R}^{q \times 3}$ can then be reshaped into standard invariant features $x \in \mathbb{R}^{3q}$.
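A minimal PyTorch sketch (our illustration, not Deng et al.'s released implementation) of the VN-LeakyReLU in Eqs. 18-19. X carries q vector channels per point, and the learnable map U produces the direction vector k.

import torch

class VNLeakyReLU(torch.nn.Module):
    def __init__(self, q, negative_slope=0.2):
        super().__init__()
        self.U = torch.nn.Linear(q, 1, bias=False)  # learnable direction map
        self.alpha = negative_slope

    def forward(self, X):
        # X: (..., q, 3) equivariant vector features
        k = self.U(X.transpose(-1, -2)).transpose(-1, -2)   # (..., 1, 3)
        k_hat = k / (k.norm(dim=-1, keepdim=True) + 1e-8)   # unit direction
        dot = (X * k_hat).sum(dim=-1, keepdim=True)         # <x, k_hat>
        relu = torch.where(dot >= 0, X, X - dot * k_hat)    # Eq. 19
        return self.alpha * X + (1 - self.alpha) * relu     # Eq. 18

X = torch.randn(5, 16, 3)        # 5 points, q = 16 vector channels
out = VNLeakyReLU(q=16)(X)       # same shape; rotating X rotates out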
In our work, we slightly modify Deng et al. (2021)’s original formulation. Given a set of equivariant features $\tilde{X} = \{\tilde{X}^{(i)}\} \in \mathbb{R}^{n \times q \times 3}$, we define a VN-Inv as:

$$\text{VN-Inv}(\tilde{X}) = X \qquad (21)$$

where $X = \{x^{(i)}\} \in \mathbb{R}^{n \times 6q}$ and:

$$x^{(i)} = \text{Flatten}(\tilde{V}^{(i)} \tilde{T}_i^\top) \qquad (22)$$

$$\tilde{V}^{(i)} = \begin{cases} (\tilde{X}^{(i)}, \sum_i \tilde{X}^{(i)}) & \text{if } n > 1 \\ \tilde{X}^{(i)} & \text{otherwise} \end{cases} \qquad (23)$$

$$\tilde{T}_i = \text{VN-MLP}(\tilde{V}^{(i)}) \qquad (24)$$

where $\tilde{T}_i \in \mathbb{R}^{3 \times 3}$, and $\tilde{V}^{(i)} \in \mathbb{R}^{2q \times 3}$ ($n > 1$) or $\tilde{V}^{(i)} \in \mathbb{R}^{q \times 3}$ ($n = 1$).

VN-DGCNN. Deng et al. (2021) introduce VN-DGCNN as an SO(3)-equivariant version of the Dynamic Graph Convolutional Neural Network (Wang et al., 2019). Given a point cloud $P \in \mathbb{R}^{n \times 3}$, VN-DGCNN uses (dynamic) equivariant edge convolutions to update equivariant per-point features:

$$\tilde{E}^{(t+1)}_{nm} = \text{VN-LeakyReLU}^{(t)}\left(\Theta^{(t)}(\tilde{X}^{(t)}_m - \tilde{X}^{(t)}_n) + \Phi^{(t)}\tilde{X}^{(t)}_n\right) \qquad (25)$$

$$\tilde{X}^{(t+1)}_n = \sum_{m \in \text{KNN}_f(n)} \tilde{E}^{(t+1)}_{nm} \qquad (26)$$

where $\text{KNN}_f(n)$ are the k-nearest neighbors of point n in feature space, $\Phi^{(t)}$ and $\Theta^{(t)}$ are weight matrices, and $\tilde{X}^{(t)}_n \in \mathbb{R}^{q \times 3}$ are the per-point equivariant features.

A.12 GRAPH NEURAL NETWORKS

In this work, we employ graph neural networks (GNNs) to encode:

• each atom/fragment in the library Lf
• the target molecule MS
• each partial molecular structure $M'^{(c)}_l$ during sequential graph generation
• the query structures $M'^{(\psi_{foc})}_{l+1}$ when scoring rotatable bonds

Our GNNs are loosely based upon a simple version of the EGNN (Satorras et al., 2022b). Given a molecular graph G with atoms as nodes and bonds as edges, we use graph convolutional layers defined by the following:

$$m^{t+1}_{ij} = \phi^t_m\left(h^t_i, h^t_j, ||r_i - r_j||^2, m^t_{ij}\right) \qquad (27)$$

$$m^{t+1}_i = \sum_{j \in N(i)} m^{t+1}_{ij} \qquad (28)$$

$$h^{(t=1)}_i = \phi^{(0)}_h(h^0_i, m^{(t=1)}_i) \qquad (29)$$

$$h^{t+1}_i = \phi^t_h(h^t_i, m^{t+1}_i) + h^t_i \quad (t > 0) \qquad (30)$$

where $h^t_i$ are the learned atom embeddings at each GNN layer, $m^t_{ij}$ are learned (directed) messages, $r_i \in \mathbb{R}^3$ are the coordinates of atom i, $N(i)$ is the set of 1-hop bonded neighbors of atom i, and each $\phi^t_m$, $\phi^t_h$ is an MLP. Note that $h^0_i$ are the initial atom features, and $m^0_{ij}$ are the initial bond features for the bond between atoms i and j. In general, $m^t_{ij} \neq m^t_{ji}$ for $t > 0$, but here $m^0_{ij} = m^0_{ji}$. Note that since we only aggregate messages from directly bonded neighbors, $||r_i - r_j||$ only encodes bond distances and does not encode any information about specific 3D conformations. Hence, our GNNs effectively only encode 2D chemical identity, as opposed to 3D shape.

A.13 FRAGMENT LIBRARY

Our atom/fragment library Lf includes 100 distinct fragments (Fig. 14) and 24 unique atom types. The 100 fragments were selected based on the top-100 most frequently occurring fragments in our training set. In this work, we specify fragments as ring-containing substructures that do not contain any acyclic single bonds. However, in principle fragments could be any (valid) chemical substructure. Note that we only use 1 (geometrically optimized) conformation per fragment, which is assumed to be rigid. Hence, in its current implementation, SQUID does not consider different ring conformations (e.g., boat vs. chair conformations of cyclohexane).

A.14 MODEL PARAMETERS

Parameter sharing. For both the graph generator and the rotatable bond scorer, the (variational) molecule encoder (in the Encoder, Fig. 2) and the partial molecule encoder (in the Decoder, Fig. 2) share the same fragment encoder (Lf-GNN), which is trained end-to-end with the rest of the model. Apart from Lf-GNN, these encoders do not share any learnable parameters, despite having parallel architectures. The graph generator and the rotatable bond scorer are completely independent, and are trained separately.
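A minimal PyTorch sketch (our illustration, not the authors' implementation) of one message-passing layer in the spirit of Eqs. 27-30 in App. A.12: messages depend on the two endpoint embeddings, the squared bond distance, and the previous message; aggregated messages update node embeddings with a residual connection.

import torch

class BondMessageLayer(torch.nn.Module):
    def __init__(self, dh, dm):
        super().__init__()
        self.phi_m = torch.nn.Sequential(
            torch.nn.Linear(2 * dh + 1 + dm, dm), torch.nn.SiLU())
        self.phi_h = torch.nn.Sequential(
            torch.nn.Linear(dh + dm, dh), torch.nn.SiLU())

    def forward(self, h, r, edge_index, m):
        # h: (N, dh) atom embeddings; r: (N, 3) coords;
        # edge_index: (2, E) directed bonds (src -> dst); m: (E, dm) messages
        src, dst = edge_index
        d2 = ((r[src] - r[dst]) ** 2).sum(-1, keepdim=True)  # ||r_i - r_j||^2
        m_new = self.phi_m(torch.cat([h[dst], h[src], d2, m], dim=-1))  # Eq. 27
        agg = torch.zeros(h.size(0), m_new.size(-1)).index_add_(0, dst, m_new)  # Eq. 28
        h_new = self.phi_h(torch.cat([h, agg], dim=-1)) + h  # Eq. 30 (residual)
        return h_new, m_new

h, r = torch.randn(4, 32), torch.randn(4, 3)
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])  # directed bonds
m = torch.zeros(edge_index.size(1), 16)
h, m = BondMessageLayer(dh=32, dm=16)(h, r, edge_index, m)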
Hyperparameters. Tables 6 and 7 tabulate the set of hyperparameters used for SQUID across all the experiments conducted in this paper. Table 8 summarizes training and generation parameters, but we refer the reader to App. A.15 and A.16 for more detailed discussion of training and generation protocols. Because of the large hyperparameter search space and long training times, we did not perform extensive hyperparameter optimizations. We manually tuned the learning rates and schedulers to maintain training stability, and we maxed out batch sizes given memory constraints. We set β∅-shape = 10 and βnext-shape = 10 to make the magnitudes of L∅-shape and Lnext-shape comparable to the other loss components for graph generation. We slowly increase βKL over the course of training from 10^-5 to a maximum of 10^-1, which we found to provide a reasonable balance between LKL and graph reconstruction.

A.15 ADDITIONAL TRAINING DETAILS

Dataset. We use molecules from MOSES (Polykovskiy et al., 2020) to train, validate, and test SQUID. Starting from the train/test sets provided by MOSES, we first generate an RDKit conformer for each molecule, and remove any molecules for which we cannot generate a conformer. Conformers are initially created with the ETKDG algorithm in RDKit, and then separately optimized for 200 iterations with the MMFF force field. We then fix the acyclic bond distances and bond angles for each conformer (App. A.8). Using the molecules from MOSES’s train set, we then create the fragment library by extracting the top-100 most frequently occurring fragments (ring-containing substructures without acyclic bonds). We separately generate a 3D conformer for each distinct fragment, optimizing the fragment structures with MMFF for 1000 steps. Given these 100 fragments, we then remove all molecules from the train and test sets containing non-included fragments. From the filtered training set, we then extract 24 unique atom types, which we add to the atom/fragment library Lf. We remove any molecule in the test set that contains an atom type not included in these 24. Finally, we randomly split the (filtered) training set into separate training/validation splits. The training split contains 1,058,352 molecules, the validation split contains 264,589 molecules, and the test set contains 146,883 molecules. Each molecule has one conformer.

Collecting training data for graph generation and scoring. We individually supervise each step of autoregressive graph generation and use teacher forcing. We collect the ground-truth generation actions by representing each molecular graph as a tree whose root tree-node is either a terminal atom or a terminal fragment in the graph. A “terminal” atom is only bonded to one neighboring atom. A “terminal” fragment has only one acyclic (rotatable) bond to a neighboring atom/fragment. Starting from this terminal atom/fragment, we construct the molecule according to a breadth-first-search traversal of the generation tree (see Fig. 2); we break ties using RDKit’s canonical atom ordering. We augment the data by enumerating all generation trees starting from each possible terminal atom/fragment in the molecule. For each rotatable bond in the generation trees, we collect regression targets for training the scorer by following the procedure outlined in App. A.2.
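A minimal RDKit sketch (our illustration, not the authors' preprocessing script) of the conformer-generation step described above: embed with ETKDG, then optimize for 200 iterations with the MMFF force field, dropping molecules that fail to embed.

from rdkit import Chem
from rdkit.Chem import AllChem

def make_conformer(smiles):
    """Generate one MMFF-optimized 3D conformer, or None on failure."""
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    if AllChem.EmbedMolecule(mol, AllChem.ETKDG()) < 0:
        return None  # remove molecules for which embedding fails
    AllChem.MMFFOptimizeMolecule(mol, maxIters=200)
    return Chem.RemoveHs(mol)

mol = make_conformer("CC(=O)Nc1ccc(O)cc1")  # e.g., a drug-like test molecule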
Batching. When training the graph generator, we batch together graph-generative actions which are part of the same generation sequence (e.g., generating $G'^{(c)}_l$ from $G'^{(c-1)}_l$). Otherwise, generation sequences are treated independently. When training the rotatable bond scorer, we batch together different query dihedrals ψfoc of the same focal bond. Rather than scoring all 36 rotation angles in the same batch, we include the ground-truth rotation angle and randomly sample 9 of the remaining 35 to include in the batch. Within each batch (for both graph generation and scoring), all the encoded molecules MS are constrained to have the same number of atoms, and all the partial molecular structures $G'^{(c)}_l$ are constrained to have the same number of atoms. This restriction on batch composition is purely for convenience: the public implementation of VN-DGCNN from Deng et al. (2021) is designed to train on point clouds with the same number of points, and we construct point clouds by sampling a (fixed) np points for each atom.

Training setup. We train the graph generator and the rotatable bond scorer separately. For the graph generator, we train for 2M iterations (batches), with a maximum batch size of 400 (generation sequences). We use the Adam optimizer with default parameters. We use an initial learning rate of 2.5 × 10^-4, which we exponentially decay by a factor of 0.9 every 50K iterations to a minimum of 5 × 10^-6. We weight the auxiliary losses by βnext-shape = 10.0 and β∅-shape = 10.0. We log-linearly increase βKL from 10^-5 to 10^-1 over the first 1M iterations, after which it remains constant at 10^-1. For each generation sequence, we randomize the rotation angle of the bond connecting the focus to the rest of the partial graph (e.g., the focal dihedral), as this dihedral has yet to be scored. In order to make the graph generator more robust to imperfect rotatable bond scoring at generation time, during training we perturb the dihedrals of each rotatable bond in the partially generated structure M′l by δψ ∼ N(µ = 0°, σ = 15°) while fixing the coordinates of the focus. For the rotatable bond scorer, we train for 2M iterations (batches), with a maximum batch size of 32 (focal bonds).
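A minimal sketch (our illustration) of assembling one scorer batch as described above: the ground-truth rotation angle plus 9 randomly sampled others from the 36 evenly spaced query dihedrals in [−π, π). The ground-truth index here is hypothetical.

import numpy as np

rng = np.random.default_rng(0)
angles = np.linspace(-np.pi, np.pi, 36, endpoint=False)  # 36 query dihedrals
gt_idx = 17                                              # hypothetical ground truth
others = rng.choice(np.delete(np.arange(36), gt_idx), size=9, replace=False)
batch_angles = angles[np.concatenate([[gt_idx], others])]  # 10 angles per batch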
1. What is the focus of the paper regarding shape-conditioned 3D molecule generation? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its novelty and effectiveness in exploring new objectives or RL methods? 3. Do you have any concerns regarding the model's ability to generate new molecules rather than combining existing models? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

The paper studies shape-conditioned 3D molecule generation, which aims to generate molecules with a desirable shape. The authors propose an encoder-decoder architecture, where the encoder can encode both a molecular graph representation and a molecular shape representation based on point clouds. Specifically, the main difference between the proposed model and existing molecular generative models is that it additionally takes shape (point clouds) as model input. Experiments show that the model enables the shape-constrained generation and optimization of molecules.

Strengths And Weaknesses

Strengths:
The tackled problem is new for the "ML for molecules" community and hasn't been explored in previous literature.
The chemical background and challenges of this task are well explained, making the content friendly to the general ML audience.
The authors conduct comprehensive experiments and ablations to justify many different design choices.

Weaknesses:
Overall, the paper is a little hard to follow. As in Fig. 2 and Sec. 3, the authors introduce too many intermediate variables within the neural network parameterizations, with just minor differences in subscripts. This is not very informative and makes the content less coherent. I suggest that, for the methodology part, the authors simply summarize the parameterization of the individual encoders and decoders; in addition, key parts such as the objective function and the generation tree should be made clearer and self-contained in the paper.
As stated in the problem definition in Sec. 3, the authors claim to generate molecules with a "similar shape" and a "low-similarity molecular representation". However, I think the proposed model overall just learns to generate existing molecules, without new objectives or RL methods to explicitly encourage exploration. The major difference is taking shape as an additional input. However, as shown in Appendix A.9, the performance improvement from taking shapes as inputs is not significant.
Furthermore, the model is effectively a combination of "molecular graph generative models" and "torsion scoring models". This suggests that existing molecular graph generation VAE models should also be taken as baselines, such as [1]. For example, one could input MS into JT-VAE's encoder and test whether the decoder-generated molecules achieve results competitive with your model.

[1] Jin, Wengong, Regina Barzilay, and Tommi Jaakkola. "Junction tree variational autoencoder for molecular graph generation." In International Conference on Machine Learning, pp. 2323-2332. PMLR, 2018.

Clarity, Quality, Novelty And Reproducibility

The content can be understood with effort, but clarity can still be significantly improved; see Weakness 1 for details. The quality is pretty good: the authors provide extensive content for both methods and experiments. The tackled task is new, but the methodological novelty is limited from an ML perspective; see Weaknesses 2 and 3 for details. The authors provide experimental setups in the paper and also submit their code. Reproducibility is pretty good.
ICLR
Title
Equivariant Shape-Conditioned Generation of 3D Molecules for Ligand-Based Drug Design

Abstract
Shape-based virtual screening is widely used in ligand-based drug design to search chemical libraries for molecules with similar 3D shapes yet novel 2D graph structures compared to known ligands. 3D deep generative models can potentially automate this exploration of shape-conditioned 3D chemical space; however, no existing models can reliably generate geometrically realistic drug-like molecules in conformations with a specific shape. We introduce a new multimodal 3D generative model that enables shape-conditioned 3D molecular design by equivariantly encoding molecular shape and variationally encoding chemical identity. We ensure local geometric and chemical validity of generated molecules by using autoregressive fragment-based generation with heuristic bonding geometries, allowing the model to prioritize the scoring of rotatable bonds to best align the growing conformation to the target shape. We evaluate our 3D generative model in tasks relevant to drug design including shape-conditioned generation of chemically diverse molecular structures and shape-constrained molecular property optimization, demonstrating its utility over virtual screening of enumerated libraries.

1 INTRODUCTION

Generative models for de novo molecular generation have revolutionized computer-aided drug design (CADD) by enabling efficient exploration of chemical space, goal-directed molecular optimization (MO), and automated creation of virtual chemical libraries (Segler et al., 2018; Meyers et al., 2021; Huang et al., 2021; Wang et al., 2022; Du et al., 2022; Bilodeau et al., 2022). Recently, several 3D generative models have been proposed to directly generate low-energy or (bio)active molecular conformations using 3D convolutional networks (CNNs) (Ragoza et al., 2020), reinforcement learning (RL) (Simm et al., 2020a;b), autoregressive generators (Gebauer et al., 2022; Luo & Ji, 2022), or diffusion models (Hoogeboom et al., 2022). These methods have especially enjoyed accelerated development for structure-based drug design (SBDD), where models are trained to generate drug-like molecules in favorable binding poses inside an explicit protein pocket (Drotár et al., 2021; Luo et al., 2022; Liu et al., 2022; Ragoza et al., 2022). However, SBDD requires atomically-resolved structures of a protein target, assumes knowledge of binding sites, and often ignores dynamic pocket flexibility, rendering these methods less effective in many CADD settings.

Ligand-based drug design (LBDD) does not assume knowledge of protein structure. Instead, molecules are compared against previously identified “actives” on the basis of 3D pharmacophore or 3D shape similarity under the principle that molecules with similar structures should share similar activity (Vázquez et al., 2020; Cleves & Jain, 2020). In particular, ROCS (Rapid Overlay of Chemical Structures) is commonly used as a shape-based virtual screening tool to identify molecules with similar shapes to a reference inhibitor and has shown promising results for scaffold-hopping tasks (Rush et al., 2005; Hawkins et al., 2007; Nicholls et al., 2010). However, virtual screening relies on enumeration of chemical libraries, fundamentally restricting its ability to probe new chemical space.
Here, we consider the novel task of generating chemically diverse 3D molecular structures conditioned on a molecular shape, thereby facilitating the shape-conditioned exploration of chemical space without the limitations of virtual screening (Fig. 1). Importantly, shape-conditioned 3D molecular generation presents unique challenges not encountered in typical 2D generative models:

Challenge 1. 3D shape-based LBDD involves pairwise comparisons between two arbitrary conformations of arbitrary molecules. Whereas traditional property-conditioned generative models or MO algorithms shift learned data distributions to optimize a single scalar property, a shape-conditioned generative model must generate molecules adopting any reasonable shape encoded by the model.

Challenge 2. Shape similarity metrics that compute volume overlaps between two molecules (e.g., ROCS) require the molecules to be aligned in 3D space. Unlike 2D similarity, the computed shape similarity between the two molecules will change if one of the structures is rotated. This subtly impacts the learning problem: if the model encodes the target 3D shape into an SE(3)-invariant representation, the model must learn how the generated molecule would fit the target shape under the implicit action of an SE(3)-alignment. Alternatively, if the model can natively generate an aligned structure, then the model can more easily learn to construct molecules that fit the target shape.

Challenge 3. A molecule’s 2D graph topology and 3D shape are highly dependent; small changes in the graph can strikingly alter the shapes accessible to a molecule. It is thus unlikely that a generative model will reliably generate chemically diverse molecules with similar shapes to an encoded target without 1) simultaneous graph and coordinate generation; and 2) explicit shape-conditioning.

Challenge 4. The distribution of shapes a drug-like molecule can adopt is chiefly influenced by rotatable bonds, the foremost source of molecular flexibility. However, existing 3D generative models are mainly developed using tiny molecules (e.g., fewer than 10 heavy atoms), and cannot generate flexible drug-like molecules while maintaining chemical validity (satisfying valencies), geometric validity (non-distorted bond distances and angles; no steric clashes), and chemical diversity.

To surmount these challenges, we design a new generative model, SQUID (Shape-Conditioned Equivariant Generator for Drug-Like Molecules), to enable the shape-conditioned generation of chemically diverse molecules in 3D. Our contributions are as follows:

• Given a 3D molecule with a target shape, we use equivariant point cloud networks to encode the shape into (rotationally) equivariant features. We then use graph neural networks (GNNs) to variationally encode chemical identity into invariant features. By mixing chemical features with equivariant shape features, we can generate diverse molecules in aligned poses that fit the shape.

• We develop a sequential fragment-based 3D generation procedure that fixes local bond lengths and angles to prioritize the scoring of rotatable bonds. By massively simplifying 3D coordinate generation, we generate drug-like molecules while maintaining chemical and geometric validity.

• We design a rotatable bond scoring network that learns how local bond rotations affect global shape, enabling our decoder to generate 3D conformations that best fit the target shape.
We evaluate the utility of SQUID over virtual screening in shape-conditioned 3D molecular design tasks that mimic ligand-based drug design objectives, including shape-conditioned generation of diverse 3D structures and shape-constrained molecular optimization. To inspire further research, we note that our tasks could also be approached with a hypothetical 3D generative model that disentangles latent variables controlling 2D chemical identity and 3D shape, thus enabling zero-shot generation of topologically distinct molecules with similar shapes to any encoded target.

2 RELATED WORK

Fragment-based molecular generation. Seminal works in autoregressive molecular generation applied language models to generate 1D SMILES strings character-by-character (Gómez-Bombarelli et al., 2018; Segler et al., 2018), or GNNs to generate 2D molecular graphs atom-by-atom (Liu et al., 2018; Simonovsky & Komodakis, 2018; Li et al., 2018). Recent works construct molecules fragment-by-fragment to improve the chemical validity of intermediate graphs and to scale generation to larger molecules (Podda et al., 2020; Jin et al., 2019; 2020). Our fragment-based decoder is related to MoLeR (Maziarz et al., 2022), which iteratively generates molecules by selecting a new fragment (or atom) to add to the partial graph, choosing attachment sites on the new fragment, and predicting new bonds to the partial graph. Yet, MoLeR only generates 2D graphs; we generate 3D molecular structures. Beyond 2D generation, Flam-Shepherd et al. (2022) use an RL agent to generate 3D molecules by sampling and connecting molecular fragments. However, they sample from a small multiset of fragments, restricting the accessible chemical space. Powers et al. (2022) use fragments to generate 3D molecules inside a protein pocket, but only consider 7 distinct rings.

Generation of drug-like molecules in 3D. In this work, we generate novel drug-like 3D molecular structures in free space, e.g., not conformers given a known molecular graph (Ganea et al., 2021; Jing et al., 2022). Myriad models have been proposed to generate small 3D molecules, such as E(3)-equivariant normalizing flows and diffusion models (Satorras et al., 2022a; Hoogeboom et al., 2022), RL agents with an SE(3)-covariant action space (Simm et al., 2020b), and autoregressive generators that build molecules atom-by-atom with SE(3)-invariant internal coordinates (Luo & Ji, 2022; Gebauer et al., 2022). However, fewer 3D generative models can generate larger drug-like molecules for realistic chemical design tasks. Of these, Hoogeboom et al. (2022) and Arcidiacono & Koes (2021) fail to generate chemically valid molecules, while Ragoza et al. (2020) rely on postprocessing and geometry relaxation to extract stable molecules from their generated atom density grids. Only Roney et al. (2021) and Li et al. (2021), who develop autoregressive generators that simultaneously predict graph structure and internal coordinates, have been shown to reliably generate valid drug-like molecules. We also couple graph generation with 3D coordinate prediction; however, we employ fragment-based generation with fixed local geometries to ensure local chemical and geometric validity. Further, we focus on shape-conditioned molecular design; none of these works can natively address the aforementioned challenges posed by shape-conditioned molecular generation.

Shape-conditioned molecular generation.
Other works partially address shape-conditioned 3D molecular generation. Skalic et al. (2019) and Imrie et al. (2021) train networks to generate 1D SMILES strings or 2D molecular graphs conditioned on CNN encodings of 3D pharmacophores. However, they do not generate 3D structures, and the CNNs do not respect Euclidean symmetries. Zheng et al. (2021) use supervised molecule-to-molecule translation on SMILES strings for scaffold-hopping tasks, but do not generate 3D structures. Papadopoulos et al. (2021) use REINVENT (Olivecrona et al., 2017) on SMILES strings to propose molecules whose conformers are shape-similar to a target, but they must re-optimize the agent for each target shape. Roney et al. (2021) fine-tune a 3D generative model on the hits of a ROCS virtual screen of > 10^10 drug-like molecules to shift the learned distribution towards a target shape. Yet, this expensive screening approach must be repeated for each new target. Instead, we seek to achieve zero-shot generation of 3D molecules with similar shapes to any encoded shape, without requiring fine-tuning or post facto optimization.

Equivariant geometric deep learning on point clouds. Various equivariant networks have been designed to encode point clouds for updating coordinates in R3 (Satorras et al., 2022b), predicting tensorial properties (Thomas et al., 2018), or modeling 3D structures natively in Cartesian space (Fuchs et al., 2020). Especially noteworthy are architectures which lift scalar neuron features to vector features in R3 and employ simple operations to mix invariant and equivariant features without relying on expensive higher-order tensor products or Clebsch-Gordan coefficients (Deng et al., 2021; Jing et al., 2021). In this work, we employ Deng et al. (2021)’s Vector Neurons (VN)-based equivariant point cloud encoder, VN-DGCNN, to encode molecules into equivariant latent representations in order to generate molecules which are natively aligned to the target shape. Two recent works also employ VN operations for structure-based drug design and linker design (Peng et al., 2022; Huang et al., 2022). Huang et al. (2022) also build molecules in free space; however, they generate just a few atoms to connect existing fragments and do not condition on molecular shape.

3 METHODOLOGY

Problem definition. We model a conditional distribution P(M|S) over 3D molecules $M = (G, \mathcal{G})$ with graph $G$ and atomic coordinates $\mathcal{G} = \{r_a \in \mathbb{R}^3\}$ given a 3D molecular shape S. Specifically, we aim to sample molecules M′ ∼ P(M|S) with high shape similarity (simS(M′,MS) ≈ 1) and low graph (chemical) similarity (simG(M′,MS) < 1) to a target molecule MS with shape S. This scheme differs from 1) typical 3D generative models that learn P(M) without modeling P(M|S), and from 2) shape-conditioned 1D/2D generators that attempt to model P(G|S), the distribution of molecular graphs that could adopt shape S, but do not actually generate specific 3D conformations.

We define graph (chemical) similarity simG ∈ [0, 1] between two molecules as the Tanimoto similarity computed by RDKit with default settings (2048-bit fingerprints). We define shape similarity $\mathrm{sim}^*_S \in [0, 1]$ using Gaussian descriptions of molecular shape, modeling atoms a ∈ MA and b ∈ MB from molecules MA and MB as isotropic Gaussians in R3 (Grant & Pickup, 1995; Grant et al., 1996). We compute $\mathrm{sim}^*_S$ using (2-body) volume overlaps between atom-centered Gaussians:

$$\mathrm{sim}^*_S(\mathcal{G}_A, \mathcal{G}_B) = \frac{V_{AB}}{V_{AA} + V_{BB} - V_{AB}}; \quad V_{AB} = \sum_{a \in A,\, b \in B} V_{ab}; \quad V_{ab} \propto \exp\left(-\frac{\alpha}{2}\,||r_a - r_b||^2\right), \qquad (1)$$

where α controls the Gaussian width.
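A minimal NumPy sketch (our illustration, not the authors' implementation) of Eq. 1: Gaussian volume-overlap shape similarity between two coordinate sets, without alignment. The shared proportionality constant on V_ab cancels in the ratio, so it is omitted.

import numpy as np

def overlap(Ra, Rb, alpha=0.81):
    """Sum of pairwise Gaussian overlaps, up to a shared constant."""
    d2 = ((Ra[:, None, :] - Rb[None, :, :]) ** 2).sum(-1)  # ||r_a - r_b||^2
    return np.exp(-0.5 * alpha * d2).sum()

def sim_s(Ra, Rb, alpha=0.81):
    Vab = overlap(Ra, Rb, alpha)
    Vaa, Vbb = overlap(Ra, Ra, alpha), overlap(Rb, Rb, alpha)
    return Vab / (Vaa + Vbb - Vab)

Ra = np.random.randn(10, 3)   # toy heavy-atom coordinates
print(sim_s(Ra, Ra))          # identical structures -> 1.0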
Setting α = 0.81 approximates the shape similarity function used by the ROCS program (App. A.6). $\mathrm{sim}^*_S$ is sensitive to SE(3) transformations of molecule MA with respect to molecule MB. Thus, we define $\mathrm{sim}_S(M_A, M_B) = \max_{R,t} \mathrm{sim}^*_S(\mathcal{G}_A R + t, \mathcal{G}_B)$ as the shape similarity when MA is optimally aligned to MB. We perform such alignments with ROCS.

Approach. At a high level, we model P(M|S) with an encoder-decoder architecture. Given a molecule $M_S = (G_S, \mathcal{G}_S)$ with shape S, we encode S (a point cloud) into equivariant features. We then variationally encode GS into atomic features, conditioned on the shape features. We then mix these shape and atom features to pass global SE(3)-{in,equi}variant latent codes to the decoder, which samples new molecules from P(M|S). We autoregressively generate molecules by factoring $P(M|S) = P(M_0|S)\,P(M_1|M_0, S) \cdots P(M|M_{n-1}, S)$, where each $M_l = (G_l, \mathcal{G}_l)$ is a partial molecule defined by a BFS traversal of a tree-representation of the molecular graph (Fig. 2). Tree-nodes denote either non-ring atoms or rigid (ring-containing) fragments, and tree-links denote acyclic (rotatable, double, or triple) bonds. We generate $M_{l+1}$ by growing the graph $G_{l+1}$ around a focus atom/fragment, and then predict $\mathcal{G}_{l+1}$ by scoring a query rotatable bond to best fit shape S.

Simplifying assumptions. (1) We ignore hydrogens and only consider heavy atoms, as is common in molecular generation. (2) We only consider molecules with fragments present in our fragment library to ensure that graph generation can be expressed as tree generation. (3) Rather than generating all coordinates, we use rigid fragments, fix bond distances, and set bond angles according to hybridization heuristics (App. A.8); this lets the model focus on scoring rotatable bonds to best fit the growing conformer to the encoded shape. (4) We seed generation with M0 (the root tree-node), restricted to be a small (3-6 atom) substructure from MS; hence, we only model P(M|S,M0).

3.1 ENCODER

Featurization. We construct a molecular graph G using atoms as nodes and bonds as edges. We featurize each node with the atomic mass; one-hot codes of atomic number, charge, and aromaticity; and one-hot codes of the number of single, double, aromatic, and triple bonds the atom forms (including bonds to implicit hydrogens). This helps us fix bond angles during generation (App. A.8). We featurize each edge with one-hot codes of bond order. We represent a shape S as a point cloud built by sampling $n_p$ points from each of $n_h$ atom-centered Gaussians with (adjustable) variance $\sigma_p^2$.

Fragment encoder. We also featurize each node with a learned embedding $f_i \in \mathbb{R}^{d_f}$ of the atom/fragment type to which that atom belongs, making each node “fragment-aware” (similar to MoLeR). In principle, fragments could be any rigid substructure with ≥ 2 atoms. Here, we specify fragments as ring-containing substructures without acyclic single bonds (Fig. 14). We construct a library Lf of atom/fragment types by extracting the top-k (k = 100) most frequent fragments from the dataset and adding these, along with each distinct atom type, to Lf (App. A.13). We then encode each atom/fragment in Lf with a simple GNN (App. A.12) to yield the global atom/fragment embeddings:

$$\left\{ f_i = \sum_a h^{(a)}_{f_i},\ \ \{h^{(a)}_{f_i}\} = \mathrm{GNN}_{L_f}(G_{f_i})\ \ \forall\, f_i \in L_f \right\},$$

where $h^{(a)}_{f_i}$ are per-atom features.
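A minimal sketch (our illustration) of building the molecular point cloud described in the Featurization paragraph: sample n_p points from an isotropic Gaussian centered on each heavy atom. The value of sigma_p here is a hypothetical placeholder for the adjustable variance.

import numpy as np

def molecular_point_cloud(coords, n_p=5, sigma_p=0.5, seed=0):
    """coords: (n_h, 3) heavy-atom positions -> (n_h * n_p, 3) point cloud."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma_p, size=(len(coords), n_p, 3))
    return (coords[:, None, :] + noise).reshape(-1, 3)

P_S = molecular_point_cloud(np.random.randn(27, 3))  # n_h = 27 -> (135, 3)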
Shape encoder. Given MS with $n_h$ heavy atoms, we use VN-DGCNN (App. A.11) to encode the molecular point cloud $P_S \in \mathbb{R}^{(n_h n_p) \times 3}$ into a set of equivariant per-point vector features $\tilde{X}_p \in \mathbb{R}^{(n_h n_p) \times q \times 3}$. We then locally mean-pool the $n_p$ equivariant features per atom:

$$\tilde{X}_p = \text{VN-DGCNN}(P_S); \quad \tilde{X} = \text{LocalPool}(\tilde{X}_p), \qquad (2)$$

where $\tilde{X} \in \mathbb{R}^{n_h \times q \times 3}$ are per-atom equivariant representations of the molecular shape. Because VN operations are SO(3)-equivariant, rotating the point cloud will rotate $\tilde{X}$: $\tilde{X}R = \text{LocalPool}(\text{VN-DGCNN}(P_S R))$. Although VN operations are strictly SO(3)-equivariant, we subtract the molecule’s centroid from the atomic coordinates prior to encoding, making $\tilde{X}$ effectively SE(3)-equivariant. Throughout this work, we denote SO(3)-equivariant vector features with tildes.

Variational graph encoder. To model P(M|S), we first use a GNN (App. A.12) to encode GS into learned atom embeddings $H = \{h^{(a)}\ \forall\, a \in G_S\}$. We condition the GNN on per-atom invariant shape features $X = \{x^{(a)}\} \in \mathbb{R}^{n_h \times 6q}$, which we form by passing $\tilde{X}$ through a VN-Inv (App. A.11):

$$H = \text{GNN}((H_0, X); G_S); \quad X = \text{VN-Inv}(\tilde{X}), \qquad (3)$$

where $H_0 \in \mathbb{R}^{n_h \times (d_a + d_f)}$ are the set of initial atom features concatenated with the learned fragment embeddings, $H \in \mathbb{R}^{n_h \times d_h}$, and $(\cdot, \cdot)$ denotes concatenation in the feature dimension. For each atom in MS, we then encode $h^{(a)}_\mu, h^{(a)}_{\log \sigma^2} = \text{MLP}(h^{(a)})$ and sample $h^{(a)}_{var} \sim N(h^{(a)}_\mu, h^{(a)}_\sigma)$:

$$H_{var} = \left\{ h^{(a)}_{var} = h^{(a)}_\mu + \epsilon^{(a)} \odot h^{(a)}_\sigma;\ \ h^{(a)}_\sigma = \exp\left(\tfrac{1}{2} h^{(a)}_{\log \sigma^2}\right)\ \ \forall\, a \in G_S \right\}, \qquad (4)$$

where $\epsilon^{(a)} \sim N(0, \mathbf{1}) \in \mathbb{R}^{d_h}$, $H_{var} \in \mathbb{R}^{n_h \times d_h}$, and ⊙ denotes elementwise multiplication. Here, the second argument of $N(\cdot, \cdot)$ is the standard deviation vector of the diagonal covariance matrix.

Mixing shape and variational features. The variational atom features $H_{var}$ are insensitive to rotations of S. However, we desire the decoder to construct molecules in poses that are natively aligned to S (Challenge 2). We achieve this by conditioning the decoder on an equivariant latent representation of P(M|S) that mixes both shape and chemical information. Specifically, we mix $H_{var}$ with $\tilde{X}$ by encoding each $h^{(a)}_{var} \in H_{var}$ into linear transformations, which are applied atom-wise to $\tilde{X}$. We then pass the mixed equivariant features through a separate VN-MLP (App. A.11):

$$\tilde{X}_{H_{var}} = \left\{ \text{VN-MLP}(W^{(a)}_H \tilde{X}^{(a)}, \tilde{X}^{(a)});\ \ W^{(a)}_H = \text{Reshape}(\text{MLP}(h^{(a)}_{var}))\ \ \forall\, a \in G_S \right\}, \qquad (5)$$

where $W^{(a)}_H \in \mathbb{R}^{q' \times q}$, $\tilde{X}^{(a)} \in \mathbb{R}^{q \times 3}$, and $\tilde{X}_{H_{var}} \in \mathbb{R}^{n_h \times d_z \times 3}$. This maintains equivariance since $W^{(a)}_H$ are rotationally invariant and $W^{(a)}_H(\tilde{X}^{(a)}R) = (W^{(a)}_H \tilde{X}^{(a)})R$ for a rotation R. Finally, we sum-pool the per-atom features in $\tilde{X}_{H_{var}}$ into a global equivariant representation $\tilde{Z} \in \mathbb{R}^{d_z \times 3}$. We also embed a global invariant representation $z \in \mathbb{R}^{d_z}$ by applying a VN-Inv to $\tilde{X}_{H_{var}}$, concatenating the output with $H_{var}$, passing through an MLP, and sum-pooling the resultant per-atom features:

$$\tilde{Z} = \sum_a \tilde{X}^{(a)}_{H_{var}}; \quad z = \sum_a \text{MLP}(x^{(a)}_{H_{var}}, h^{(a)}_{var}); \quad x^{(a)}_{H_{var}} = \text{VN-Inv}(\tilde{X}^{(a)}_{H_{var}}). \qquad (6)$$
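A minimal PyTorch sketch (our illustration, not the released implementation) of the per-atom reparameterization in Eq. 4; the dimensions are hypothetical.

import torch

h = torch.randn(27, 64)              # per-atom embeddings H, shape (n_h, d_h)
mlp = torch.nn.Linear(64, 2 * 64)    # predicts [h_mu, h_logvar] per atom
h_mu, h_logvar = mlp(h).chunk(2, dim=-1)
h_sigma = torch.exp(0.5 * h_logvar)                   # std from log-variance
h_var = h_mu + torch.randn_like(h_sigma) * h_sigma    # H_var, one row per atom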
3.2 DECODER

Given MS, we sample new molecules M′ ∼ P(M|S,M0) by encoding $P_S$ into equivariant shape features $\tilde{X}$, variationally sampling $h^{(a)}_{var}$ for each atom in MS, mixing $H_{var}$ with $\tilde{X}$, and passing the resultant $(\tilde{Z}, z)$ to the decoder. We seed generation with a small structure M0 (extracted from MS), and build M′ by sequentially generating larger structures $M'_{l+1}$ in a tree-like manner (Fig. 2). Specifically, we grow new atoms/fragments around a “focus” atom/fragment in $M'_l$, which is popped from a BFS queue. To generate $M'_{l+1}$ from $M'_l$ (e.g., grow the tree from the focus), we factor $P(M_{l+1}|M_l, S) = P(G_{l+1}|M_l, S)\,P(\mathcal{G}_{l+1}|G_{l+1}, M_l, S)$.

Given $(\tilde{Z}, z)$, we sample the new graph $G'_{l+1}$ by iteratively attaching a variable number C of new atoms/fragments (children tree-nodes) around the focus, yielding $G'^{(c)}_l$ for c = 1, ..., C, where $G'^{(C)}_l = G'_{l+1}$ and $G'^{(0)}_l = G'_l$. We then generate coordinates $\mathcal{G}'_{l+1}$ by scoring the (rotatable) bond between the focus and its parent tree-node. New bonds from the focus to its children are left unscored in $M'_{l+1}$ until the children become “in focus”.

Partial molecule encoder. Before bonding each new atom/fragment to the focus (or scoring bonds), we encode the partial molecule $M'^{(c-1)}_l$ with the same scheme as for MS (using a parallel encoder; Fig. 2), except we do not variationally embed H′.² Instead, we process H′ analogously to $H_{var}$. Further, in addition to globally pooling the per-atom embeddings to obtain $\tilde{Z}' = \sum_a \tilde{X}'^{(a)}_H$ and $z' = \sum_a x'^{(a)}_H$, we also selectively sum-pool the embeddings of the atom(s) in focus, yielding $\tilde{Z}'_{foc} = \sum_{a \in \text{focus}} \tilde{X}'^{(a)}_H$ and $z'_{foc} = \sum_{a \in \text{focus}} x'^{(a)}_H$. We then align the equivariant representations of $M'^{(c-1)}_l$ and MS by concatenating $\tilde{Z}$, $\tilde{Z}'$, $\tilde{Z} - \tilde{Z}'$, and $\tilde{Z}'_{foc}$ and passing these through a VN-MLP:

$$\tilde{Z}_{dec} = \text{VN-MLP}(\tilde{Z}, \tilde{Z}', \tilde{Z} - \tilde{Z}', \tilde{Z}'_{foc}). \qquad (7)$$

Note that $\tilde{Z}_{dec} \in \mathbb{R}^{q \times 3}$ is equivariant to rotations of the overall system $(M'^{(c-1)}_l, M_S)$. Finally, we form a global invariant feature $z_{dec} \in \mathbb{R}^{d_{dec}}$ to condition graph (or coordinate) generation:

$$z_{dec} = (\text{VN-Inv}(\tilde{Z}_{dec}), z, z', z - z', z'_{foc}). \qquad (8)$$

²We have dropped the (c) notation for clarity. However, each $z_{dec}$ is specific to each $(M'^{(c-1)}_l, M_S)$ system.

Graph generation. We factor $P(G_{l+1}|M_l, S)$ into a sequence of generation steps by which we iteratively connect children atoms/fragments to the focus until the network generates a (local) stop token. Fig. 2 sketches a generation sequence by which a new atom/fragment is attached to the focus, yielding $G'^{(c)}_l$ from $G'^{(c-1)}_l$. Given $z_{dec}$, the model first predicts whether to stop (local) generation via $p_\varnothing = \text{sigmoid}(\text{MLP}_\varnothing(z_{dec})) \in (0, 1)$. If $p_\varnothing \geq \tau_\varnothing$ (a threshold, App. A.16), we stop and proceed to bond scoring. Otherwise, we select which atom $a_{foc}$ on the focus (if multiple) to grow from:

$$p_{focus} = \text{softmax}(\{\text{MLP}_{focus}(z_{dec}, x'^{(a)}_H)\ \forall\, a \in \text{focus}\}). \qquad (9)$$

The decoder then predicts which atom/fragment $f_{next} \in L_f$ to connect to the focus next:

$$p_{next} = \text{softmax}(\{\text{MLP}_{next}(z_{dec}, x'^{(a_{foc})}_H, f_{f_i})\ \forall\, f_i \in L_f\}). \qquad (10)$$

If the selected $f_{next}$ is a fragment, we predict the attachment site $a_{site}$ on the fragment $G_{f_{next}}$:

$$p_{site} = \text{softmax}(\{\text{MLP}_{site}(z_{dec}, x'^{(a_{foc})}_H, f_{next}, h^{(a)}_{f_{next}})\ \forall\, a \in G_{f_{next}}\}), \qquad (11)$$

where $h^{(a)}_{f_{next}}$ are the encoded atom features for $G_{f_{next}}$. Lastly, we predict the bond order (1°, 2°, 3°) via $p_{bond} = \text{softmax}(\text{MLP}_{bond}(z_{dec}, x'^{(a_{foc})}_H, f_{next}, h^{(a_{site})}_{f_{next}}))$. We repeat this sequence of steps until $p_\varnothing \geq \tau_\varnothing$, yielding $G_{l+1}$. At each step, we greedily select the action after masking actions that violate known chemical valence rules. After each sequence, we bond a new atom or fragment to the focus, giving $G'^{(c)}_l$. If an atom, the atom’s position relative to the focus is fixed by heuristic bonding geometries (App. A.8). If a fragment, the position of the attachment site is fixed, but the dihedral of the new bond is yet unknown. Thus, in subsequent generation steps we only encode the attachment site and mask the remaining atoms in the new fragment until that fragment is “in focus” (Fig. 2). This means that prior to bond scoring, the rotation angle of the focus is random. To account for this when training (with teacher forcing), we randomize the focal dihedral when encoding each $M'^{(c-1)}_l$.
Scoring rotatable bonds. After sampling $G'_{l+1} \sim P(G_{l+1}|M'_l, S)$, we generate $\mathcal{G}'_{l+1}$ by scoring the rotation angle $\psi'_{l+1}$ of the bond connecting the focus to its parent node in the generation tree (Fig. 2). Since we ultimately seek to maximize $\text{sim}_S(M', M_S)$, we exploit the fact that our model generates shape-aligned structures to predict $\max_{\psi'_{l+2}, \psi'_{l+3}, \ldots} \text{sim}^*_S(\mathcal{G}'(\psi_{\text{foc}}), \mathcal{G}_S)$ for various query dihedrals $\psi'_{l+1} = \psi_{\text{foc}}$ of the focus rotatable bond in a supervised regression setting. Intuitively, the scorer is trained to predict how the choice of $\psi_{\text{foc}}$ affects the maximum possible shape similarity of the final molecule $M'$ to the target $M_S$ under an optimal policy. App. A.2 details how regression targets are computed. During generation, we sweep over each query $\psi_{\text{foc}} \in [-\pi, \pi)$, encode each resultant structure $M'^{(\psi_{\text{foc}})}_{l+1}$ into $z^{(\psi_{\text{foc}})}_{\text{dec, scorer}}$ (Footnote 3: We train the scorer independently from the graph generator, but with a parallel architecture. Hence, $z_{\text{dec}} \ne z_{\text{dec, scorer}}$. The main architectural difference between the two models (graph generator and scorer) is that we do not variationally encode $H_{\text{scorer}}$ into $H_{\text{var, scorer}}$, as we find it does not impact empirical performance.), and select the $\psi_{\text{foc}}$ that maximizes the predicted score:

$$\psi'_{l+1} = \operatorname*{argmax}_{\psi_{\text{foc}}} \; \text{sigmoid}(\text{MLP}_{\text{scorer}}(z^{(\psi_{\text{foc}})}_{\text{dec, scorer}})). \quad (12)$$

At generation time, we also score chirality by enumerating stereoisomers $G^{\chi}_{\text{foc}}$ of the focus and selecting the $(G^{\chi}_{\text{foc}}, \psi_{\text{foc}})$ pair that maximizes Eq. 12 (App. A.2).

Training. We supervise each step of graph generation with a multi-component loss function:

$$\mathcal{L}_{\text{graph-gen}} = \mathcal{L}_\varnothing + \mathcal{L}_{\text{focus}} + \mathcal{L}_{\text{next}} + \mathcal{L}_{\text{site}} + \mathcal{L}_{\text{bond}} + \beta_{\text{KL}} \mathcal{L}_{\text{KL}} + \beta_{\text{next-shape}} \mathcal{L}_{\text{next-shape}} + \beta_{\varnothing\text{-shape}} \mathcal{L}_{\varnothing\text{-shape}}. \quad (13)$$

$\mathcal{L}_\varnothing$, $\mathcal{L}_{\text{focus}}$, $\mathcal{L}_{\text{next}}$, and $\mathcal{L}_{\text{bond}}$ are standard cross-entropy losses. $\mathcal{L}_{\text{site}} = -\log(\sum_a p^{(a)}_{\text{site}} \, \mathbb{I}[c_a > 0])$ is a modified cross-entropy loss that accounts for symmetric attachment sites in the fragments $G_{f_i} \in L_f$, where $p^{(a)}_{\text{site}}$ are the predicted attachment-site probabilities and $c_a$ are multi-hot class probabilities. $\mathcal{L}_{\text{KL}}$ is the KL-divergence between the learned $N(h_\mu, h_\sigma)$ and the prior $N(0, \mathbf{1})$. We also employ two auxiliary losses, $\mathcal{L}_{\text{next-shape}}$ and $\mathcal{L}_{\varnothing\text{-shape}}$, in order to 1) help the generator distinguish between incorrect shape-similar (near-miss) vs. shape-dissimilar fragments, and 2) encourage the generator to generate structures that fill the entire target shape (App. A.10). We train the rotatable bond scorer separately from the generator with an MSE regression loss. See App. A.15 for training details.

4 EXPERIMENTS

Dataset. We train SQUID with drug-like molecules (up to $n_h = 27$ heavy atoms) from MOSES (Polykovskiy et al., 2020) using their train/test sets. $L_f$ includes 100 fragments extracted from the dataset and 24 atom types. We remove molecules that contain excluded fragments. For the remaining molecules, we generate a 3D conformer with RDKit, set acyclic bond distances to their empirical means, and fix acyclic bond angles using heuristic rules. While this 3D manipulation neglects distorted bonding geometries in real molecules, the global shapes are marginally impacted, and we may recover refined geometries without seriously altering the shape (App. A.8). The final dataset contains 1.3M 3D molecules, partitioned into 80/20 train/validation splits. The test set contains 147K 3D molecules.

In the following experiments, we only consider molecules $M_S$ for which we can extract a small (3-6 atom) 3D substructure $M_0$ containing a terminal atom, which we use to seed generation. In principle, $M_0$ could include larger structures from $M_S$, e.g., for scaffold-constrained tasks.
Here, we use the smallest substructures to ensure that the shape-conditioned generation tasks are not trivial.

Shape-conditioned generation of chemically diverse molecules. "Scaffold hopping", i.e., designing molecules with high 3D shape similarity but novel 2D graph topology compared to known inhibitors, is pursued in LBDD to develop chemical lead series, optimize drug activity, or evade intellectual property restrictions (Hu et al., 2017). We imitate this task by evaluating SQUID's ability to generate molecules $M'$ with high $\text{sim}_S(M', M_S)$ but low $\text{sim}_G(M', M_S)$. Specifically, for 1000 molecules $M_S$ with target shapes $S$ in the test set, we use SQUID to generate 50 molecules per $M_S$. To generate chemically diverse species, we linearly interpolate between the posterior $N(h_\mu, h_\sigma)$ and the prior $N(0, \mathbf{1})$, sampling each $h_{\text{var}} \sim N((1 - \lambda) h_\mu, (1 - \lambda) h_\sigma + \lambda \mathbf{1})$ using either $\lambda = 0.3$ or $\lambda = 1.0$ (the prior). We then filter the generated molecules to have $\text{sim}_G(M', M_S) < 0.7$, or $< 0.3$ to only evaluate molecules with substantial chemical differences compared to $M_S$. Of the filtered molecules, we randomly choose $N_{\max}$ samples and select the sample with the highest $\text{sim}_S(M', M_S)$.

Figure 3A plots distributions of $\text{sim}_S(M', M_S)$ between the selected molecules and their respective target shapes, using different sampling ($N_{\max} = 1, 20$) and filtering ($\text{sim}_G(M', M_S) < 0.7, 0.3$) schemes. We compare against analogously sampling random 3D molecules from the training set. Overall, SQUID generates diverse 3D molecules that are quantitatively enriched in shape similarity compared to molecules sampled from the dataset, particularly for $N_{\max} = 20$. Qualitatively, the molecules generated by SQUID have significantly more atoms which directly overlap with the atoms of $M_S$, even in cases where the computed shape similarity is comparable between SQUID-generated molecules and molecules sampled from the dataset (Fig. 3C). We quantitatively explore this observation in App. A.7. We also find that using $\lambda = 0.3$ yields greater $\text{sim}_S(M', M_S)$ than $\lambda = 1.0$, in part because using $\lambda = 0.3$ yields less chemically diverse molecules (Fig. 3B; Challenge 3). Even so, sampling $N_{\max} = 20$ molecules from the prior with $\text{sim}_G(M', M_S) < 0.3$ still yields more shape-similar molecules than sampling $N_{\max} = 500$ molecules from the dataset. We emphasize that 99% of samples from the prior are novel, 95% are unique, and 100% are chemically valid (App. A.4). Moreover, 87% of generated structures do not have any steric clashes (App. A.4), indicating that SQUID generates realistic 3D geometries of the flexible drug-like molecules.

Ablating equivariance. SQUID's success in 3D shape-conditioned molecular generation is partly attributable to SQUID aligning the generated structures to the target shape in equivariant feature space (Eq. 7), which enables SQUID to generate 3D structures that fit the target shape without having to implicitly learn how to align two structures in $\mathbb{R}^3$ (Challenge 2). We explicitly validate this design choice by setting $\tilde{Z} = 0$ in Eq. 7, which prevents the decoder from accessing the 3D orientation of $M_S$ during training/generation. As expected, ablating SQUID's equivariance reduces the enrichment in shape similarity (relative to the dataset baseline) by as much as 33% (App. A.9).

Shape-constrained molecular optimization. Scaffold hopping is often goal-directed, e.g., aiming to reduce toxicity or improve the bioactivity of a hit compound without altering its 3D shape.
We mimic this shape-constrained MO setting by applying SQUID to optimize objectives from GuacaMol (Brown et al., 2019) while preserving high shape similarity ($\text{sim}_S(M, M_S) \ge 0.85$) to various "hit" 3D molecules $M_S$ from the test set. This task differs considerably from typical MO tasks, which optimize objectives without constraining 3D shape and without generating 3D structures. To adapt SQUID to shape-constrained MO, we implement a genetic algorithm (App. A.5) that iteratively mutates the variational atom embeddings $H_{\text{var}}$ of encoded seed molecules ("hits") $M_S$ in order to generate 3D molecules $M^*$ with improved objective scores, but which still fit the shape of $M_S$. Table 1 reports the optimized top-1 scores across 6 objectives and 8 seed molecules $M_S$ (per objective, sampled from the test set), constrained such that $\text{sim}_S(M^*, M_S) \ge 0.85$. We compare against the score of $M_S$, as well as the (shape-constrained) top-1 score obtained by virtually screening (VS) our training dataset (>1M 3D molecules). Of the 8 seeds $M_S$ per objective, 3 were selected from top-scoring molecules to serve as hypothetical "hits", 3 were selected from top-scoring large molecules ($\ge 26$ heavy atoms), and 2 were randomly selected from all large molecules. In 40/48 tasks, SQUID improves the objective score of the seed $M_S$ while maintaining $\text{sim}_S(M^*, M_S) \ge 0.85$. Qualitatively, SQUID optimizes the objectives through chemical alterations such as adding/deleting individual atoms, switching bonding patterns, or replacing entire substructures, all while generating 3D structures that fit the target shape (App. A.5). In 29 of the 40 successful cases, SQUID (limited to 31K samples) surpasses the baseline of virtually screening 1M molecules, demonstrating the ability to efficiently explore new shape-constrained chemical space.

5 CONCLUSION

We designed a novel 3D generative model, SQUID, to enable shape-conditioned exploration of chemically diverse molecular space. SQUID generates realistic 3D geometries of larger molecules that are chemically valid, and uniquely exploits equivariant operations to construct conformations that fit a target 3D shape. We envision that our model, alongside future work, will advance creative shape-based drug design tasks such as 3D scaffold hopping and shape-constrained 3D ligand design.

REPRODUCIBILITY STATEMENT

We have taken care to facilitate the reproducibility of this work by detailing the precise architecture of SQUID throughout the main text; we also provide extensive details on training protocols, model parameters, and further evaluations in the Appendices. Our source code can be found at https://github.com/keiradams/SQUID. Beyond the model implementation, our code includes links to access our datasets, as well as scripts to process the training dataset, train the model, and evaluate our trained models across the shape-conditioned generation and shape-constrained optimization tasks described in this paper.

ETHICS STATEMENT

Advancing the shape-conditioned 3D generative modeling of drug-like molecules has the potential to accelerate pharmaceutical drug design, showing particular promise for drug discovery campaigns involving scaffold hopping, hit expansion, or the discovery of novel ligand analogues. However, such advancements could also be exploited for nefarious pharmaceutical research and harmful biological applications.

ACKNOWLEDGMENTS

This research was supported by the Office of Naval Research under grant number N00014-21-12195.
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 2141064. The authors acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing HPC resources that have contributed to the research results reported within this paper. The authors thank Rocío Mercado, Sam Goldman, Wenhao Gao, and Lagnajit Pattanaik for providing helpful suggestions regarding the content and presentation of this paper.

A APPENDIX

CONTENTS

A.1 Overview of definitions, terms, and notations
A.2 Scoring rotatable bonds and stereochemistry
A.3 Random examples of generated 3D molecules
A.4 Generation statistics
A.5 Shape-constrained molecular optimization
  A.5.1 Genetic algorithm
  A.5.2 Visualization of optimized molecules
A.6 Comparing simS to the ROCS scoring function
A.7 Exploring different values of α in simS
A.8 Heuristic bonding geometries and their impact on global shape
A.9 Ablating equivariance
A.10 Auxiliary training losses
A.11 Overview of Vector Neurons (VN) operations
A.12 Graph neural networks
A.13 Fragment library
A.14 Model parameters
A.15 Additional training details
A.16 Additional generation details
A.17 Relaxation of generated geometries
A.18 Comparison to LigDream (Skalic et al., 2019)

A.1 OVERVIEW OF DEFINITIONS, TERMS, AND NOTATIONS

A.2 SCORING ROTATABLE BONDS AND STEREOCHEMISTRY

Recall that our goal is to train the scorer to predict $\max_{\psi'_{l+2}, \psi'_{l+3}, \ldots} \text{sim}^*_S(\mathcal{G}'(\psi_{\text{foc}}), \mathcal{G}_S)$ for various query dihedrals $\psi'_{l+1} = \psi_{\text{foc}}$. That is, we wish to predict the maximum possible shape similarity of the final molecule $M'$ to $M_S$ when fixing $\psi'_{l+1} = \psi_{\text{foc}}$ and optimally rotating all the yet-to-be-scored (or generated) rotatable bond dihedrals $\psi'_{l+2}, \psi'_{l+3}, \ldots$ so as to maximize $\text{sim}^*_S(\mathcal{G}'(\psi_{\text{foc}}), \mathcal{G}_S)$.

Training. We train the scorer independently from the graph generator (with a parallel architecture) using a mean squared error loss between the predicted scores $\hat{s}^{(\psi_{\text{foc}})}_{\text{dec, scorer}} = \text{sigmoid}(\text{MLP}(z^{(\psi_{\text{foc}})}_{\text{dec, scorer}}))$ and the regression targets $s^{(\psi_{\text{foc}})}$ for $N_s$ different query dihedrals $\psi_{\text{foc}} \in [-\pi, \pi)$:

$$\mathcal{L}_{\text{scorer}} = \frac{1}{N_s} \sum_{i=1}^{N_s} \left( s^{(\psi^{(i)}_{\text{foc}})} - \hat{s}^{(\psi^{(i)}_{\text{foc}})}_{\text{dec, scorer}} \right)^2 \quad (14)$$

Computing regression targets. When training with teacher forcing ($M'_l = M_{S_l}$, $G' = G_S$), we compute regression targets $s_{\psi_{\text{foc}}} \approx \max_{\psi_{l+2}, \psi_{l+3}, \ldots} \text{sim}^*_S(\mathcal{G}'(\psi_{\text{foc}}), \mathcal{G}_S)$
by setting the focal dihedral $\psi_{l+1} = \psi_{\text{foc}}$, sampling $N_\psi$ conformations of the "future" graph $G_{T_{\text{foc}}}$ induced by the subtree $T_{\text{foc}}$ whose root (sub)tree-node is the focus, and computing

$$s_{\psi_{\text{foc}}} = \max_{i = 0, \ldots, N_\psi} \text{sim}^*_S(\mathcal{G}^{(i)}_{T_{\text{foc}}}, \mathcal{G}_{S_{T_{\text{foc}}}}; \alpha = 2.0).$$

Since we fix bonding geometries, we need only sample $N_\psi$ sets of dihedrals of the rotatable bonds in $G_{S_{T_{\text{foc}}}}$ to sample $N_\psi$ conformers, making this conformer enumeration very fast. Note that rather than using $\alpha = 0.81$ in these regression targets, we use $\alpha = 2.0$ to make the scorer more sensitive to shape differences (App. A.7). When computing regression targets, we use $N_\psi < 1800$ and select 36 (evenly spaced) $\psi_{\text{foc}} \in [-\pi, \pi)$ per rotatable bond. Figure 4 visualizes how regression targets are computed. App. A.15 contains further training specifics.

Scoring stereochemistry. At generation time, we also enumerate all possible stereoisomers of the focus (except cis/trans bonds) and score each stereoisomer separately, ultimately selecting the (stereoisomer, $\psi_{\text{foc}}$) pair that maximizes the predicted score. Figure 5 illustrates how we enumerate stereoisomers. Note that although we use the learned scoring function to score stereoisomerism at generation time, we do not explicitly train the scorer to score different stereoisomers.

Masking severe steric clashes. At generation time, we do not score any query dihedral $\psi_{\text{foc}}$ that causes a severe steric clash (an interatomic distance < 1 Å) with the existing partially generated structure (unless all query dihedrals cause a severe clash).

A.3 RANDOM EXAMPLES OF GENERATED 3D MOLECULES

Figures 6 and 7 show additional random examples of molecules generated by SQUID when sampling $N_{\max} = 1, 20$ molecules with $\text{sim}_G(M', M_S) < 0.7$ from the prior ($\lambda = 1.0$) or with $\lambda = 0.3$, and selecting the sample with the highest $\text{sim}_S(M', M_S)$. Note that the visualized poses of the generated conformers are those which are directly generated by SQUID; the generated conformers have not been explicitly aligned to $M_S$ (e.g., using ROCS). Even so, the conformers are (for the most part) aligned to $M_S$, since SQUID's equivariance enables the model to generate natively aligned structures.

It is apparent in these examples that using larger $N_{\max}$ yields molecules with significantly improved shape similarity to $M_S$, both qualitatively and quantitatively. This is in part caused by: 1) stochasticity in the variationally sampled atom embeddings $H_{\text{var}}$; 2) stochasticity in the input molecular point clouds, which are sampled from atom-centered isotropic Gaussians in $\mathbb{R}^3$; 3) sampling sets of variational atom embeddings that may not be entirely self-consistent (for instance, if we sample only one atom embedding that implicitly encodes a ring structure); and 4) the choice of $\tau_\varnothing$, the threshold for stopping local generation. While a small $\tau_\varnothing$ (we use $\tau_\varnothing = 0.01$) helps prevent the model from adding too many atoms or fragments around a single focus, a small $\tau_\varnothing$ can also lead to early (local) stoppage, yielding molecules that do not completely fill the target shape. By sampling more molecules (using larger $N_{\max}$), we have more chances to avoid these adverse random effects. Further work will attempt to improve the robustness of the encoding scheme and generation procedure in order to increase SQUID's overall sample efficiency.
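To make the sampling protocol used throughout these examples concrete, the following sketch implements the draw-filter-select loop (50 draws per target, filter on $\text{sim}_G$, randomly keep $N_{\max}$, return the most shape-similar sample). The model and similarity functions here are random stand-ins for illustration only:

```python
import random

# Stand-in for the real model and similarity computations (assumptions for
# illustration): each "molecule" is an opaque record carrying its graph and
# shape similarity to the target.
def sample_from_squid(target, lam):
    return {"sim_g": random.random(), "sim_s": random.random()}

def best_of_nmax(target, n_max=20, lam=0.3, sim_g_cutoff=0.7, n_draws=50):
    pool = [sample_from_squid(target, lam) for _ in range(n_draws)]
    filtered = [m for m in pool if m["sim_g"] < sim_g_cutoff]  # chemically distinct
    chosen = random.sample(filtered, min(n_max, len(filtered)))
    return max(chosen, key=lambda m: m["sim_s"]) if chosen else None
```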
A.4 GENERATION STATISTICS

Table 4 reports the percentage of molecules that are chemically valid, novel, and unique when sampling 50 molecules from the prior ($\lambda = 1.0$) for 1000 encoded molecules $M_S$ (e.g., target shapes) from the test set, yielding a total of 50K generated molecules. We define chemical validity as passing RDKit sanitization. Since we directly generate the molecular graph and mask actions which violate chemical valency, 100% of generated molecules are valid. We define novelty as the percentage of generated molecules whose molecular graphs are not present in the training data. We define uniqueness as the percentage of generated molecular graphs (of the 50K total) that are only generated once. For novelty and uniqueness calculations, we consider different stereoisomers to have the same molecular graph. We also report the percentage of generated 3D structures that have an apparent steric clash, defined to be a non-bonded interatomic distance below 2 Å.

When sampling from the prior ($\lambda = 1.0$), the average internal chemical similarity of the generated molecules is $0.26 \pm 0.04$. When sampling with $\lambda = 0.3$, the average internal chemical similarity is $0.32 \pm 0.07$. We define internal chemical similarity to be the average pairwise chemical similarity (Tanimoto fingerprint similarity) between molecules that are generated for the same target shape.

Table 5 reports the graph reconstruction accuracy when sampling 3D molecules from the posterior ($\lambda = 0.0$) for 1000 target molecules $M_S$ from the test set. We report the top-$k$ graph reconstruction accuracy (ignoring stereochemical differences) when sampling $k = 1$ molecule per encoded $M_S$, and when sampling $k = 20$ molecules per encoded $M_S$. Since we have intentionally trained SQUID inside a shape-conditioned variational autoencoder framework in order to generate chemically diverse molecules with similar 3D shapes, the significance of graph reconstruction accuracy is debatable in our setting. However, it is worth noting that the top-1 reconstruction accuracy is 16.3%, while the top-20 reconstruction accuracy is much higher (57.2%). This large difference is likely attributable to both stochasticity in the variational atom embeddings and stochasticity in the input 3D point clouds.

A.5 SHAPE-CONSTRAINED MOLECULAR OPTIMIZATION

A.5.1 GENETIC ALGORITHM

We adapt SQUID to shape-constrained molecular optimization by implementing a genetic algorithm on the variational atom embeddings $H_{\text{var}}$. Algorithm 1 details the exact optimization procedure. In summary, given the seed molecule $M_S$ with a target 3D shape and an initial substructure $M_0$ (which is contained by all generated molecules for a given $M_S$), we first generate an initial population of molecules $M'$ by repeatedly sampling $H_{\text{var}}$ for various interpolation factors $\lambda$, mixing these $H_{\text{var}}$ with the encoded shape features of $M_S$, and decoding new 3D molecules. We only add a generated molecule to the population if $\text{sim}_S(M', M_S) \ge \tau_S$ (we use $\tau_S = 0.75$), so that the GA does not overly explore regions of chemical space that have no chance of satisfying the ultimate constraint $\text{sim}_S(M', M_S) \ge 0.85$. After generating the initial population, we iteratively 1) select the top-scoring samples in the population, 2) cross the top-scoring $H_{\text{var}}$ in crossover events, 3) mutate the top and crossed $H_{\text{var}}$ by adding random noise, and 4) generate new molecules $M'$ for each mutated $H_{\text{var}}$. The final optimized molecule $M^*$ is the top-scoring generated molecule that satisfies the shape-similarity constraint $\text{sim}_S(M', M_S) \ge 0.85$.
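The interpolated sampling at the core of both the diverse-generation experiments and the GA's mutation step, $h_{\text{var}} \sim N((1-\lambda)h_\mu, (1-\lambda)h_\sigma + \lambda\mathbf{1})$, can be sketched as follows (shapes and inputs are placeholders, not encoder outputs):

```python
# Minimal sketch of prior-posterior interpolation over variational atom
# embeddings; lam = 0.0 recovers the posterior, lam = 1.0 the N(0, 1) prior.
import torch

n_h, d_h = 20, 64                         # placeholder atom count / feature size
h_mu         = torch.randn(n_h, d_h)      # stand-in for encoder means
h_log_sigma2 = torch.randn(n_h, d_h)      # stand-in for encoder log-variances
h_sigma = torch.exp(0.5 * h_log_sigma2)

def sample_h_var(lam):
    eps = torch.randn(n_h, d_h)
    return (1 - lam) * h_mu + eps * ((1 - lam) * h_sigma + lam)

# e.g., the GA's initial population sweeps over several interpolation factors.
population = [sample_h_var(lam) for lam in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0)]
```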
A.5.2 VISUALIZATION OF OPTIMIZED MOLECULES

Figure 8 visualizes the structures of the SQUID-optimized molecules $M^*$ and their respective seed molecules $M_S$ (e.g., the starting "hit" molecules with target shapes) for each of the optimization tasks which led to an improvement in the objective score. We also overlay the generated 3D conformations of $M^*$ on those of $M_S$, and report the objective scores for each $M^*$ and $M_S$.

Algorithm 1: Genetic algorithm for shape-constrained optimization with SQUID

```
Given: M_S with n_H heavy atoms, M_0, objective oracle O
Params: τ_S, τ_G, N_e, N_T, N_c          ▷ Defaults: τ_S = 0.75, τ_G = 0.95, N_e = 20, N_T = 20, N_c = 10

H_µ, H_σ = Encode(M_S)                   ▷ Encode target molecule
Initialize population P = {(M_S, H_µ)}
for λ ∈ [0.0, 0.2, 0.4, 0.6, 0.8, 1.0] do      ▷ Create initial population of (M', H_var)
    for i = 1, ..., 100 do
        Sample noise ε ∈ R^(n_H × d_h) ~ N(0, 1)
        H_var = (1 − λ) H_µ + ε ⊙ ((1 − λ) H_σ + λ1)   ▷ Mutate variational atom embeddings
        z, Z̃ = Encode(M_S; H_var)        ▷ Mix mutated chemical and shape information
        M' = Decode(M_0, z, Z̃)           ▷ Generate mutated molecule M'
        Compute sim_S(M', M_S)
        if sim_S(M', M_S) ≥ τ_S then      ▷ Add M' to population only if sim_S(M', M_S) is high
            Add (M', H_var) to P
        end if
    end for
end for
for e = 1, ..., N_e do                    ▷ For each evolution
    Construct P_sorted by sorting P by O(M)   ▷ Sort population by objective score, high to low
    Initialize T_M = {}, T_Hvar = {}
    for (M, H_var) ∈ P_sorted do          ▷ Collect top-N_T scoring (M', H_var)
        if (sim_G(M, M_T) < τ_G ∀ M_T ∈ T_M) and (|T_M| < N_T) then
            Add M to T_M; add H_var to T_Hvar
        end if
    end for
    Initialize T_C = {}
    for c = 1, ..., N_c do                ▷ Add crossovers to the set of top-scoring H_var
        Sample H_i ∈ T_Hvar, H_{j≠i} ∈ T_Hvar
        H_c = CROSS(H_i, H_j)             ▷ Cross by randomly swapping half of the atom embeddings
        Add H_c to T_C
    end for
    T_Hvar = T_Hvar ∪ T_C
    for H_var ∈ T_Hvar do                 ▷ Add to population
        for λ ∈ [0.0, 0.2, 0.4, 0.6, 0.8, 1.0] do
            for i = 1, ..., 10 do
                Sample noise ε ∈ R^(n_H × d_h) ~ N(0, 1)
                H'_var = (1 − λ) H_var + ε        ▷ Mutate variational atom embeddings
                z, Z̃ = Encode(M_S; H'_var)       ▷ Mix mutated chemical and shape information
                M' = Decode(M_0, z, Z̃)           ▷ Generate mutated molecule M'
                Compute sim_S(M', M_S)
                if sim_S(M', M_S) ≥ τ_S then      ▷ Add M' to P only if sim_S(M', M_S) is high
                    Add (M', H'_var) to P
                end if
            end for
        end for
    end for
end for
return M* = argmax_{M'∈P} O(M') subject to sim_S(M', M_S) ≥ 0.85
```

A.6 COMPARING SIMS TO THE ROCS SCORING FUNCTION

Our shape similarity function described in Equation 1 closely approximates the shape-only scoring function employed by ROCS when $\alpha = 0.81$. Figure 9 demonstrates the near-perfect correlation between our computed shape scores and those computed by ROCS for 50,000 shape comparisons, with a mean absolute error of 0.0016. Note that Equation 1 computes non-aligned shape similarity. We still employ ROCS to align the generated molecules $M'$ to the target molecule $M_S$ before computing their (aligned) shape similarity in our experiments. However, we do not require explicit alignment when training SQUID; we do not use the commercial ROCS program during training.

A.7 EXPLORING DIFFERENT VALUES OF $\alpha$ IN SIMS

Our analysis of shape similarity thus far has used Equation 1 with $\alpha = 0.81$ in order to recapitulate the shape similarity function used by ROCS, which is widely used in drug discovery. However, compared to randomly sampled molecules in the dataset, the molecules generated by SQUID qualitatively appear to do a significantly better job at fitting the target shape $S$ on an atom-by-atom basis, even if the computed shape similarities (with $\alpha = 0.81$) are comparable (see examples in Figure 3; the sketch below illustrates how $\alpha$ enters this kind of volume-overlap computation).
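The following rough sketch shows how a ROCS-style Gaussian volume-overlap similarity with a tunable $\alpha$ behaves. The exact form of Eq. 1 is not reproduced in this excerpt, so the first-order pairwise Gaussian overlaps and Tanimoto normalization below are an assumption for illustration only:

```python
# Assumed stand-in for Eq. 1: isotropic Gaussians of width ~1/alpha centered
# on atoms, pairwise first-order overlaps, Tanimoto-normalized.
import numpy as np

def gaussian_overlap(A, B, alpha):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    # Overlap integral of two unit-amplitude Gaussians exp(-alpha * |r - c|^2).
    return ((np.pi / (2 * alpha)) ** 1.5 * np.exp(-0.5 * alpha * d2)).sum()

def sim_s(A, B, alpha=0.81):
    vab = gaussian_overlap(A, B, alpha)
    return vab / (gaussian_overlap(A, A, alpha) + gaussian_overlap(B, B, alpha) - vab)

A = np.random.randn(10, 3)                 # placeholder "atom" coordinates
B = A + 0.3 * np.random.randn(10, 3)       # slightly misaligned copy
# Larger alpha typically yields a lower score for the same misalignment.
print(sim_s(A, B, alpha=0.81), sim_s(A, B, alpha=2.0))
```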
We quantify this observation by increasing the value of $\alpha$ when computing $\text{sim}_S(M', M_S; \alpha)$ for generated molecules $M'$, as $\alpha$ is inversely related to the width of the isotropic 3D Gaussians used in the volume overlap calculations in Equation 1. Intuitively, increasing $\alpha$ will more heavily penalize $\text{sim}_S$ if the atoms of $M'$ and $M_S$ do not perfectly align. Figure 10 plots the mean $\text{sim}_S(M, M_S; \alpha)$ for the most shape-similar molecule $M$ of $N_{\max}$ sampled molecules $M'$ for increasing values of $\alpha$. Averages are calculated over 1000 target molecules $M_S$ from the test set, and we only consider generated molecules for which $\text{sim}_G(M', M_S) < 0.7$. Crucially, the gap between the mean $\text{sim}_S(M, M_S; \alpha)$ obtained by generating molecules with SQUID vs. randomly sampling molecules from the dataset significantly widens with increasing $\alpha$. This effect is especially apparent when using SQUID with $\lambda = 0.3$ and $N_{\max} = 20$, although it can be observed with other generation strategies as well. Hence, SQUID does a much better job at generating (still chemically diverse) molecules that have significant atom-to-atom overlap with $M_S$.

A.8 HEURISTIC BONDING GEOMETRIES AND THEIR IMPACT ON GLOBAL SHAPE

In all molecules (dataset and generated) considered in this work, we fix acyclic bond distances to their empirical averages and set acyclic bond angles to heuristic values based on hybridization rules, in order to reduce the degrees of freedom in 3D coordinate generation. Here, we describe how we fix these bonding geometries and explore whether this local 3D structure manipulation significantly alters the global molecular shape.

Fixing bonding geometries. We fix acyclic bond distances by computing the mean bond distance between pairs of atom types across all the RDKit-generated conformers in our training set. After collecting these empirical mean values, we manually set each acyclic bond distance to its respective mean value for each conformer in our datasets. We set acyclic bond angles using simple hybridization rules. Specifically, sp3-hybridized atoms are assigned bond angles of 109.5°, sp2-hybridized atoms bond angles of 120°, and sp-hybridized atoms bond angles of 180°. We manually fix the acyclic bond angles to these heuristic values for all conformers in our datasets. We use RDKit to determine the hybridization state of each atom. During generation, occasionally the hybridization of certain atoms (N, O) may change once they are bonded to new neighbors. For instance, an sp3 nitrogen can become sp2 once bonded to an aromatic ring. We adjust bond angles on the fly in these edge cases.

Impact on global shape. Figure 11 plots the histogram of $\text{sim}_S(M_{\text{fixed}}, M_{\text{relaxed}})$ for 1000 test-set conformers $M_{\text{fixed}}$ whose bonding geometries have been fixed, against the original RDKit-generated conformers $M_{\text{relaxed}}$ with relaxed (true) bonding geometries. In the vast majority of cases, fixing the bonding geometries negligibly impacts the global shape of the 3D molecule ($\text{sim}_S(M_{\text{fixed}}, M_{\text{relaxed}}) \approx 1$). This is because the main factor influencing global molecular shape is the rotatable bonds (e.g., flexible dihedrals), which are not altered by fixing bond distances and angles.

Recovering refined bonding geometries. Even though fixing bond distances and angles only marginally impacts molecular shape, we still may wish to recover refined bonding geometries of the generated 3D molecules without altering the generated 3D shape.
We can accomplish this (to a first approximation) for generated molecules by creating a geometrically relaxed conformation of the generated molecular graph with RDKit, and then manually setting the dihedrals of the rotatable bonds in the relaxed conformer to match the corresponding dihedrals in the generated conformers. Importantly, if we perform this relaxation procedure for both the dataset molecules and the SQUID-generated molecules, the (relaxed) generated molecules still have significantly enriched shape similarity to the (relaxed) target shape compared to (relaxed) random molecules from the dataset (Fig. 12).

A.9 ABLATING EQUIVARIANCE

SQUID aligns the equivariant representations of the encoded target shape and the partially generated structures in order to generate 3D conformations that natively fit the target shape, without having to implicitly learn SE(3)-alignments (Challenge 2). We achieve this in Equation 7, where we mix the equivariant representations of $M_S$ and the partially generated structure $M'^{(c-1)}_l$. To empirically motivate this design choice, we ablate the equivariant alignment by setting $\tilde{Z} = 0$ in Eq. 7. We denote this ablated model as SQUID-NoEqui. Note that because we still pass the unablated invariant features $z$ to the decoder (Eq. 8), SQUID-NoEqui is still conditioned on the shape of $M_S$; the model simply no longer has access to any explicit information about the relative spatial orientation of $M'^{(c-1)}_l$ to $M_S$ (and thus must learn this spatial relationship from scratch).

As expected, ablating SQUID's equivariance significantly reduces SQUID's ability to generate chemically diverse molecules that fit the target shape. Figure 13 plots the distributions of $\text{sim}_S(M', M_S)$ for the best of $N_{\max}$ generated molecules with $\text{sim}_G(M', M_S) < 0.7$ or $0.3$ when using SQUID or SQUID-NoEqui. Crucially, the mean shape similarity when sampling with ($\lambda = 1.0$, $N_{\max} = 20$, $\text{sim}_G(M', M_S) < 0.7$) decreases from 0.828 (SQUID) to 0.805 (SQUID-NoEqui). When sampling with ($\lambda = 0.3$, $N_{\max} = 20$, $\text{sim}_G(M', M_S) < 0.7$), the mean shape similarity also decreases, from 0.879 (SQUID) to 0.839 (SQUID-NoEqui). Relative to the mean shape similarity of 0.758 achieved by sampling random molecules from the dataset ($N_{\max} = 20$, $\text{sim}_G(M', M_S) < 0.7$), this corresponds to a substantial 33% reduction in the shape-enrichment of SQUID-generated molecules.

Interestingly, sampling ($\lambda = 1.0$, $N_{\max} = 20$, $\text{sim}_G(M', M_S) < 0.7$) with SQUID-NoEqui still yields shape-enriched molecules compared to analogously sampling random molecules from the dataset (mean shape similarity of 0.805 vs. 0.758). This is because even without the equivariant feature alignment, SQUID-NoEqui still conditions molecular generation on the (invariant) encoding of the target shape $S$, and hence biases generation towards molecules which better fit the target shape (after alignment with ROCS).

A.10 AUXILIARY TRAINING LOSSES

We employ two auxiliary losses when training the graph generator in order to encourage the generated graphs to better fit the encoded target shape. The first auxiliary loss penalizes the graph generator if it adds an incorrect atom/fragment to the focus that is of significantly different size than the correct (ground truth) atom/fragment. We first compute a matrix $\Delta V_f \in \mathbb{R}^{|L_f| \times |L_f|}_{+}$ containing the (pairwise) volume differences between all atoms/fragments in the library $L_f$:

$$\Delta V^{(i,j)}_f = |v_{f_i} - v_{f_j}| \quad (15)$$

where $v_{f_i}$ is the volume of atom/fragment $f_i \in L_f$ (computed with RDKit).
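For reference, a minimal sketch of computing such fragment volumes, and the matrix $\Delta V_f$, with RDKit; two placeholder fragments stand in for the 100-fragment library:

```python
# Sketch: per-fragment volumes (which require embedded 3D conformers) and the
# pairwise absolute-difference matrix of Eq. 15. Fragment choices are placeholders.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

frags = [Chem.AddHs(Chem.MolFromSmiles(s)) for s in ("c1ccccc1", "C1CCNCC1")]
for m in frags:
    AllChem.EmbedMolecule(m, randomSeed=0)       # generate a 3D conformer
vols = np.array([AllChem.ComputeMolVolume(m) for m in frags])
delta_v = np.abs(vols[:, None] - vols[None, :])  # ΔV_f[i, j] = |v_i - v_j|
print(delta_v)
```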
We then compute the auxiliary loss $\mathcal{L}_{\text{next-shape}}$ as:

$$\mathcal{L}_{\text{next-shape}} = \frac{1}{|L_f|} \left( p_{\text{next}} \cdot \Delta V^{(g)}_f \right) \quad (16)$$

where $g$ is the index of the correct (ground truth) next atom/fragment $f_{\text{next, true}}$, $\Delta V^{(g)}_f$ is the $g$-th row of $\Delta V_f$, and $p_{\text{next}}$ are the predicted probabilities over the next atom/fragment types to be connected to the focus (see Eq. 10).

The second auxiliary loss penalizes the graph generator if it prematurely stops (local) generation, with larger penalties if the premature stop would result in larger portions of the (ground truth) graph not being generated. When predicting (local) stop tokens during graph generation (with teacher forcing), we compute the number of atoms in the subgraph induced by the subtree whose root tree-node is the next atom/fragment to be added to the focus (in the current generation sequence). We then multiply the predicted probability for the local stop token by this number of "future" atoms that would not be generated if a premature stop token were generated. Hence, if the correct action is to indeed stop generation around the focus, the penalty will be zero. However, if the correct action is to add a large fragment to the current focus but the generator predicts a stop token, the penalty will be large. Formally, we compute:

$$\mathcal{L}_{\varnothing\text{-shape}} = \begin{cases} p_\varnothing \, |G_{S_{T_{\text{next}}}}| & \text{if } p_{\varnothing,\text{true}} = 0 \\ 0 & \text{otherwise} \end{cases} \quad (17)$$

where $p_{\varnothing,\text{true}}$ is the ground-truth action for local stopping ($p_{\varnothing,\text{true}} = 0$ indicates that the correct action is to not stop local generation), and $G_{S_{T_{\text{next}}}}$ is the subgraph induced by the subtree whose root node is the next atom/fragment (to be generated) in the ground-truth molecular graph.

A.11 OVERVIEW OF VECTOR NEURONS (VN) OPERATIONS

In this work, we use Deng et al. (2021)'s VN-DGCNN to encode molecular point clouds into equivariant shape features. We also employ their general VN operations (VN-MLP, VN-Inv) during shape and chemical feature mixing. We refer readers to Deng et al. (2021) for a detailed description of these equivariant operations and models. Here, we briefly summarize some relevant VN operations for the reader's convenience.

VN-MLP. Vector neurons (VN) lift scalar neuron features to vector features in $\mathbb{R}^3$. Hence, instead of having features $x \in \mathbb{R}^q$, we have vector features $\tilde{X} \in \mathbb{R}^{q \times 3}$. While linear transformations are naturally equivariant to global rotations since $W(\tilde{X}R) = (W\tilde{X})R$ for a rotation matrix $R \in \mathbb{R}^{3 \times 3}$, Deng et al. (2021) construct a set of non-linear equivariant operations $\tilde{f}$ such that $\tilde{f}(\tilde{X}R) = \tilde{f}(\tilde{X})R$, thereby enabling natively equivariant network design. VN-MLPs combine linear transformations with equivariant activations. In this work, we use VN-LeakyReLU, which Deng et al. (2021) define as:

$$\text{VN-LeakyReLU}(\tilde{X}; \alpha) = \alpha \tilde{X} + (1 - \alpha)\,\text{VN-ReLU}(\tilde{X}) \quad (18)$$

where

$$\text{VN-ReLU}(\tilde{X}) = \left\{ \begin{array}{ll} \tilde{x} & \text{if } \tilde{x} \cdot \frac{\tilde{k}}{\|\tilde{k}\|} \ge 0 \\ \tilde{x} - \left(\tilde{x} \cdot \frac{\tilde{k}}{\|\tilde{k}\|}\right) \frac{\tilde{k}}{\|\tilde{k}\|} & \text{otherwise} \end{array} \right\} \; \forall \, \tilde{x} \in \tilde{X} \quad (19)$$

where $\tilde{k} = U\tilde{X}$ for a learnable weight matrix $U \in \mathbb{R}^{1 \times q}$, and $\tilde{x} \in \mathbb{R}^3$. By composing series of linear transformations and equivariant activations, VN-MLPs map $\tilde{X} \in \mathbb{R}^{q \times 3}$ to $\tilde{X}' \in \mathbb{R}^{q' \times 3}$ such that $\tilde{X}'R = \text{VN-MLP}(\tilde{X}R)$.

VN-Inv. Deng et al. (2021) also define learnable operations that map equivariant features $\tilde{X} \in \mathbb{R}^{q \times 3}$ to invariant features $x \in \mathbb{R}^{3q}$. In general, VN-Inv constructs invariant features by multiplying equivariant features $\tilde{X}$ with other equivariant features $\tilde{Y} \in \mathbb{R}^{3 \times 3}$:

$$\hat{X} = \tilde{X} \tilde{Y}^\top \quad (20)$$

The invariant features $\hat{X} \in \mathbb{R}^{q \times 3}$ can then be reshaped into standard invariant features $x \in \mathbb{R}^{3q}$. In our work, we slightly modify Deng et al. (2021)'s original formulation.
Given a set of equivariant features $\tilde{X} = \{\tilde{X}^{(i)}\} \in \mathbb{R}^{n \times q \times 3}$, we define a VN-Inv as:

$$\text{VN-Inv}(\tilde{X}) = X \quad (21)$$

where $X = \{x^{(i)}\} \in \mathbb{R}^{n \times 6q}$ and:

$$x^{(i)} = \text{Flatten}(\tilde{V}^{(i)} \tilde{T}_i^\top) \quad (22)$$

$$\tilde{V}^{(i)} = \begin{cases} (\tilde{X}^{(i)}, \sum_i \tilde{X}^{(i)}) & \text{if } n > 1 \\ \tilde{X}^{(i)} & \text{otherwise} \end{cases} \quad (23)$$

$$\tilde{T}_i = \text{VN-MLP}(\tilde{V}^{(i)}) \quad (24)$$

where $\tilde{T}_i \in \mathbb{R}^{3 \times 3}$, and $\tilde{V}^{(i)} \in \mathbb{R}^{2q \times 3}$ ($n > 1$) or $\tilde{V}^{(i)} \in \mathbb{R}^{q \times 3}$ ($n = 1$).

VN-DGCNN. Deng et al. (2021) introduce VN-DGCNN as an SO(3)-equivariant version of the Dynamic Graph Convolutional Neural Network (Wang et al., 2019). Given a point cloud $P \in \mathbb{R}^{n \times 3}$, VN-DGCNN uses (dynamic) equivariant edge convolutions to update equivariant per-point features:

$$\tilde{E}^{(t+1)}_{nm} = \text{VN-LeakyReLU}^{(t)}\!\left(\Theta^{(t)}(\tilde{X}^{(t)}_m - \tilde{X}^{(t)}_n) + \Phi^{(t)} \tilde{X}^{(t)}_n\right) \quad (25)$$

$$\tilde{X}^{(t+1)}_n = \sum_{m \in \text{KNN}_f(n)} \tilde{E}^{(t+1)}_{nm} \quad (26)$$

where $\text{KNN}_f(n)$ are the k-nearest neighbors of point $n$ in feature space, $\Phi^{(t)}$ and $\Theta^{(t)}$ are weight matrices, and $\tilde{X}^{(t)}_n \in \mathbb{R}^{q \times 3}$ are the per-point equivariant features.

A.12 GRAPH NEURAL NETWORKS

In this work, we employ graph neural networks (GNNs) to encode:

• each atom/fragment in the library $L_f$
• the target molecule $M_S$
• each partial molecular structure $M'^{(c)}_l$ during sequential graph generation
• the query structures $M'^{(\psi_{\text{foc}})}_{l+1}$ when scoring rotatable bonds

Our GNNs are loosely based upon a simple version of the EGNN (Satorras et al., 2022b). Given a molecular graph $G$ with atoms as nodes and bonds as edges, we use graph convolutional layers defined by the following:

$$m^{t+1}_{ij} = \phi^t_m\!\left(h^t_i, h^t_j, \|r_i - r_j\|^2, m^t_{ij}\right) \quad (27)$$

$$m^{t+1}_i = \sum_{j \in N(i)} m^{t+1}_{ij} \quad (28)$$

$$h^{(t=1)}_i = \phi^{(0)}_h(h^0_i, m^{(t=1)}_i) \quad (29)$$

$$h^{t+1}_i = \phi^t_h(h^t_i, m^{t+1}_i) + h^t_i \quad (t > 0) \quad (30)$$

where $h^t_i$ are the learned atom embeddings at each GNN layer, $m^t_{ij}$ are learned (directed) messages, $r_i \in \mathbb{R}^3$ are the coordinates of atom $i$, $N(i)$ is the set of 1-hop bonded neighbors of atom $i$, and each $\phi^t_m, \phi^t_h$ is an MLP. Note that $h^0_i$ are the initial atom features, and $m^0_{ij}$ are the initial bond features for the bond between atoms $i$ and $j$. In general, $m^t_{ij} \ne m^t_{ji}$ for $t > 0$, but here $m^0_{ij} = m^0_{ji}$. Note that since we only aggregate messages from directly bonded neighbors, $\|r_i - r_j\|$ only encodes bond distances, and does not encode any information about specific 3D conformations. Hence, our GNNs effectively only encode 2D chemical identity, as opposed to 3D shape.

A.13 FRAGMENT LIBRARY

Our atom/fragment library $L_f$ includes 100 distinct fragments (Fig. 14) and 24 unique atom types. The 100 fragments were selected as the top-100 most frequently occurring fragments in our training set. In this work, we specify fragments as ring-containing substructures that do not contain any acyclic single bonds. However, in principle fragments could be any (valid) chemical substructure. Note that we only use one (geometrically optimized) conformation per fragment, which is assumed to be rigid. Hence, in its current implementation, SQUID does not consider different ring conformations (e.g., boat vs. chair conformations of cyclohexane).

A.14 MODEL PARAMETERS

Parameter sharing. For both the graph generator and the rotatable bond scorer, the (variational) molecule encoder (in the Encoder, Fig. 2) and the partial molecule encoder (in the Decoder, Fig. 2) share the same fragment encoder ($L_f$-GNN), which is trained end-to-end with the rest of the model. Apart from the $L_f$-GNN, these encoders do not share any learnable parameters, despite having parallel architectures. The graph generator and the rotatable bond scorer are completely independent, and are trained separately.

Hyperparameters.
Tables 6 and 7 tabulate the set of hyperparameters used for SQUID across all the experiments conducted in this paper. Table 8 summarizes training and generation parameters, but we refer the reader to App. A.15 and A.16 for more detailed discussion of the training and generation protocols. Because of the large hyperparameter search space and long training times, we did not perform extensive hyperparameter optimizations. We manually tuned the learning rates and schedulers to maintain training stability, and we maxed out batch sizes given memory constraints. We set $\beta_{\varnothing\text{-shape}} = 10$ and $\beta_{\text{next-shape}} = 10$ to make the magnitudes of $\mathcal{L}_{\varnothing\text{-shape}}$ and $\mathcal{L}_{\text{next-shape}}$ comparable to the other loss components for graph generation. We slowly increase $\beta_{\text{KL}}$ over the course of training from $10^{-5}$ to a maximum of $10^{-1}$, which we found to provide a reasonable balance between $\mathcal{L}_{\text{KL}}$ and graph reconstruction.

Generation parameters.

A.15 ADDITIONAL TRAINING DETAILS

Dataset. We use molecules from MOSES (Polykovskiy et al., 2020) to train, validate, and test SQUID. Starting from the train/test sets provided by MOSES, we first generate an RDKit conformer for each molecule, and remove any molecules for which we cannot generate a conformer. Conformers are initially created with the ETKDG algorithm in RDKit, and then separately optimized for 200 iterations with the MMFF force field. We then fix the acyclic bond distances and bond angles for each conformer (App. A.8). Using the molecules from MOSES's train set, we then create the fragment library by extracting the top-100 most frequently occurring fragments (ring-containing substructures without acyclic bonds). We separately generate a 3D conformer for each distinct fragment, optimizing the fragment structures with MMFF for 1000 steps. Given these 100 fragments, we then remove all molecules from the train and test sets containing non-included fragments. From the filtered training set, we then extract 24 unique atom types, which we add to the atom/fragment library $L_f$. We remove any molecule in the test set that contains an atom type not included in these 24. Finally, we randomly split the (filtered) training set into separate training/validation splits. The training split contains 1,058,352 molecules, the validation split contains 264,589 molecules, and the test set contains 146,883 molecules. Each molecule has one conformer.

Collecting training data for graph generation and scoring. We individually supervise each step of autoregressive graph generation and use teacher forcing. We collect the ground-truth generation actions by representing each molecular graph as a tree whose root tree-node is either a terminal atom or a terminal fragment in the graph. A "terminal" atom is only bonded to one neighboring atom. A "terminal" fragment has only one acyclic (rotatable) bond to a neighboring atom/fragment. Starting from this terminal atom/fragment, we construct the molecule according to a breadth-first-search traversal of the generation tree (see Fig. 2); we break ties using RDKit's canonical atom ordering. We augment the data by enumerating all generation trees starting from each possible terminal atom/fragment in the molecule. For each rotatable bond in the generation trees, we collect regression targets for training the scorer by following the procedure outlined in App. A.2.

Batching. When training the graph generator, we batch together graph-generative actions which are part of the same generation sequence (e.g., generating $G'^{(c)}_l$ from $G'^{(c-1)}_l$).
Otherwise, generation sequences are treated independently. When training the rotatable bond scorer, we batch together different query dihedrals $\psi_{\text{foc}}$ of the same focal bond. Rather than scoring all 36 rotation angles in the same batch, we include the ground-truth rotation angle and randomly sample 9 of the 35 others to include in the batch. Within each batch (for both graph generation and scoring), all the encoded molecules $M_S$ are constrained to have the same number of atoms, and all the partial molecular structures $G'^{(c)}_l$ are constrained to have the same number of atoms. This restriction on batch composition is purely for convenience: the public implementation of VN-DGCNN from Deng et al. (2021) is designed to train on point clouds with the same number of points, and we construct point clouds by sampling a (fixed) $n_p$ points for each atom.

Training setup. We train the graph generator and the rotatable bond scorer separately. For the graph generator, we train for 2M iterations (batches), with a maximum batch size of 400 (generation sequences). We use the Adam optimizer with default parameters. We use an initial learning rate of $2.5 \times 10^{-4}$, which we exponentially decay by a factor of 0.9 every 50K iterations to a minimum of $5 \times 10^{-6}$. We weight the auxiliary losses by $\beta_{\text{next-shape}} = 10.0$ and $\beta_{\varnothing\text{-shape}} = 10.0$. We log-linearly increase $\beta_{\text{KL}}$ from $10^{-5}$ to $10^{-1}$ over the first 1M iterations, after which it remains constant at $10^{-1}$. For each generation sequence, we randomize the rotation angle of the bond connecting the focus to the rest of the partial graph (e.g., the focal dihedral), as this dihedral has yet to be scored. In order to make the graph generator more robust to imperfect rotatable bond scoring at generation time, during training we perturb the dihedrals of each rotatable bond in the partially generated structure $M'_l$ by $\delta\psi \sim N(\mu = 0°, \sigma = 15°)$ while fixing the coordinates of the focus. For the rotatable bond scorer, we train for 2M iterations (batches), with a maximum batch size of 32 (focal bonds).
1. What is the focus and contribution of the paper on de novo drug design?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and potential applications?
3. What are the weaknesses of the paper regarding its comparisons with other works and its technical advantages?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper

In the proposed work, the authors present a method for de novo drug design conditioned on specific pharmacophores. They do so by using an autoregressive model similar to MoLeR (Maziarz et al., 2021) and cleverly extending it to the 3-dimensional case with equivariant/invariant networks.

Strengths And Weaknesses

Strengths:
- Manuscript very well written.
- First approach of its kind, where the goal is to directly generate a 3D molecule end-to-end conditioned on a specific pharmacophore.
- Methodology shows potential for scaffold hopping.
- Accompanying code to facilitate method testing.

Weaknesses:
- Evaluation limited to the proposed method itself. While the proposed method is currently unique in the sense that it is fully end-to-end, the authors could have compared to other "multi-step" approaches such as the ones proposed by Skalic et al. (2019) or Imrie et al. (2021), even if the latter require additional conformation generation / alignment.
- From a technical point of view the model is novel, but its applicability advantage compared to previous approaches remains unclear.

Clarity, Quality, Novelty And Reproducibility

Clarity: Overall, the manuscript is well written and at a sufficient level of detail so that it can be replicated. The Appendix is also quite thorough in providing sufficient context for sections that, due to page limitations, could not be extended appropriately.

Quality and novelty: As mentioned in the strengths & weaknesses section, to my belief this is the first method of its kind for the task of equivariantly generating molecules end-to-end conditioned on a specific pharmacophore, and it therefore deserves publication. In terms of quality, I believe that the manuscript could benefit from additional evaluations, particularly considering that there are other more basic techniques (e.g., 3D CNN captioning networks) that have been used for this same task in the past.

Reproducibility: The authors have provided supplementary code to reproduce most of the results provided in the manuscript.
ICLR
Title
Redesigning the Classification Layer by Randomizing the Class Representation Vectors

Abstract
Neural image classification models typically consist of two components. The first is an image encoder, which is responsible for encoding a given raw image into a representative vector. The second is the classification component, which is often implemented by projecting the representative vector onto target class vectors. The target class vectors, along with the rest of the model parameters, are estimated so as to minimize the loss function. In this paper, we analyze how simple design choices for the classification layer affect the learning dynamics. We show that standard cross-entropy training implicitly captures visual similarities between different classes, which might deteriorate accuracy or even prevent some models from converging. We propose to draw the class vectors randomly and keep them fixed during training, thus invalidating the visual similarities encoded in these vectors. We analyze the effects of keeping the class vectors fixed and show that doing so can increase the inter-class separability, intra-class compactness, and the overall model accuracy, while maintaining robustness to image corruptions and the generalization of the learned concepts.

1 INTRODUCTION

Deep learning models have achieved breakthroughs in classification tasks, setting state-of-the-art results in various fields such as speech recognition (Chiu et al., 2018), natural language processing (Vaswani et al., 2017), and computer vision (Huang et al., 2017). In the image classification task, the most common approach to training the models is as follows: first, a convolutional neural network (CNN) is used to extract a representative vector, denoted here as the image representation vector (also known as the feature vector). Then, at the classification layer, this vector is projected onto a set of weight vectors of the different target classes to create the class scores, as depicted in Fig. 1. Last, a softmax function is applied to normalize the class scores. During training, the parameters of both the CNN and the classification layer are updated to minimize the cross-entropy loss. We refer to this procedure as the dot-product maximization approach, since such training ends up maximizing the dot-product between the image representation vector and the target weight vector.

Recently, it was demonstrated that despite the excellent performance of the dot-product maximization approach, it does not necessarily encourage discriminative learning of features, nor does it enforce intra-class compactness and inter-class separability (Liu et al., 2016; Wang et al., 2017; Liu et al., 2017). The intra-class compactness indicates how closely image representations from the same class relate to each other, whereas the inter-class separability indicates how far away image representations from different classes are. Several works have proposed different approaches to address these caveats (Liu et al., 2016; 2017; Wang et al., 2017; 2018b;a). One of the most effective yet most straightforward solutions is NormFace (Wang et al., 2017), where it was suggested to maximize the cosine-similarity between vectors by normalizing both the image and class vectors. However, the authors found that when directly maximizing the cosine-similarity, the models fail to converge, and hypothesized that the cause is the bounded range of the logits vector.
To allow convergence, the authors added a scaling factor that multiplies the logits vector. This approach has been widely adopted by multiple works (Wang et al., 2018b; Wojke & Bewley, 2018; Deng et al., 2019; Wang et al., 2018a; Fan et al., 2019). Here we refer to this approach as the cosine-similarity maximization approach.

This paper is focused on redesigning the classification layer and the role it plays when kept fixed during training. We show that the visual similarity between classes is implicitly captured by the class vectors when they are learned by maximizing either the dot-product or the cosine-similarity between the image representation vector and the class vectors. We then show that the class vectors of visually similar categories are close in their angle in the space. We investigate the effects of excluding the class vectors from training and simply drawing them randomly distributed over a hypersphere. We demonstrate that this process, which eliminates the visual similarities from the classification layer, boosts accuracy and improves the inter-class separability (using either dot-product maximization or cosine-similarity maximization). Moreover, we show that fixing the class representation vectors can solve the issues that prevent some cases from converging (under the cosine-similarity maximization approach), and can further increase the intra-class compactness. Last, we show that neither the generalization of the learned concepts nor the robustness to noise is influenced by ignoring the visual similarities encoded in the class vectors.

Recent work by Hoffer et al. (2018) suggested fixing the classification layer to allow increased computational and memory efficiency. The authors showed that the performance of models with a fixed classification layer is on par with, or slightly below (by up to 0.5% absolute accuracy), that of models with a non-fixed classification layer, while this technique allows a substantial reduction in the number of learned parameters. In that paper, the authors compared the performance of dot-product maximization models with a non-fixed classification layer against the performance of cosine-similarity maximization models with a fixed classification layer and an integrated scaling factor. Such a comparison might not indicate the benefits of fixing the classification layer, since the dot-product maximization is linear with respect to the image representation while the cosine-similarity maximization is not. In our paper, by contrast, we compare fixed against non-fixed dot-product maximization models, as well as fixed against non-fixed cosine-maximization models, and show that by fixing the classification layer the absolute accuracy can increase by up to 4%. Moreover, while cosine-maximization models were suggested to improve the intra-class compactness, we reveal that integrating a scaling factor that multiplies the logits decreases the intra-class compactness. We demonstrate that by fixing the classification layer in cosine-maximization models, the models can converge and achieve high performance without the scaling factor, and significantly improve their intra-class compactness.

The outline of this paper is as follows. In Sections 2 and 3, we formulate dot-product and cosine-similarity maximization models, respectively, and analyze the effects of fixing the class vectors. In Section 4, we describe the training procedure, compare the learning dynamics, and assess the generalization and robustness to corruptions of the evaluated models.
We conclude the paper in Section 5.

2 FIXED DOT-PRODUCT MAXIMIZATION

Assume an image classification task with $m$ possible classes. Denote the training set of $N$ examples by $S = \{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in X$ is the $i$-th instance and $y_i \in \{1, \ldots, m\}$ is the corresponding class. In image classification, a dot-product maximization model consists of two parts. The first is the image encoder, denoted as $f_\theta : X \to \mathbb{R}^d$, which is responsible for representing the input image as a $d$-dimensional vector $f_\theta(x) \in \mathbb{R}^d$, where $\theta$ is a set of learnable parameters. The second part of the model is the classification layer, which is composed of learnable parameters denoted as $W \in \mathbb{R}^{m \times d}$. The matrix $W$ can be viewed as $m$ vectors $w_1, \ldots, w_m$, where each vector $w_i \in \mathbb{R}^d$ can be considered the representation vector associated with the $i$-th class. For simplicity, we omit the bias terms and assume they can be included in $W$.

A consideration taken when designing the classification layer is the choice of operation applied between the matrix $W$ and the image representation vector $f_\theta(x)$. Most commonly, a dot-product operation is used, and the resulting vector is referred to as the logits vector. For training the models, a softmax operation is applied over the logits vector, and the result is given to a cross-entropy loss which should be minimized. That is,

$$\operatorname*{argmin}_{w_1, \ldots, w_m, \theta} \sum_{i=1}^{N} -\log \frac{e^{w_{y_i} \cdot f_\theta(x_i)}}{\sum_{j=1}^{m} e^{w_j \cdot f_\theta(x_i)}} = \operatorname*{argmin}_{w_1, \ldots, w_m, \theta} \sum_{i=1}^{N} -\log \frac{e^{\|w_{y_i}\| \|f_\theta(x_i)\| \cos(\alpha_{y_i})}}{\sum_{j=1}^{m} e^{\|w_j\| \|f_\theta(x_i)\| \cos(\alpha_j)}}, \quad (1)$$

where the equality holds since $w_{y_i} \cdot f_\theta(x_i) = \|w_{y_i}\| \|f_\theta(x_i)\| \cos(\alpha_{y_i})$, and $\alpha_k$ is the angle between the vectors $w_k$ and $f_\theta(x_i)$.

We trained three dot-product maximization models with different well-known CNN architectures over four datasets, varying in image size and number of classes, as described in detail in Section 4.1. Since these models optimize the dot-product between the image vector and its corresponding learnable class vector, we refer to them as non-fixed dot-product maximization models. Inspecting the matrix $W$ of the trained models reveals that visually similar classes have their corresponding class vectors close in space. On the left panel of Fig. 2, we plot the cosine-similarity between the class vectors learned by the non-fixed model trained on the STL-10 dataset. It can be seen that the vectors representing vehicles are relatively close to each other, and far away from vectors representing animals. Furthermore, when we inspect the class vectors of non-fixed models trained on CIFAR-100 (100 classes) and Tiny ImageNet (200 classes), we find even larger similarities between vectors due to the high visual similarities between classes, such as boy and girl or apple and orange. By placing the vectors of visually similar classes close to each other, the inter-class separability is decreased. Moreover, we find a strong Spearman correlation between the distance of class vectors and the number of misclassified examples. On the right panel of Fig. 2, we plot the cosine-similarity between two class vectors, $w_i$ and $w_j$, against the number of examples from category $i$ that were wrongly classified as category $j$. As shown in the figure, as the class vectors get closer in space, the number of misclassifications increases. In STL-10, CIFAR-10, CIFAR-100, and Tiny ImageNet, we find correlations of 0.82, 0.77, 0.61, and 0.79, respectively (note that all possible class pairs were considered in the computation of the correlation).
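A minimal sketch of this correlation analysis, with random placeholders standing in for a trained model's class vectors and confusion counts (and with the two misclassification directions of each class pair symmetrized, a simplification for illustration):

```python
import numpy as np
from scipy.stats import spearmanr

m, d = 10, 64
W = np.random.randn(m, d)                       # placeholder class vectors w_1..w_m
confusions = np.random.randint(0, 50, (m, m))   # placeholder confusion counts

Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
cos = Wn @ Wn.T                                 # pairwise cosine similarities

iu = np.triu_indices(m, k=1)                    # all distinct class pairs (i < j)
pair_sims = cos[iu]
pair_confusions = confusions[iu] + confusions.T[iu]  # i->j plus j->i errors
rho, _ = spearmanr(pair_sims, pair_confusions)
# Near zero for these random placeholders; 0.61-0.82 for the trained models
# reported in the text.
print(f"Spearman correlation: {rho:.2f}")
```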
These findings reveal that as two class vectors get closer in space, the confusion between the two corresponding classes increases. We examined whether the models benefit from the high angular similarities between the vectors. We trained the same models, but instead of learning the class vectors, we drew them randomly, normalized them ($\|w_j\| = 1$), and kept them fixed during training. We refer to these models as the fixed dot-product maximization models. Since the target vectors are initialized randomly, the cosine-similarity between vectors is low even for visually similar classes; see the middle panel of Fig. 2. Notice that with the class vectors and bias term fixed during training, the model can minimize the loss in Eq. 1 only by optimizing the vector $f_\theta(x_i)$. By fixing the class vectors, the prediction is influenced mainly by the angle between $f_\theta(x_i)$ and the fixed $w_{y_i}$, since the magnitude of $f_\theta(x_i)$ multiplies the scores of all classes and the magnitude of each class vector is equal and set to 1. Thus, the model is forced to optimize the angle of the image vector towards its randomized class vector.

Table 1 compares the classification accuracy of models with a fixed and a non-fixed classification layer. The results suggest that learning the matrix $W$ during training is not necessarily beneficial, and might reduce accuracy when the number of classes is high or when the classes are visually close. Additionally, we empirically found that models with fixed class vectors can be trained with a higher learning rate; due to space limitations, we report these results in the appendix (Table 7, Table 8, Table 9). By randomly drawing the class vectors, we ignore possible visual similarities between classes and force the models to minimize the loss by increasing the inter-class separability and encoding images from visually similar classes into vectors far apart in space; see Fig. 3.

3 FIXED COSINE-SIMILARITY MAXIMIZATION

Recently, cosine-similarity maximization models were proposed by Wang et al. (2017) for the face verification task. The authors maximized the cosine-similarity, rather than the dot-product, between the image vector and its corresponding class vector. That is,

$$\operatorname*{argmin}_{w_1, \ldots, w_m, \theta} \sum_{i=1}^{N} -\log \frac{e^{\cos(\alpha_{y_i})}}{\sum_{j=1}^{m} e^{\cos(\alpha_j)}} = \operatorname*{argmin}_{w_1, \ldots, w_m, \theta} \sum_{i=1}^{N} -\log \frac{\exp\!\left(\frac{w_{y_i} \cdot f_\theta(x_i)}{\|w_{y_i}\| \|f_\theta(x_i)\|}\right)}{\sum_{j=1}^{m} \exp\!\left(\frac{w_j \cdot f_\theta(x_i)}{\|w_j\| \|f_\theta(x_i)\|}\right)}. \quad (2)$$

Comparing the right-hand side of Eq. 2 with Eq. 1 shows that the cosine-similarity maximization model simply requires normalizing $f_\theta(x)$ and each of the class representation vectors $w_1, \ldots, w_m$ by dividing them by their $\ell_2$-norm during the forward pass. The main motivation for this reformulation is the ability to learn more discriminative features in face verification by encouraging intra-class compactness and enlarging the inter-class separability. The authors showed that dot-product maximization models learn a radial feature distribution; thus, the inter-class separability and intra-class compactness are not optimal (for more details, see the discussion in Wang et al. (2017)). However, the authors found that cosine-similarity maximization models as given in Eq. 2 fail to converge, and added a scaling factor $S \in \mathbb{R}$ to multiply the logits vector as follows:

$$\operatorname*{argmin}_{w_1, \ldots, w_m, \theta} \sum_{i=1}^{N} -\log \frac{e^{S \cdot \cos(\alpha_{y_i})}}{\sum_{j=1}^{m} e^{S \cdot \cos(\alpha_j)}}. \quad (3)$$

This reformulation achieves improved results for the face verification task, and many recent variants have also integrated the scaling factor $S$ for convergence when optimizing the cosine-similarity (Wang et al., 2018b; Wojke & Bewley, 2018; Deng et al., 2019; Wang et al., 2018a; Fan et al., 2019).
According to Wang et al. (2017), cosine-similarity maximization models fail to converge when S = 1 due to the low range of the logits vector, each entry of which is bounded in [−1, 1]. This low range prevents the predicted probabilities from getting close to 1 during training; as a result, the distribution over target classes stays close to uniform, and the loss is trapped at a very high value on the training set. Intuitively, this may sound like a reasonable explanation of why directly maximizing the cosine-similarity (S = 1) fails to converge. Note that even if an example is correctly classified and well separated, in the best case it achieves a cosine-similarity of 1 with its ground-truth class vector, while for all other classes the cosine-similarity is −1. Thus, for a classification task with m classes, the predicted probability for such an example would be:

$$P(Y = y_i \mid x_i) = \frac{e^{1}}{e^{1} + (m-1) \cdot e^{-1}} \quad (4)$$

Notice that for m = 200 classes, the predicted probability of this correctly classified example would be at most 0.035, and cannot be optimized further towards 1. As a result, the loss function yields a high value for a correctly classified example, even if its image vector points precisely in the direction of its ground-truth class vector.
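As a quick numeric sanity check of this bound (plain arithmetic, not code from the paper), the snippet below evaluates the best-case probability; the scale argument anticipates the scaled version in Eq. 5 below:

```python
import math

def best_case_prob(m: int, S: float = 1.0) -> float:
    """Best-case softmax probability when the cosine-similarity is 1 for
    the true class and -1 for the other m-1 classes (Eq. 4; Eq. 5 for S>1)."""
    return math.exp(S) / (math.exp(S) + (m - 1) * math.exp(-S))

print(best_case_prob(200))        # ~0.035: the loss stays high even when correct
print(best_case_prob(200, S=20))  # ~1.0: scaling lets the probability reach 1
```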
As in the previous section, we trained the same models over the same datasets, but instead of optimizing the dot-product we optimized the cosine-similarity by normalizing f_θ(x_i) and w_1, ..., w_m in the forward pass. We denote these models non-fixed cosine-similarity maximization models. Additionally, we trained the same cosine-similarity maximization models with fixed random class vectors, denoting them fixed cosine-similarity maximization models. In all models (fixed and non-fixed) we set S = 1 to directly maximize the cosine-similarity; the results are shown in Table 2. Surprisingly, we reveal that the low range of the logits vector is not what prevents cosine-similarity maximization models from converging. As the table shows, fixed cosine-maximization models achieve accuracy higher by up to 53% than that of non-fixed models. Moreover, fixed cosine-maximization models with S = 1 can also outperform dot-product maximization models. This finding demonstrates that although the logits are bounded in [−1, 1], the models can still learn high-quality representations and decision boundaries.

We further investigated the effect of S and, for comparison, trained the same fixed and non-fixed models, this time using grid-search for the best-performing value of S. As shown in Table 3, increasing the scaling factor S allows non-fixed models to achieve higher accuracies over all datasets. Yet, in terms of accuracy, there is no benefit in learning the class representation vectors rather than randomly drawing them and fixing them during training.

To better understand what prevents non-fixed cosine-maximization models from converging when S = 1, we compared these models with the same models trained with the optimal scalar S. For each model, we measured the distances between its learned class vectors and compared them to demonstrate the effect of S. Interestingly, we found that as S increases, the cosine-similarity between the class vectors decreases, meaning that increasing S pushes the class vectors further apart from each other. Compare, for example, the left and middle panels of Fig. 4, which show the cosine-similarity between the class vectors of models trained on STL-10 with S = 1 and S = 20, respectively.

On the right panel of Fig. 4, we plot the number of misclassifications as a function of the cosine-similarity between the class vectors of the non-fixed cosine-maximization model trained on STL-10 with S = 1. The confusion between classes is high when the angular distance between them is low. As in the previous section, we observed strong correlations between the closeness of the class vectors and the number of misclassifications: 0.85, 0.87, 0.81, and 0.83 for models trained on STL-10, CIFAR-10, CIFAR-100, and Tiny ImageNet, respectively.

By integrating the scaling factor S into Eq. 4 we get

$$P(Y = y_i \mid x_i) = \frac{e^{S \cdot 1}}{e^{S \cdot 1} + (m-1) \cdot e^{S \cdot (-1)}} \quad (5)$$

Note that increasing S increases the predicted probability in Eq. 5. This holds even when the cosine-similarity between f_θ(x_i) and w_{y_i} is less than 1. When S is set to a large value, the gaps between the logits grow, and the predicted probability after the softmax approaches 1. As a result, the model is discouraged from optimizing the cosine-similarity between the image representation and its ground-truth class vector towards 1, since the loss is already close to 0. In Table 4, we show that as S increases, the cosine-similarity between the image vectors and their predicted class vectors decreases.

These observations provide an explanation of why non-fixed models with S = 1 fail to converge. Setting S to a large scalar spreads the image vectors around their class vectors to a larger degree, preventing the class vectors from getting close to each other. As a result, the inter-class separability increases and the misclassification rate between visually similar classes decreases. In contrast, setting S = 1 allows models to place the class vectors of visually similar classes closer in space, which leads to a high number of misclassifications. However, a disadvantage of setting S to a large value is that the intra-class compactness is violated, since image vectors from the same class are spread out and encoded relatively far from each other; see Fig. 5.

Fixed cosine-maximization models successfully converge when S = 1, since the class vectors are far apart in space from the start. By randomly drawing the class vectors, the models are required to encode images from visually similar classes into vectors that are far apart in space; therefore, the inter-class separability is high. Additionally, the intra-class compactness is improved: since S can be set to 1, the models are encouraged to maximize the cosine-similarity towards 1 and place image vectors from the same class close to their class vector. We validated this empirically by measuring the average cosine-similarity between the image vectors and their predicted classes' vectors in fixed cosine-maximization models with S = 1. We obtained an average cosine-similarity of roughly 0.95 in all experiments, meaning that images from the same class were encoded compactly near their class vectors.
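A sketch of how such a compactness measurement could be implemented is shown below; `encoder`, `W`, and `loader` are illustrative placeholders, and this is one plausible reading of the measurement rather than the authors' exact procedure:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_cos_to_predicted_class(encoder, W, loader, device="cpu"):
    """Average cosine-similarity between each image vector f_theta(x) and
    the class vector of its predicted class (a proxy for intra-class
    compactness; under the cosine head the predicted class is the argmax
    of the cosine similarities, so its similarity is the row maximum)."""
    total, n = 0.0, 0
    W = F.normalize(W, dim=1)
    for images, _ in loader:
        feats = F.normalize(encoder(images.to(device)), dim=1)
        cos = feats @ W.t()                          # (batch, m) cosines
        total += cos.max(dim=1).values.sum().item()  # cos to predicted class
        n += images.size(0)
    return total / n
```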
In conclusion, although non-fixed cosine-similarity maximization models were proposed to remedy the caveats of dot-product maximization by improving the inter-class separability and intra-class compactness, their performance is significantly lower without a scaling factor multiplying the logits vector. Integrating the scaling factor and setting S > 1 decreases the intra-class compactness and introduces a trade-off between accuracy and intra-class compactness. By fixing the class vectors, cosine-similarity maximization models can attain both high performance and improved intra-class compactness. This means that the many previous works (Wang et al. (2018b); Wojke & Bewley (2018); Deng et al. (2019); Wang et al. (2018a); Fan et al. (2019)) that adopted the cosine-maximization method and integrated a scaling factor for convergence might benefit from improved results by fixing the class vectors.

4 GENERALIZATION AND ROBUSTNESS TO CORRUPTIONS

In this section, we explore the generalization of the evaluated models to the learned concepts and measure their robustness to image corruptions. We do not aim to set state-of-the-art results, but rather to validate that fixing the class vectors of a model keeps its generalization ability and robustness to corruptions competitive.

4.1 TRAINING PROCEDURE

To evaluate the impact of ignoring the visual similarities in the classification layer, we evaluated the models on CIFAR-10, CIFAR-100 (Krizhevsky et al. (2009)), STL-10 (Coates et al. (2011)), and Tiny ImageNet (https://tiny-imagenet.herokuapp.com/), containing 10, 100, 10, and 200 classes, respectively. For each dataset, we trained ResNet18 (He et al. (2016a)), PreActResNet18 (He et al. (2016b)), and MobileNetV2 (Sandler et al. (2018)) models with fixed and non-fixed class vectors. All models were trained using stochastic gradient descent with momentum, with the standard normalization and data augmentation techniques. Due to space limitations, the hyperparameter values used for training can be found in our code repository. We normalized the randomly drawn, fixed class representation vectors by dividing them by their l2-norm. All reported results are averaged over 3 runs.

4.2 GENERALIZATION

To measure how well the models generalize to the learned concepts, we evaluated them on images containing objects from the same target classes that appear in their training dataset. For evaluating the models trained on STL-10 and CIFAR-100, we manually collected 2000 and 6000 images, respectively, from the publicly available Open Images V4 dataset (Krasin et al. (2017)). For CIFAR-10, we used the CIFAR-10.1 dataset (Recht et al. (2018)). All collected sets contain an equal number of images per class. We omitted the models trained on Tiny ImageNet from this evaluation, since we were not able to collect images for all classes appearing in that set. Table 5 summarizes the results for all models. The results suggest that excluding the class representation vectors from training does not decrease the generalization to the learned concepts.

4.3 ROBUSTNESS TO CORRUPTIONS

Next, we verified that excluding the class vectors from training does not decrease a model's robustness to image corruptions. For this, we apply three types of algorithmically generated corruptions to the test set and evaluate the accuracy of the models on these sets. The corruptions are impulse noise, JPEG compression, and de-focus blur. The corruptions are generated using Jung (2018) and are available under our repository. The results, shown in Table 6, suggest that randomly drawn fixed class vectors allow models to remain highly robust to image corruptions.
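The paper generates its corruptions with the imgaug library (Jung (2018)); purely as an illustration, the sketch below hand-rolls a salt-and-pepper stand-in for impulse noise (assuming image tensors scaled to [0, 1]) and evaluates accuracy under it. `model` and `loader` are placeholder names:

```python
import torch

def impulse_noise(images: torch.Tensor, p: float = 0.05) -> torch.Tensor:
    """Salt-and-pepper stand-in for impulse noise: each pixel is
    independently set to 0 or 1 with total probability p."""
    mask = torch.rand_like(images)
    out = images.clone()
    out[mask < p / 2] = 0.0        # pepper
    out[mask > 1 - p / 2] = 1.0    # salt
    return out

@torch.no_grad()
def corrupted_accuracy(model, loader, corrupt=impulse_noise, device="cpu"):
    """Top-1 accuracy of `model` on a test loader with `corrupt` applied."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        logits = model(corrupt(images).to(device))
        correct += (logits.argmax(dim=1) == labels.to(device)).sum().item()
        total += labels.size(0)
    return correct / total
```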
5 CONCLUSION

In this paper, we propose randomly drawing the parameters of the classification layer and excluding them from training. We showed that doing so can improve the inter-class separability, intra-class compactness, and overall accuracy of the model when maximizing either the dot-product or the cosine-similarity between the image representation and the class vectors. We analyzed the cause that prevents non-fixed cosine-maximization models from converging. We also presented the generalization abilities of models with fixed and non-fixed classification layers.
1. What is the main contribution of the paper regarding multi-class image classification using neural networks?
2. What are the weaknesses of the paper's experimental design and analysis?
3. Do you have any concerns or suggestions for improving the research methodology?
4. How does the reviewer assess the novelty and potential impact of the claimed finding?
5. Are there any questions or suggestions for further investigation related to the topic?
Review
This paper claims that in the context of multi-class image classification using neural networks, if we use randomly initialized parameters (with normalization) in the final classification layer without any training, we can achieve better performance than if we train those parameters. This is an intriguing claim that could potentially have a very broad impact. The authors provide some motivation based on the error-similarity plots, but no theoretical backing. Without convincing theoretical support, such a claim can only be established through extensive and rigorous experimentation, and I find the experimental description in this paper short on delivering strong evidence. For example: How many runs were used to obtain the results in Tables 1-3? What are the confidence intervals on the results? Was any statistical significance test done? How were hyperparameters selected? What about performance on the ImageNet dataset, which has more classes than the datasets reported in the paper? What distribution was used to initialize the random weights in the classification layer? Is the performance sensitive to this distribution? Is the performance sensitive to the complexity of the model used to learn the representation? How does this compare to other ways of improving multi-class classification, such as softmax temperature annealing, label smoothing, or adding regularization? Or, as a stretch, does this claim generalize to problems with categorical features?

Details: page 2, line 9: do you mean "maximizing the cosine-similarity"?
1. What is the focus of the paper in terms of the specific aspect of supervised learning it explores?
2. What is the core idea of the paper, and how does it relate to previous works on cosine similarity losses?
3. What are the strengths and weaknesses of the paper's contributions, particularly in relation to its experimental results?
4. How does the reviewer assess the significance and novelty of the paper's findings, especially compared to prior research on large-scale face recognition and ImageNet?
5. Are there any concerns or limitations regarding the scope or scale of the paper's experiments that the reviewer identifies?
Review
The paper explores in depth the classification layer of a standard supervised learning system. The core idea is to randomly initialize and then fix the classification layer weights while training the rest of the network, leading to improved discrimination. The writing is satisfactory, and the paper develops its ideas well enough to help any reader who is a beginner in this area. One major concern is the relatively limited contribution given the context of the current venue. This is no doubt an interesting phenomenon; however, previous works investigating cosine-similarity losses have tested their approaches on much larger problems, such as large-scale face recognition and the full ImageNet. The paper currently derives its intuitions from object recognition problems, which behave very differently from problems like face recognition, where the number of classes is large yet the number of samples per class is much lower. That said, given the limited scale of the experiments, the paper does offer a wide variety of results supporting its claims. Nonetheless, given the simplicity of the idea, the paper fails to push the envelope of results on any of these datasets. Lastly, the performance gains in Table 3 seem limited, given that only one run was performed for each dataset.
ICLR
Title Redesigning the Classification Layer by Randomizing the Class Representation Vectors Abstract Neural image classification models typically consist of two components. The first is an image encoder, which is responsible for encoding a given raw image into a representative vector. The second is the classification component, which is often implemented by projecting the representative vector onto target class vectors. The target class vectors, along with the rest of the model parameters, are estimated so as to minimize the loss function. In this paper, we analyze how simple design choices for the classification layer affect the learning dynamics. We show that the standard cross-entropy training implicitly captures visual similarities between different classes, which might deteriorate accuracy or even prevents some models from converging. We propose to draw the class vectors randomly and set them as fixed during training, thus invalidating the visual similarities encoded in these vectors. We analyze the effects of keeping the class vectors fixed and show that it can increase the inter-class separability, intra-class compactness, and the overall model accuracy, while maintaining the robustness to image corruptions and the generalization of the learned concepts. 1 INTRODUCTION Deep learning models achieved breakthroughs in classification tasks, allowing setting state-of-theart results in various fields such as speech recognition (Chiu et al., 2018), natural language processing (Vaswani et al., 2017), and computer vision (Huang et al., 2017). In image classification task, the most common approach of training the models is as follows: first, a convolutional neural network (CNN) is used to extract a representative vector, denoted here as image representation vector (also known as the feature vector). Then, at the classification layer, this vector is projected onto a set of weight vectors of the different target classes to create the class scores, as depicted in Fig. 1. Last, a softmax function is applied to normalize the class scores. During training, the parameters of both the CNN and the classification layer are updated to minimize the cross-entropy loss. We refer to this procedure as the dot-product maximization approach since such training ends up maximizing the dot-product between the image representation vector and the target weight vector. Recently, it was demonstrated that despite the excellent performance of the dot-product maximization approach, it does not necessarily encourage discriminative learning of features, nor does it enforce the intra-class compactness and inter-class separability (Liu et al., 2016; Wang et al., 2017; Liu et al., 2017). The intra-class compactness indicates how close image representations from the same class relate to each other, whereas the inter-class separability indicates how far away image representations from different classes are. Several works have proposed different approaches to address these caveats (Liu et al., 2016; 2017; Wang et al., 2017; 2018b;a). One of the most effective yet most straightforward solutions that were proposed is NormFace (Wang et al., 2017), where it was suggested to maximize the cosine-similarity between vectors by normalizing both the image and class vectors. However, the authors found when minimizing the cosine-similarity directly, the models fail to converge, and hypothesized that the cause is due to the bounded range of the logits vector. 
To allow convergence, the authors added a scaling factor to multiply the logits vector. This approach has been widely adopted by multiple works (Wang et al., 2018b; Wojke & Bewley, 2018; Deng et al., 2019; Wang et al., 2018a; Fan et al., 2019). Here we will refer to this approach as the cosine-similarity maximization approach. This paper is focused on redesigning the classification layer, and the its role while kept fixed during training. We show that the visual similarity between classes is implicitly captured by the class vectors when they are learned by maximizing either the dot-product or the cosine-similarity between the image representation vector and the class vectors. Then we show that the class vectors of visually similar categories are close in their angle in the space. We investigate the effects of excluding the class vectors from training and simply drawing them randomly distributed over a hypersphere. We demonstrate that this process, which eliminates the visual similarities from the classification layer, boosts accuracy, and improves the inter-class separability (using either dot-product maximization or cosine-similarity maximization). Moreover, we show that fixing the class representation vectors can solve the issues preventing from some cases to converge (under the cosine-similarity maximization approach), and can further increase the intra-class compactness. Last, we show that the generalization to the learned concepts and robustness to noise are both not influenced by ignoring the visual similarities encoded in the class vectors. Recent work by Hoffer et al. (2018), suggested to fix the classification layer to allow increased computational and memory efficiencies. The authors showed that the performance of models with fixed classification layer are on par or slightly drop (up to 0.5% in absolute accuracy) when compared to models with non-fixed classification layer. However, this technique allows substantial reduction in the number of learned parameters. In the paper, the authors compared the performance of dot-product maximization models with a non-fixed classification layer against the performance of cosine-similarity maximization models with a fixed classification layer and integrated scaling factor. Such comparison might not indicate the benefits of fixing the classification layer, since the dotproduct maximization is linear with respect to the image representation while the cosine-similarity maximization is not. On the other hand, in our paper, we compare fixed and non-fixed dot-product maximization models as well as fixed and non-fixed cosine-maximization models, and show that by fixing the classification layer the accuracy might boost by up to 4% in absolute accuracy. Moreover, while cosine-maximization models were suggested to improve the intra-class compactness, we reveal that by integrating a scaling factor to multiply the logits, the intra-class compactness is decreased. We demonstrate that by fixing the classification layer in cosine-maximization models, the models can converge and achieve a high performance without the scaling factor, and significantly improve their intra-class compactness. The outline of this paper is as follows. In Sections 2 and 3, we formulate dot-product and cosinesimilarity maximization models, respectively, and analyze the effects of fixing the class vectors. In Section 4, we describe the training procedure, compare the learning dynamics, and asses the generalization and robustness to corruptions of the evaluated models. 
We conclude the paper in Section 5. 2 FIXED DOT-PRODUCT MAXIMIZATION Assume an image classification task with m possible classes. Denote the training set of N examples by S = {(xi, yi)}Ni=1, where xi ∈ X is the i-th instance, and yi is the corresponding class such that yi ∈ {1, ...,m}. In image classification a dot-product maximization model consists of two parts. The first is the image encoder, denoted as fθ : X → Rd, which is responsible for representing the input image as a d-dimensional vector, fθ(x) ∈ Rd, where θ is a set of learnable parameters. The second part of the model is the classification layer, which is composed of learnable parameters denoted as W ∈ Rm×d. Matrix W can be viewed as m vectors, w1, . . . , wm, where each vector wi ∈ Rd can be considered as the representation vector associated with the i-th class. For simplicity, we omitted the bias terms and assumed they can be included in W . A consideration that is taken when designing the classification layer is choosing the operation applied between the matrix W and the image representation vector fθ(x). Most commonly, a dotproduct operation is used, and the resulting vector is referred to as the logits vector. For training the models, a softmax operation is applied over the logits vector, and the result is given to a crossentropy loss which should be minimized. That is, argmin w1,...,wm,θ N∑ i=0 − log e wyi ·fθ(xi)∑m j=1 e wj ·fθ(xi) = argmin w1,...,wm,θ N∑ i=0 − log e ‖wyi‖ ‖fθ(xi)‖ cos(αyi )∑m j=1 e ‖wj‖ ‖fθ(xi)‖ cos(αj) . (1) The equality holds since wyi ·fθ(xi) = ‖wyi‖‖fθ(xi)‖ cos(αyi), where αk is the angle between the vectors wk and fθ(xi). We trained three dot-product maximization models with different known CNN architectures over four datasets, varying in image size and number of classes, as described in detail in Section 4.1. Since these models optimize the dot-product between the image vector and its corresponding learnable class vectors, we refer to these models as non-fixed dot-product maximization models. Inspecting the matrix W of the trained models reveals that visually similar classes have their corresponding class vectors close in space. On the left panel of Fig. 2, we plot the cosine-similarity between the class vectors that were learned by the non-fixed model which was trained on the STL10 dataset. It can be seen that the vectors representing vehicles are relatively close to each other, and far away from vectors representing animals. Furthermore, when we inspect the class vectors of non-fixed models trained on CIFAR-100 (100 classes) and Tiny ImageNet (200 classes), we find even larger similarities between vectors due to the high visual similarities between classes, such as boy and girl or apple and orange. By placing the vectors of visually similar classes close to each other, the inter-class separability is decreased. Moreover, we find a strong spearman correlation between the distance of class vectors and the number of misclassified examples. On the right panel of Fig. 2, we plot the cosine-similarity between two class vectors, wi and wj , against the number of examples from category i that were wrongly classified as category j. As shown in the figure, as the class vectors are closer in space, the number of misclassifications increases. In STL-10, CIFAR-10, CIFAR-100, and Tiny ImageNet, we find a correlation of 0.82, 0.77, 0.61, and 0.79, respectively (note that all possible class pairs were considered in the computation of the correlation). 
These findings reveal that as two class vectors are closer in space, the confusion between the two corresponding classes increases. We examined whether the models benefit from the high angular similarities between the vectors. We trained the same models, but instead of learning the class vectors, we drew them randomly, normalized them (‖wj‖ = 1), and kept them fixed during training. We refer to these models as the fixed dot-product maximization models. Since the target vectors are initialized randomly, the cosine-similarity between vectors is low even for visually similar classes. See the middle panel of Fig. 2. Notice that by fixing the class vectors and bias term during training, the model can minimize the loss in Eq. 1 only by optimizing the vector fθ(xi). It can be seen that by fixing the class vectors, the prediction is influenced mainly by the angle between fθ and the fixed wyi since the magnitude of fθ(xi) is multiplied with all classes and the magnitude of each class vectors is equal and set to 1. Thus, the model is forced to optimize the angle of the image vector towards its randomized class vector. Table 1 compares the classification accuracy of models with a fixed and non-fixed classification layer. Results suggest that learning the matrix W during training is not necessarily beneficial, and might reduce accuracy when the number of classes is high, or when the classes are visually close. Additionally, we empirically found that models with fixed class vectors can be trained with higher learning rate, due to space limitation we bring the results in the appendix (Table 7, Table 8, Table 9). By randomly drawing the class vectors, we ignore possible visual similarities between classes and force the models to minimize the loss by increasing the inter-class separability and encoding images from visually similar classes into vectors far in space, see Fig. 3. 3 FIXED COSINE-SIMILARITY MAXIMIZATION Recently, cosine-similarity maximization models were proposed by Wang et al. (2017) for face verification task. The authors maximized the cosine-similarity, rather than the dot-product, between the image vector and its corresponding class vector. That is, argmin w1,...,wm,θ N∑ i=0 − log e cos(αyi )∑m j=1 e cos(αj) = argmin w1,...,wm,θ N∑ i=0 − log e wyi ·fθ(xi) ‖wyi‖‖fθ(xi)‖∑m j=1 e wj ·fθ(xi) ‖wj‖‖fθ(xi)‖ (2) Comparing the right-hand side of Eq. 2 with Eq. 1 shows that the cosine-similarity maximization model simply requires normalizing fθ(x), and each of the class representation vectors w1, ..., wm, by dividing them with their l2-norm during the forward pass. The main motivation for this reformulation is the ability to learn more discriminative features in face verification by encouraging intra-class compactness and enlarging the inter-class separability. The authors showed that dot-product maximization models learn radial feature distribution; thus, the inter-class separability and intra-class compactness are not optimal (for more details, see the discussion in Wang et al. (2017)). However, the authors found that cosine-similarity maximization models as given in Eq. 2 fail to converge and added a scaling factor S ∈ R to multiply the logits vector as follows: argmin w1,...,wm,θ N∑ i=0 − log e S·cos(αyi )∑m j=1 e S·cos(αj) (3) This reformulation achieves improved results for face verification task, and many recent alternations also integrated the scaling factor S for convergences when optimizing the cosine-similarity Wang et al. (2018b); Wojke & Bewley (2018); Deng et al. (2019); Wang et al. 
(2018a); Fan et al. (2019). According to Wang et al. (2017), cosine-similarity maximization models fail to converge when S = 1 due to the low range of the logits vector, where each cell is bounded between [−1, 1]. This low range prevents the predicted probabilities from getting close to 1 during training, and as a result, the distribution over target classes is close to uniform, thus the loss will be trapped at a very high value on the training set. Intuitively, this may sound a reasonable explanation as to why directly maximizing the cosine-similarity fails to converge (S = 1). Note that even if an example is correctly classified and well separated, in its best scenario, it will achieve a cosine-similarity of 1 with its ground-truth class vector, while for other classes, the cosine-similarity would be (−1). Thus, for a classification task with m classes, the predicted probability for the example above would be: P (Y = yi|xi) = e1 e1 + (m− 1) · e−1 (4) Notice that if the number of classes m = 200, the predicted probability of the correctly classified example would be at most 0.035, and cannot be further optimized to 1. As a result, the loss function would yield a high value for a correctly classified example, even if its image vector is placed precisely in the same direction as its ground-truth class vector. As in the previous section, we trained the same models over the same datasets, but instead of optimizing the dot-product, we optimized the cosine-similarity by normalizing fθ(xi) and w1, ..., wm at the forward pass. We denote these models as non-fixed cosine-similarity maximization models. Additionally, we trained the same cosine-similarity maximization models with fixed random class vectors, denoting these models as fixed cosine-similarity maximization. In all models (fixed and non-fixed) we set S = 1 to directly maximize the cosine-similarity, results are shown in Table 2. Surprisingly, we reveal that the low range of the logits vector is not the cause preventing from cosine-similarity maximization models from converging. As can be seen in the table, fixed cosine-maximization models achieve significantly high accuracy results by up to 53% compared to non-fixed models. Moreover, it can be seen that fixed cosine-maximization models with S = 1 can also outperform dot-product maximization models. This finding demonstrates that while the logits are bounded between [−1, 1], the models can still learn high-quality representations and decision boundaries. We further investigated the effects of S and train for comparison the same fixed and non-fixed models, but this time we used grid-search for the best performing S value. As can be seen in Table 3, increasing the scaling factor S allows non-fixed models to achieve higher accuracies over all datasets. Yet, there is no benefit at learning the class representation vectors instead of randomly drawing them and fixing them during training when considering models’ accuracies. To better understand the cause which prevents non-fixed cosine-maximization models from converging when S = 1, we compared these models with the same models trained by setting the optimal S scalar. For each model we measured the distance between its learned class vectors and compared these distances to demonstrate the effect of S on them. Interestingly, we found that as S increased, the cosine-similarity between the class vectors decreased. Meaning that by increasing S the class vectors are further apart from each other. Compare, for example, the left and middle panels in Fig. 
Compare, for example, the left and middle panels in Fig. 4, which show the cosine-similarity between the class vectors of models trained on STL-10 with S = 1 and S = 20, respectively. On the right panel of Fig. 4, we plot the number of misclassifications as a function of the cosine-similarity between the class vectors of the non-fixed cosine-maximization model trained on STL-10 with S = 1. It can be seen that the confusion between classes is high when the angular distance between them is low. As in the previous section, we observed strong correlations between the closeness of the class vectors and the number of misclassifications. We found correlations of 0.85, 0.87, 0.81, and 0.83 in models trained on STL-10, CIFAR-10, CIFAR-100, and Tiny ImageNet, respectively. By integrating the scaling factor S into Eq. 4 we get $P(Y = y_i \mid x_i) = \frac{e^{S\cdot 1}}{e^{S\cdot 1} + (m-1)\cdot e^{S\cdot(-1)}}$ (5). Note that by increasing S, the predicted probability in Eq. 5 increases. This is true even when the cosine-similarity between $f_\theta(x_i)$ and $w_{y_i}$ is less than 1. When S is set to a large value, the gap between the logits increases, and the predicted probability after the softmax is closer to 1. As a result, the model is discouraged from optimizing the cosine-similarity between the image representation and its ground-truth class vector all the way to 1, since the loss is already close to 0. In Table 4, we show that as we increase S, the cosine-similarity between the image vectors and their predicted class vectors decreases. These observations can provide an explanation as to why non-fixed models with S = 1 fail to converge. By setting S to a large scalar, the image vectors are spread around their class vectors to a larger degree, preventing the class vectors from getting close to each other. As a result, the inter-class separability increases and the misclassification rate between visually similar classes decreases. In contrast, setting S = 1 allows models to place the class vectors of visually similar classes closer in space, which leads to a high number of misclassifications. However, a disadvantage of increasing S and setting it to a large number is that the intra-class compactness is violated, since image vectors from the same class are spread and encoded relatively far from each other; see Fig. 5. Fixed cosine-maximization models successfully converge when S = 1, since the class vectors are initially far apart in space. By randomly drawing the class vectors, models are required to encode images from visually similar classes into vectors that are far apart in space; therefore, the inter-class separability is high. Additionally, the intra-class compactness is improved: since S can be set to 1, models are encouraged to maximize the cosine-similarity all the way to 1 and place image vectors from the same class close to their class vector. We validated this empirically by measuring the average cosine-similarity between image vectors and their predicted classes' vectors in fixed cosine-maximization models with S = 1. We obtained an average cosine-similarity of roughly 0.95 in all experiments, meaning that images from the same class were encoded compactly near their class vectors. In conclusion, although non-fixed cosine-similarity maximization models were proposed to improve upon the caveats of dot-product maximization by improving the inter-class separability and intra-class compactness, their performance is significantly lower without the integration of a scaling factor to multiply the logits vector.
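The fixed cosine-similarity maximization layer described above is straightforward to realize. Below is a minimal PyTorch sketch — our reading of the setup, not the authors' released code — in which the class vectors are drawn randomly, l2-normalized, and stored as a non-trainable buffer, and the logits are the (optionally scaled) cosine similarities of Eq. 3; S = 1 recovers the direct cosine maximization of Eq. 2.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FixedCosineClassifier(nn.Module):
    """Classification layer with randomly drawn, fixed class vectors."""

    def __init__(self, feat_dim: int, num_classes: int, scale: float = 1.0):
        super().__init__()
        # Random class vectors, l2-normalized and excluded from training.
        w = F.normalize(torch.randn(num_classes, feat_dim), dim=1)
        self.register_buffer("class_vectors", w)
        self.scale = scale

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Cosine similarity is the dot-product of l2-normalized vectors.
        f = F.normalize(features, dim=1)
        return self.scale * f @ self.class_vectors.t()

# Usage with a standard cross-entropy loss:
# logits = FixedCosineClassifier(512, 10)(encoder(images))
# loss = nn.CrossEntropyLoss()(logits, labels)
```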
Integrating the scaling factor and setting it to S > 1 decreases intra-class compactness and introduces a trade-off between accuracy and intra-class compactness. By fixing the class vectors, cosine-similarity maximization models can have both high performance and improved intra-class compactness. This means that multiple previous works (Wang et al. (2018b); Wojke & Bewley (2018); Deng et al. (2019); Wang et al. (2018a); Fan et al. (2019)) that adopted the cosine-maximization method and integrated a scaling factor for convergence might benefit from improved results by fixing the class vectors. 4 GENERALIZATION AND ROBUSTNESS TO CORRUPTIONS In this section we explore the generalization of the evaluated models to the learned concepts and measure their robustness to image corruptions. We do not aim to set state-of-the-art results, but rather to validate that by fixing the class vectors of a model, the model's generalization ability and robustness to corruptions remain competitive. 4.1 TRAINING PROCEDURE To evaluate the impact of ignoring the visual similarities in the classification layer, we evaluated the models on CIFAR-10, CIFAR-100 Krizhevsky et al. (2009), STL Coates et al. (2011), and Tiny ImageNet (https://tiny-imagenet.herokuapp.com/), containing 10, 100, 10, and 200 classes, respectively. For each dataset, we trained Resnet18 He et al. (2016a), PreActResnet18 He et al. (2016b), and MobileNetV2 Sandler et al. (2018) models with fixed and non-fixed class vectors. All models were trained using stochastic gradient descent with momentum. We used the standard normalization and data augmentation techniques. Due to space limitations, the values of the hyperparameters used for training the models can be found in our code repository. We normalized the randomly drawn, fixed class representation vectors by dividing them by their l2-norm. All reported results are the averaged results of 3 runs. 4.2 GENERALIZATION To measure how well the models were able to generalize to the learned concepts, we evaluated them on images containing objects from the same target classes appearing in their training dataset. For evaluating the models trained on STL-10 and CIFAR-100, we manually collected 2000 and 6000 images, respectively, from the publicly available dataset Open Images V4 Krasin et al. (2017). For CIFAR-10 we used the CIFAR-10.1 dataset Recht et al. (2018). All collected sets contain an equal number of images for each class. We omitted models trained on Tiny ImageNet from the evaluation since we were not able to collect images for all classes appearing in this set. Table 5 summarizes the results for all the models. Results suggest that excluding the class representation vectors from training does not decrease the generalization to learned concepts. 4.3 ROBUSTNESS TO CORRUPTIONS Next, we verified that excluding the class vectors from training did not decrease the models' robustness to image corruptions. For this, we applied three types of algorithmically generated corruptions to the test set and evaluated the accuracy of the models on these sets. The corruptions we applied are impulse noise, JPEG compression, and de-focus blur. Corruptions were generated using Jung (2018) and are available under our repository. Results, as shown in Table 6, suggest that randomly drawn fixed class vectors allow models to be highly robust to image corruptions.
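As a companion to the robustness protocol above, here is a hedged sketch of the evaluation loop: each corruption is applied to the test images before classification, and accuracy is measured on the corrupted set. The corruption callables themselves are stand-ins; the paper generates impulse noise, JPEG compression, and de-focus blur with the imgaug library (Jung, 2018).

```python
import torch

@torch.no_grad()
def corrupted_accuracy(model, test_loader, corrupt):
    """Accuracy of `model` on `test_loader` after applying `corrupt`,
    a function mapping a batch of images to corrupted images."""
    model.eval()
    correct, total = 0, 0
    for images, labels in test_loader:
        preds = model(corrupt(images)).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# e.g.: results = {name: corrupted_accuracy(model, loader, fn)
#                  for name, fn in corruptions.items()}
# where `corruptions` maps names ("impulse-noise", "jpeg", "defocus-blur")
# to stand-in corruption callables.
```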
5 CONCLUSION In this paper, we propose randomly drawing the parameters of the classification layer and excluding them from training. We showed that by doing so, the inter-class separability, intra-class compactness, and overall accuracy of the model can improve when maximizing the dot-product or the cosine-similarity between the image representation and the class vectors. We analyzed the cause that prevents non-fixed cosine-maximization models from converging. We also presented the generalization abilities of the fixed and non-fixed classification layers.
1. What is the main contribution of the paper, and how does it relate to multi-class image classification? 2. What are the strengths and weaknesses of the proposed approach in terms of its technical contribution and empirical validation? 3. How does the paper's title relate to its content, and could it be improved to better reflect the paper's focus? 4. How could the paper's structure be improved, and what information could be reorganized or added to make it easier to read? 5. How do the authors propose to initialize the weights of the classification layer, and how do they compare different initialization mechanisms? 6. How does the paper address the sensitivity of the fixed weights approach to random initialization, and how could this be analyzed further? 7. What is the authors' motivation for fixing the bias, and how could this be justified or improved? 8. How could the paper better analyze the bias initialization, and what implications could this have for the model's performance? 9. How could the paper better show the variance in the model's evaluation when run multiple times, and why is this important? 10. How did the authors choose the hyperparameters for their experiments, and how could these choices affect the conclusions drawn from the analysis? 11. How does the paper explain the concept of compactness in relation to the proposed model, and how could this be measured and presented more clearly? 12. How could the paper extend its results to low-resolution datasets, larger datasets, and other types of datasets like fine-grained datasets, medical images, and natural scenes? 13. How does the author's approach differ from other popular optimizers like Adam, and how could this impact the results? 14. How could the paper improve its analysis of the range of logits and its relationship to cosine similarity maximization models? 15. What additional information could be provided regarding the generation of figures 3 and 5?
Review
Review Summary: This paper introduces a new approach to learning a multi-class image classification model by fixing the weights of the classification layer. The authors propose to draw the class vectors randomly and set them as fixed during training instead of learning them. They analyze this approach when a model is trained with a categorical cross-entropy or a softmax-cosine loss. The proposed approach is tested on 4 datasets: STL, CIFAR-10, CIFAR-100, and Tiny ImageNet. Reasons for score: I do not think the technical contribution is strong enough for ICLR. The idea is interesting, but the empirical validation of the idea should be improved and some claims should be proven. Pros: The idea of using fixed representations is interesting. It can help to reduce the training time. The authors explain why the loss of cosine-similarity maximization models cannot converge to 0. Cons: The title looks very interesting: “Redesigning the Classification Layer by Randomizing the Class Representation Vectors”. But after reading the paper, it is only about multi-class image classification. There is no study of other types of data or the multi-label setting. The authors should use a title that is more accurate about the content of their paper. Overall, the structure of the paper should be improved. It is quite difficult to read because several sections are a mix of model contributions and experimental results. Maybe using subsections can help to separate the model contributions from the experimental results. Also, some information is not in the right place and some sections should be reorganized. For example, the datasets and models are presented in Section 4.1, but some results are presented in Section 2. The authors should also add a related work section to clearly state the motivations and explain the differences from other approaches. The authors propose to randomly initialize the weights of the classification layer, but they do not clearly explain how the weights are initialized. There are several standard approaches to initializing weights, like uniform, normal, Xavier uniform, Xavier normal, Kaiming uniform, and Kaiming normal. It would improve the paper if the authors compared these initialization mechanisms. Similarly, the authors should analyze the results over several runs to see how sensitive the fixed-weights approach is to the random initialization. I have a conceptual problem with fixing the bias. The bias is sampled, so it can have a large or small value. Let's take an example with 2 classes. Class A can have a large bias (e.g. 0.5) while another class B can have a small value (e.g. -0.5). It means that class B has a negative bias and will usually have lower scores than A just because there is a difference of 1 between these biases. I am not sure that this is a good idea, and there is no motivation for it in the paper. The authors should analyze the bias initialization because it is important. It is important to show the variance when the model is evaluated over several runs (Section 4). It can help to understand how sensitive the model is to the initialization. It is well known that SGD is sensitive to its hyper-parameters, in particular the learning rate. The model will not converge if the learning rate is too large or too small. The authors should explain how they chose the hyper-parameters. I also wonder how specific the results are to the optimizer. Are the conclusions of the analysis the same for other popular optimizers like Adam?
“These observations can provide an explanation as to why non-fixed models with S = 1 fail to converge.” (page 6): For me, this explains why the loss cannot converge to 0, but it does not explain why the model fails to converge. These are two different problems. In the abstract and in some other parts of the paper, the authors claim they improve the compactness of the model, but they never show it. They did not define how they measure the compactness of a model. They should clearly present the definition of compactness and the approach they used to compute it. Based on my knowledge, measuring the compactness of a model is not easy. The authors only show results on low-resolution datasets (less than 100×100). I wonder if the results can be generalized to larger-resolution datasets. For example, does it also work on ImageNet, which has more images, larger-resolution images, and more classes (1000)? I also wonder if it works on other types of datasets, like fine-grained datasets (e.g. CUB-200, Stanford Cars, FGVC Aircraft). Also, how does it adapt to new domains like medical images and natural scenes? I am not convinced that ignoring the visual similarities between classes is a good idea. I think it is important to build spaces that encode some semantic structure. For example, I think it is important to encode that two bird species are more semantically similar than a bird and a car. It is not clear why the authors decided to focus on cosine-similarity maximization models. They should motivate this decision more because these models are not so popular. The authors claimed that “the low range of the logits vector is not the cause preventing cosine-similarity maximization models from converging” (page 5), but they did not show results to prove it. The authors should analyze the range of the logits. The current analysis does not allow us to understand whether it is because of the range of values, the normalization of the weights, or a bad tuning of some hyper-parameters. Minor comments: The authors should give more information on how they generated Figures 3 and 5.
ICLR
Title Redesigning the Classification Layer by Randomizing the Class Representation Vectors Abstract Neural image classification models typically consist of two components. The first is an image encoder, which is responsible for encoding a given raw image into a representative vector. The second is the classification component, which is often implemented by projecting the representative vector onto target class vectors. The target class vectors, along with the rest of the model parameters, are estimated so as to minimize the loss function. In this paper, we analyze how simple design choices for the classification layer affect the learning dynamics. We show that the standard cross-entropy training implicitly captures visual similarities between different classes, which might deteriorate accuracy or even prevent some models from converging. We propose to draw the class vectors randomly and set them as fixed during training, thus invalidating the visual similarities encoded in these vectors. We analyze the effects of keeping the class vectors fixed and show that it can increase the inter-class separability, intra-class compactness, and the overall model accuracy, while maintaining the robustness to image corruptions and the generalization of the learned concepts. 1 INTRODUCTION Deep learning models have achieved breakthroughs in classification tasks, allowing state-of-the-art results to be set in various fields such as speech recognition (Chiu et al., 2018), natural language processing (Vaswani et al., 2017), and computer vision (Huang et al., 2017). In the image classification task, the most common approach to training the models is as follows: first, a convolutional neural network (CNN) is used to extract a representative vector, denoted here as the image representation vector (also known as the feature vector). Then, at the classification layer, this vector is projected onto a set of weight vectors of the different target classes to create the class scores, as depicted in Fig. 1. Last, a softmax function is applied to normalize the class scores. During training, the parameters of both the CNN and the classification layer are updated to minimize the cross-entropy loss. We refer to this procedure as the dot-product maximization approach, since such training ends up maximizing the dot-product between the image representation vector and the target weight vector. Recently, it was demonstrated that despite the excellent performance of the dot-product maximization approach, it does not necessarily encourage discriminative learning of features, nor does it enforce intra-class compactness and inter-class separability (Liu et al., 2016; Wang et al., 2017; Liu et al., 2017). The intra-class compactness indicates how closely image representations from the same class relate to each other, whereas the inter-class separability indicates how far away image representations from different classes are. Several works have proposed different approaches to address these caveats (Liu et al., 2016; 2017; Wang et al., 2017; 2018b;a). One of the most effective yet most straightforward solutions that was proposed is NormFace (Wang et al., 2017), where it was suggested to maximize the cosine-similarity between vectors by normalizing both the image and class vectors. However, the authors found that when maximizing the cosine-similarity directly, the models fail to converge, and hypothesized that the cause is the bounded range of the logits vector.
To allow convergence, the authors added a scaling factor to multiply the logits vector. This approach has been widely adopted by multiple works (Wang et al., 2018b; Wojke & Bewley, 2018; Deng et al., 2019; Wang et al., 2018a; Fan et al., 2019). Here we refer to this approach as the cosine-similarity maximization approach. This paper focuses on redesigning the classification layer and on the role it plays while kept fixed during training. We show that the visual similarity between classes is implicitly captured by the class vectors when they are learned by maximizing either the dot-product or the cosine-similarity between the image representation vector and the class vectors, and that the class vectors of visually similar categories are close in angle in the space. We investigate the effects of excluding the class vectors from training and simply drawing them randomly, distributed over a hypersphere. We demonstrate that this process, which eliminates the visual similarities from the classification layer, boosts accuracy and improves the inter-class separability (using either dot-product maximization or cosine-similarity maximization). Moreover, we show that fixing the class representation vectors can solve the convergence issues that arise in some cases under the cosine-similarity maximization approach, and can further increase the intra-class compactness. Last, we show that the generalization to the learned concepts and the robustness to noise are both unaffected by ignoring the visual similarities encoded in the class vectors. Recent work by Hoffer et al. (2018) suggested fixing the classification layer to allow increased computational and memory efficiency. The authors showed that the performance of models with a fixed classification layer is on par with, or slightly drops (by up to 0.5% in absolute accuracy) compared to, models with a non-fixed classification layer. However, this technique allows a substantial reduction in the number of learned parameters. In that paper, the authors compared the performance of dot-product maximization models with a non-fixed classification layer against the performance of cosine-similarity maximization models with a fixed classification layer and an integrated scaling factor. Such a comparison might not indicate the benefits of fixing the classification layer, since dot-product maximization is linear with respect to the image representation while cosine-similarity maximization is not. In our paper, on the other hand, we compare fixed and non-fixed dot-product maximization models as well as fixed and non-fixed cosine-maximization models, and show that by fixing the classification layer the accuracy might improve by up to 4% in absolute accuracy. Moreover, while cosine-maximization models were suggested to improve the intra-class compactness, we reveal that integrating a scaling factor to multiply the logits decreases the intra-class compactness. We demonstrate that by fixing the classification layer in cosine-maximization models, the models can converge and achieve high performance without the scaling factor, and significantly improve their intra-class compactness. The outline of this paper is as follows. In Sections 2 and 3, we formulate dot-product and cosine-similarity maximization models, respectively, and analyze the effects of fixing the class vectors. In Section 4, we describe the training procedure, compare the learning dynamics, and assess the generalization and robustness to corruptions of the evaluated models.
We conclude the paper in Section 5. 2 FIXED DOT-PRODUCT MAXIMIZATION Assume an image classification task with m possible classes. Denote the training set of N examples by $S = \{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in \mathcal{X}$ is the i-th instance and $y_i \in \{1, \dots, m\}$ is the corresponding class. In image classification, a dot-product maximization model consists of two parts. The first is the image encoder, denoted as $f_\theta : \mathcal{X} \to \mathbb{R}^d$, which is responsible for representing the input image as a d-dimensional vector, $f_\theta(x) \in \mathbb{R}^d$, where $\theta$ is a set of learnable parameters. The second part of the model is the classification layer, which is composed of learnable parameters denoted as $W \in \mathbb{R}^{m \times d}$. The matrix W can be viewed as m vectors, $w_1, \dots, w_m$, where each vector $w_i \in \mathbb{R}^d$ can be considered the representation vector associated with the i-th class. For simplicity, we omit the bias terms and assume they can be included in W. A consideration taken when designing the classification layer is the choice of operation applied between the matrix W and the image representation vector $f_\theta(x)$. Most commonly, a dot-product operation is used, and the resulting vector is referred to as the logits vector. For training the models, a softmax operation is applied over the logits vector, and the result is given to a cross-entropy loss which should be minimized. That is, $\operatorname*{argmin}_{w_1,\dots,w_m,\theta} \sum_{i=1}^{N} -\log \frac{e^{w_{y_i} \cdot f_\theta(x_i)}}{\sum_{j=1}^{m} e^{w_j \cdot f_\theta(x_i)}} = \operatorname*{argmin}_{w_1,\dots,w_m,\theta} \sum_{i=1}^{N} -\log \frac{e^{\|w_{y_i}\| \|f_\theta(x_i)\| \cos(\alpha_{y_i})}}{\sum_{j=1}^{m} e^{\|w_j\| \|f_\theta(x_i)\| \cos(\alpha_j)}}$ (1). The equality holds since $w_{y_i} \cdot f_\theta(x_i) = \|w_{y_i}\| \|f_\theta(x_i)\| \cos(\alpha_{y_i})$, where $\alpha_k$ is the angle between the vectors $w_k$ and $f_\theta(x_i)$. We trained three dot-product maximization models with different known CNN architectures over four datasets, varying in image size and number of classes, as described in detail in Section 4.1. Since these models optimize the dot-product between the image vector and its corresponding learnable class vector, we refer to these models as non-fixed dot-product maximization models. Inspecting the matrix W of the trained models reveals that visually similar classes have their corresponding class vectors close in space. On the left panel of Fig. 2, we plot the cosine-similarity between the class vectors learned by the non-fixed model trained on the STL-10 dataset. It can be seen that the vectors representing vehicles are relatively close to each other, and far away from vectors representing animals. Furthermore, when we inspect the class vectors of non-fixed models trained on CIFAR-100 (100 classes) and Tiny ImageNet (200 classes), we find even larger similarities between vectors due to the high visual similarities between classes, such as boy and girl or apple and orange. By placing the vectors of visually similar classes close to each other, the inter-class separability is decreased. Moreover, we find a strong Spearman correlation between the distance of class vectors and the number of misclassified examples. On the right panel of Fig. 2, we plot the cosine-similarity between two class vectors, $w_i$ and $w_j$, against the number of examples from category i that were wrongly classified as category j. As shown in the figure, as the class vectors get closer in space, the number of misclassifications increases. In STL-10, CIFAR-10, CIFAR-100, and Tiny ImageNet, we find correlations of 0.82, 0.77, 0.61, and 0.79, respectively (note that all possible class pairs were considered in the computation of the correlation).
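The correlation analysis behind the right panel of Fig. 2 can be reproduced along the following lines. This is our own sketch, assuming W holds the learned class vectors row-wise and confusions[i, j] counts test examples of class i predicted as class j.

```python
import numpy as np
from scipy.stats import spearmanr

def similarity_vs_confusion(W: np.ndarray, confusions: np.ndarray) -> float:
    """Spearman correlation between pairwise class-vector cosine similarity
    and the number of misclassifications between the corresponding classes."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    cos = Wn @ Wn.T                           # pairwise cosine similarities
    m = W.shape[0]
    i, j = np.where(~np.eye(m, dtype=bool))   # all ordered pairs with i != j
    corr, _ = spearmanr(cos[i, j], confusions[i, j])
    return corr
```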
These findings reveal that as two class vectors get closer in space, the confusion between the two corresponding classes increases. We examined whether the models benefit from the high angular similarities between the vectors. We trained the same models, but instead of learning the class vectors, we drew them randomly, normalized them ($\|w_j\| = 1$), and kept them fixed during training. We refer to these models as the fixed dot-product maximization models. Since the target vectors are initialized randomly, the cosine-similarity between vectors is low even for visually similar classes; see the middle panel of Fig. 2. Notice that by fixing the class vectors and bias term during training, the model can minimize the loss in Eq. 1 only by optimizing the vector $f_\theta(x_i)$. It can be seen that by fixing the class vectors, the prediction is influenced mainly by the angle between $f_\theta(x_i)$ and the fixed $w_{y_i}$, since the magnitude of $f_\theta(x_i)$ multiplies the logits of all classes and the magnitudes of the class vectors are all equal and set to 1. Thus, the model is forced to optimize the angle of the image vector towards its randomized class vector. Table 1 compares the classification accuracy of models with a fixed and a non-fixed classification layer. Results suggest that learning the matrix W during training is not necessarily beneficial, and might reduce accuracy when the number of classes is high or when the classes are visually close. Additionally, we empirically found that models with fixed class vectors can be trained with a higher learning rate; due to space limitations, we report the results in the appendix (Tables 7, 8, and 9). By randomly drawing the class vectors, we ignore possible visual similarities between classes and force the models to minimize the loss by increasing the inter-class separability and encoding images from visually similar classes into vectors far apart in space; see Fig. 3. 3 FIXED COSINE-SIMILARITY MAXIMIZATION Recently, cosine-similarity maximization models were proposed by Wang et al. (2017) for the face verification task. The authors maximized the cosine-similarity, rather than the dot-product, between the image vector and its corresponding class vector. That is, $\operatorname*{argmin}_{w_1,\dots,w_m,\theta} \sum_{i=1}^{N} -\log \frac{e^{\cos(\alpha_{y_i})}}{\sum_{j=1}^{m} e^{\cos(\alpha_j)}} = \operatorname*{argmin}_{w_1,\dots,w_m,\theta} \sum_{i=1}^{N} -\log \frac{e^{\frac{w_{y_i} \cdot f_\theta(x_i)}{\|w_{y_i}\| \|f_\theta(x_i)\|}}}{\sum_{j=1}^{m} e^{\frac{w_j \cdot f_\theta(x_i)}{\|w_j\| \|f_\theta(x_i)\|}}}$ (2). Comparing the right-hand side of Eq. 2 with Eq. 1 shows that the cosine-similarity maximization model simply requires normalizing $f_\theta(x)$ and each of the class representation vectors $w_1, \dots, w_m$ by dividing them by their l2-norm during the forward pass. The main motivation for this reformulation is the ability to learn more discriminative features in face verification by encouraging intra-class compactness and enlarging the inter-class separability. The authors showed that dot-product maximization models learn a radial feature distribution; thus, the inter-class separability and intra-class compactness are not optimal (for more details, see the discussion in Wang et al. (2017)). However, the authors found that cosine-similarity maximization models as given in Eq. 2 fail to converge, and added a scaling factor $S \in \mathbb{R}$ to multiply the logits vector as follows: $\operatorname*{argmin}_{w_1,\dots,w_m,\theta} \sum_{i=1}^{N} -\log \frac{e^{S \cdot \cos(\alpha_{y_i})}}{\sum_{j=1}^{m} e^{S \cdot \cos(\alpha_j)}}$ (3). This reformulation achieves improved results on the face verification task, and many recent variants have also integrated the scaling factor S for convergence when optimizing the cosine-similarity (Wang et al. (2018b); Wojke & Bewley (2018); Deng et al. (2019); Wang et al. (2018a); Fan et al. (2019)).
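A quick numerical check of the identity underlying Eq. 2 — that cosine-similarity logits are exactly dot-product logits computed on l2-normalized vectors — can be written as follows (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=64)         # an image representation vector
W = rng.normal(size=(10, 64))   # ten class vectors, one per row

cosine_logits = (W @ f) / (np.linalg.norm(W, axis=1) * np.linalg.norm(f))
normalized_dot = (W / np.linalg.norm(W, axis=1, keepdims=True)) @ (f / np.linalg.norm(f))
assert np.allclose(cosine_logits, normalized_dot)  # Eq. 2 equals normalized Eq. 1
```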
According to Wang et al. (2017), cosine-similarity maximization models fail to converge when S = 1 due to the low range of the logits vector, where each cell is bounded between [−1, 1]. This low range prevents the predicted probabilities from getting close to 1 during training, and as a result, the distribution over target classes is close to uniform, so the loss is trapped at a very high value on the training set. Intuitively, this may sound like a reasonable explanation as to why directly maximizing the cosine-similarity fails to converge (S = 1). Note that even if an example is correctly classified and well separated, in the best scenario it achieves a cosine-similarity of 1 with its ground-truth class vector, while for all other classes the cosine-similarity is −1. Thus, for a classification task with m classes, the predicted probability for the example above would be $P(Y = y_i \mid x_i) = \frac{e^{1}}{e^{1} + (m-1)\cdot e^{-1}}$ (4). Notice that if the number of classes is m = 200, the predicted probability of the correctly classified example would be at most 0.035, and cannot be further optimized towards 1. As a result, the loss function would yield a high value for a correctly classified example, even if its image vector is placed precisely in the same direction as its ground-truth class vector. As in the previous section, we trained the same models over the same datasets, but instead of optimizing the dot-product, we optimized the cosine-similarity by normalizing $f_\theta(x_i)$ and $w_1, \dots, w_m$ in the forward pass. We denote these models as non-fixed cosine-similarity maximization models. Additionally, we trained the same cosine-similarity maximization models with fixed random class vectors, denoting these models as fixed cosine-similarity maximization models. In all models (fixed and non-fixed) we set S = 1 to directly maximize the cosine-similarity; results are shown in Table 2. Surprisingly, we reveal that the low range of the logits vector is not the cause preventing cosine-similarity maximization models from converging. As can be seen in the table, fixed cosine-maximization models achieve significantly higher accuracy, by up to 53%, compared to non-fixed models. Moreover, fixed cosine-maximization models with S = 1 can also outperform dot-product maximization models. This finding demonstrates that while the logits are bounded between [−1, 1], the models can still learn high-quality representations and decision boundaries. We further investigated the effects of S and trained the same fixed and non-fixed models for comparison, but this time we used grid search for the best-performing S value. As can be seen in Table 3, increasing the scaling factor S allows non-fixed models to achieve higher accuracies over all datasets. Yet, in terms of accuracy, there is no benefit to learning the class representation vectors instead of randomly drawing them and fixing them during training. To better understand the cause that prevents non-fixed cosine-maximization models from converging when S = 1, we compared these models with the same models trained with the optimal S scalar. For each model we measured the distances between its learned class vectors, and compared these distances to demonstrate the effect of S on them. Interestingly, we found that as S increased, the cosine-similarity between the class vectors decreased; that is, increasing S pushes the class vectors further apart from each other.
Compare, for example, the left and middle panels in Fig. 4, which show the cosine-similarity between the class vectors of models trained on STL-10 with S = 1 and S = 20, respectively. On the right panel of Fig. 4, we plot the number of misclassifications as a function of the cosine-similarity between the class vectors of the non-fixed cosine-maximization model trained on STL-10 with S = 1. It can be seen that the confusion between classes is high when the angular distance between them is low. As in the previous section, we observed strong correlations between the closeness of the class vectors and the number of misclassifications. We found correlations of 0.85, 0.87, 0.81, and 0.83 in models trained on STL-10, CIFAR-10, CIFAR-100, and Tiny ImageNet, respectively. By integrating the scaling factor S into Eq. 4 we get $P(Y = y_i \mid x_i) = \frac{e^{S\cdot 1}}{e^{S\cdot 1} + (m-1)\cdot e^{S\cdot(-1)}}$ (5). Note that by increasing S, the predicted probability in Eq. 5 increases. This is true even when the cosine-similarity between $f_\theta(x_i)$ and $w_{y_i}$ is less than 1. When S is set to a large value, the gap between the logits increases, and the predicted probability after the softmax is closer to 1. As a result, the model is discouraged from optimizing the cosine-similarity between the image representation and its ground-truth class vector all the way to 1, since the loss is already close to 0. In Table 4, we show that as we increase S, the cosine-similarity between the image vectors and their predicted class vectors decreases. These observations can provide an explanation as to why non-fixed models with S = 1 fail to converge. By setting S to a large scalar, the image vectors are spread around their class vectors to a larger degree, preventing the class vectors from getting close to each other. As a result, the inter-class separability increases and the misclassification rate between visually similar classes decreases. In contrast, setting S = 1 allows models to place the class vectors of visually similar classes closer in space, which leads to a high number of misclassifications. However, a disadvantage of increasing S and setting it to a large number is that the intra-class compactness is violated, since image vectors from the same class are spread and encoded relatively far from each other; see Fig. 5. Fixed cosine-maximization models successfully converge when S = 1, since the class vectors are initially far apart in space. By randomly drawing the class vectors, models are required to encode images from visually similar classes into vectors that are far apart in space; therefore, the inter-class separability is high. Additionally, the intra-class compactness is improved: since S can be set to 1, models are encouraged to maximize the cosine-similarity all the way to 1 and place image vectors from the same class close to their class vector. We validated this empirically by measuring the average cosine-similarity between image vectors and their predicted classes' vectors in fixed cosine-maximization models with S = 1. We obtained an average cosine-similarity of roughly 0.95 in all experiments, meaning that images from the same class were encoded compactly near their class vectors. In conclusion, although non-fixed cosine-similarity maximization models were proposed to improve upon the caveats of dot-product maximization by improving the inter-class separability and intra-class compactness, their performance is significantly lower without the integration of a scaling factor to multiply the logits vector.
Integrating the scaling factor and setting it to S > 1 decreases intra-class compactness and introduces a trade-off between accuracy and intra-class compactness. By fixing the class vectors, cosine-similarity maximization models can have both high performance and improved intra-class compactness. This means that multiple previous works (Wang et al. (2018b); Wojke & Bewley (2018); Deng et al. (2019); Wang et al. (2018a); Fan et al. (2019)) that adopted the cosine-maximization method and integrated a scaling factor for convergence might benefit from improved results by fixing the class vectors. 4 GENERALIZATION AND ROBUSTNESS TO CORRUPTIONS In this section we explore the generalization of the evaluated models to the learned concepts and measure their robustness to image corruptions. We do not aim to set state-of-the-art results, but rather to validate that by fixing the class vectors of a model, the model's generalization ability and robustness to corruptions remain competitive. 4.1 TRAINING PROCEDURE To evaluate the impact of ignoring the visual similarities in the classification layer, we evaluated the models on CIFAR-10, CIFAR-100 Krizhevsky et al. (2009), STL Coates et al. (2011), and Tiny ImageNet (https://tiny-imagenet.herokuapp.com/), containing 10, 100, 10, and 200 classes, respectively. For each dataset, we trained Resnet18 He et al. (2016a), PreActResnet18 He et al. (2016b), and MobileNetV2 Sandler et al. (2018) models with fixed and non-fixed class vectors. All models were trained using stochastic gradient descent with momentum. We used the standard normalization and data augmentation techniques. Due to space limitations, the values of the hyperparameters used for training the models can be found in our code repository. We normalized the randomly drawn, fixed class representation vectors by dividing them by their l2-norm. All reported results are the averaged results of 3 runs. 4.2 GENERALIZATION To measure how well the models were able to generalize to the learned concepts, we evaluated them on images containing objects from the same target classes appearing in their training dataset. For evaluating the models trained on STL-10 and CIFAR-100, we manually collected 2000 and 6000 images, respectively, from the publicly available dataset Open Images V4 Krasin et al. (2017). For CIFAR-10 we used the CIFAR-10.1 dataset Recht et al. (2018). All collected sets contain an equal number of images for each class. We omitted models trained on Tiny ImageNet from the evaluation since we were not able to collect images for all classes appearing in this set. Table 5 summarizes the results for all the models. Results suggest that excluding the class representation vectors from training does not decrease the generalization to learned concepts. 4.3 ROBUSTNESS TO CORRUPTIONS Next, we verified that excluding the class vectors from training did not decrease the models' robustness to image corruptions. For this, we applied three types of algorithmically generated corruptions to the test set and evaluated the accuracy of the models on these sets. The corruptions we applied are impulse noise, JPEG compression, and de-focus blur. Corruptions were generated using Jung (2018) and are available under our repository. Results, as shown in Table 6, suggest that randomly drawn fixed class vectors allow models to be highly robust to image corruptions.
5 CONCLUSION In this paper, we propose randomly drawing the parameters of the classification layer and excluding them from training. We showed that by doing so, the inter-class separability, intra-class compactness, and overall accuracy of the model can improve when maximizing the dot-product or the cosine-similarity between the image representation and the class vectors. We analyzed the cause that prevents non-fixed cosine-maximization models from converging. We also presented the generalization abilities of the fixed and non-fixed classification layers.
1. What is the focus of the paper regarding classification layer improvement? 2. What are the strengths of the proposed method, particularly in representation learning? 3. What are the weaknesses of the paper, especially in terms of experimental results? 4. Do you have any concerns about the effectiveness of the proposed method compared to other state-of-the-art methods? 5. What additional experiments should be conducted to validate the performance of the proposed model?
Review
Review This paper proposes a classification layer with randomized class representation vectors. The paper first analyzes the class-vector distributions under different training strategies, and then proposes the randomized class vectors to improve the representation learning performance. The proposed model is further extended and analyzed for the fixed cosine-similarity maximization setting. The experiments demonstrate the effectiveness of the proposed method compared with the basic/vanilla baselines. Pros: The motivation of this paper is comprehensive. Some quantitative and visual experimental results introduce the motivation of the proposed model. The randomized weights also provide a novel view for solving more machine learning problems. This is a good point. Cons: My main concern is the experimental results. The experiments are mainly done to evaluate fixed and non-fixed models without any other state-of-the-art methods, and the exact performance is considerably lower than other state-of-the-art methods. As a result, it is hard to confirm the effectiveness of the model. For example, 1) even though the visualization results are good (Figure 3), it does not mean the final performance is better; 2) the original model could be over-fitting, and a random and fixed weight layer could be considered a regularizer. There are some experiments that should be done: 1) Compare this method with relevant methods such as NormFace and ArcFace to prove the effectiveness of this approach. 2) Compare the exact performance on face-related datasets against other SOTA methods.
ICLR
Title Anytime Neural Network: a Versatile Trade-off Between Computation and Accuracy Abstract We present an approach for anytime predictions in deep neural networks (DNNs). For each test sample, an anytime predictor produces a coarse result quickly, and then continues to refine it until the test-time computational budget is depleted. Such predictors can address the growing computational problem of DNNs by automatically adjusting to varying test-time budgets. In this work, we study a general augmentation to feed-forward networks to form anytime neural networks (ANNs) via auxiliary predictions and losses. Specifically, we point out a blind spot in recent studies of such ANNs: the importance of high final accuracy. In fact, we show on multiple recognition data-sets and architectures that by having near-optimal final predictions in small anytime models, we can effectively double the speed of large ones to reach the corresponding accuracy level. We achieve such speed-ups with a simple weighting of anytime losses that oscillates during training. We also assemble a sequence of exponentially deepening ANNs to achieve both theoretically and practically near-optimal anytime results at any budget, at the cost of a constant fraction of additional consumed budget. 1 INTRODUCTION In recent years, the accuracy in visual recognition tasks has been greatly improved by increasingly complex convolutional neural networks, from AlexNet (Krizhevsky et al., 2012) and VGG (Simonyan & Zisserman, 2015), to ResNet (He et al., 2016), ResNeXt (Xie et al., 2017), and DenseNet (Huang et al., 2017b). However, the number of applications that require latency-sensitive responses is growing rapidly. Furthermore, their test-time computational budget can often vary. E.g., autonomous vehicles require real-time object detection, but the required detection speed depends on the vehicle speed; web servers need to meet varying amounts of data and user-request throughput throughout a day. Thus, it can be difficult for such applications to choose between slow predictors with high accuracy and fast predictors with low accuracy. In many cases, this dilemma can be resolved by an anytime predictor (Horvitz, 1987; Boddy & Dean, 1989; Zilberstein, 1996), which, for each test sample, produces a fast and crude initial prediction and continues to refine it as budget allows, so that at any test-time budget, the anytime predictor has a valid result for the sample, and the more budget is spent, the better the prediction is. In this work (for the full paper, see https://arxiv.org/abs/1708.06832), we focus on the anytime prediction problem in neural networks. We follow the recent works (Lee et al., 2015; Xie & Tu, 2015; Zamir et al., 2017; Huang et al., 2017a) in appending auxiliary predictions and losses to feed-forward networks for anytime predictions, and train them jointly end-to-end. However, we note that the existing methods all put only a small fraction of the total weighting on the final prediction, and as a result, large anytime models are often only as accurate as much smaller non-anytime models, because the accuracy gain is so costly in DNNs, as demonstrated in Fig. 1a. We address this problem with a novel and simple oscillating weighting of the losses, and will show in Sec. 3 that our small anytime models with near-optimal final predictions can effectively provide a two-times speed-up over large ones without them, on multiple data-sets, including ILSVRC (Russakovsky et al., 2015), and on multiple models, including the very recent Multi-Scale-DenseNets (MSDNets) (Huang et al., 2017a).
Observing that the proposed training techniques lead to ANNs that are near-optimal in late predictions but not as accurate in early predictions, we assemble ANNs of exponentially increasing depths to dedicate early predictions to smaller networks, while only delaying large networks by a constant fraction of additional test-time budget. 2 METHODS As illustrated in Fig. 1b, given a sample $(x, y) \sim D$, the initial feature map $x_0$ is set to $x$, and the subsequent feature transformations $f_1, f_2, \dots, f_L$ generate a sequence of intermediate features $x_i = f_i(x_{i-1}; \theta_i)$ for $i \ge 1$ using parameters $\theta_i$. Each feature map $x_i$ can then produce an auxiliary prediction $\hat{y}_i$ using a prediction layer $g_i$: $\hat{y}_i = g_i(x_i; w_i)$ with parameters $w_i$. Each auxiliary prediction $\hat{y}_i$ then incurs an expected loss $\ell_i := \mathbb{E}_{(x,y) \sim D}[\ell(y, \hat{y}_i)]$. We call such an augmented network an Anytime Neural Network (ANN). Let the parameters of the full ANN be $\theta = (\theta_1, w_1, \dots, \theta_L, w_L)$. The most common way to optimize these losses, $\ell_1, \dots, \ell_L$, end-to-end is to optimize a weighted sum $\min_\theta \sum_{i=1}^{L} B_i \ell_i(\theta)$, where $\{B_i\}_i$ forms the weight scheme for the losses. Alternating SIEVE weights. Three experimental observations lead to our proposed SIEVE weight scheme. First, the existing weight schemes, CONST (Lee et al., 2015; Xie & Tu, 2015; Huang et al., 2017a) and LINEAR (Zamir et al., 2017), both incur more than a 10% relative increase in final test errors, which effectively slows down anytime models multiple times. Second, we found that a large weight can improve a neighborhood of losses thanks to the high correlation among neighboring losses. Finally, keeping a fixed weighting may lead to solutions where the sum of the gradients is zero but the individual gradients are non-zero. The proposed SIEVE scheme puts half of the total weight on the final loss, so that the final gradient can outweigh the other gradients when all loss gradients have equal two-norms. It also has uneven weights on the early losses, to let as many losses as possible be near large weights. Formally, for L losses, we first add one unit of weight to $B_{\lfloor L/2 \rceil}$, where $\lfloor \cdot \rceil$ means rounding. We then add one unit to each $B_{\lfloor kL/4 \rceil}$ for $k = 1, 2, 3$, and then to each $B_{\lfloor kL/8 \rceil}$ for $k = 1, 2, \dots, 7$, and so on, until all predictors have non-zero weights. We finally normalize the $B_i$ so that $\sum_{i=1}^{L-1} B_i = 1$, and set $B_L = 1$. During each training iteration, we also sample a layer i in proportion to the $B_i$, and temporarily add $B_L \ell_i$ to the total loss, so as to oscillate the weights and avoid spurious solutions. We call ANNs with alternating weights alternating ANNs (AANNs). Though the proposed techniques are heuristics, they effectively speed up anytime models multiple times, as shown in Sec. 3. We hope our experimental results can inspire, and set baselines for, future principled approaches. EANN. Since AANNs put high weights on the final layer, they trade early accuracy for late accuracy. We leverage this effect to improve the early predictions of large ANNs: we propose to form a sequence of ANNs whose depths grow exponentially (EANN). By dedicating early predictions to small networks, an EANN can achieve better early results. Furthermore, if the largest model has depth L, we only compute log L small networks before the final one, and the total cost of the small networks is only a constant fraction of that of the final one. Hence, we only consume a constant fraction of additional test-time budget.
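The SIEVE construction can be written out as a short function. The sketch below is our reading of the description above (1-based indices; Python's round() stands in for the paper's ⌊·⌉ rounding, whose tie-breaking behavior at .5 we do not know), not the authors' implementation.

```python
def sieve_weights(L: int) -> list:
    """SIEVE weights B[1..L] for an ANN with L losses: early weights are
    assigned at indices round(k * L / 2), round(k * L / 4), ... until all of
    B[1..L-1] are non-zero, then normalized to sum to 1, with B[L] = 1 so the
    final loss carries half of the total weight."""
    B = [0.0] * (L + 1)          # B[0] is unused padding
    denom = 2
    while any(B[i] == 0.0 for i in range(1, L)):
        for k in range(1, denom):
            idx = min(max(round(k * L / denom), 1), L - 1)
            B[idx] += 1.0
        denom *= 2
    total = sum(B[1:L])
    for i in range(1, L):
        B[i] /= total            # normalize the early weights
    B[L] = 1.0                   # final loss outweighs all early losses combined
    return B[1:]

print(sieve_weights(8))  # e.g. the weight scheme for an 8-loss ANN
```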
Fig. 1c shows how an EANN with exponential base b = 2 works at test time. The EANN sequentially computes the ANNs, and only outputs an anytime result if the current result is better than the previous ones on validation. Formally, if we assume that each ANN has near-optimal results after $1/b$ of its layers, then we can prove that for any budget B, the EANN can achieve near-optimal predictions for budget B after spending a total budget of $C \times B$. Furthermore, for large b, $\mathbb{E}_{B \sim \mathrm{uniform}(1, L)}[C] \le 1 - \frac{1}{2b} + \frac{1 + \ln(b)}{b - 1} \to 1$, and $\sup_B C = 2 + \frac{1}{b - 1} \to 2$. 3 KEY EXPERIMENTS We present two key results: (1) small anytime models with SIEVE can outperform large ones with CONST, and (2) EANNs can improve early accuracy, but cost a constant fraction of extra budget. SIEVE vs. CONST of double the cost. In Fig. 2a and Fig. 2b, we compare SIEVE and CONST on ANNs that are based on ResNets, on CIFAR100 (Krizhevsky, 2009) and ILSVRC (Russakovsky et al., 2015). The networks with CONST have double the depths of those with SIEVE. We observe that SIEVE leads to the same final error rates as CONST of double the cost, but does so much faster. The two schemes also have similar early performance. Hence, SIEVE effectively speeds up the predictions of CONST by about two times. In Fig. 2c, we experiment with the very recent Multi-Scale-DenseNets (MSDNets) (Huang et al., 2017a), which are specifically modified from the recently popular DenseNets (Huang et al., 2017b) to produce state-of-the-art anytime predictions. We again observe that by improving the final anytime prediction of the smallest MSDNet26 without sacrificing too much early prediction accuracy, we make MSDNet26 effectively a sped-up version of MSDNet36 and MSDNet41. EANN vs. ANNs and OPT. In Fig. 2d, we assemble ResNet-ANNs of 45, 81, and 153 conv layers to form EANNs. We compare the EANNs against the parallel OPT, which comes from running regular networks of various depths in parallel. We observe that EANNs are able to significantly reduce the early errors of ANNs, but reach the final error rate later. Furthermore, ANNs with more accurate final predictions using SIEVE and EXP-LIN (another proposed scheme that focuses more on the final loss; see the full paper for details) are able to outperform CONST and LINEAR, since whenever an ANN completes within an EANN, its final result is the best one for a long period of time.
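At test time, the EANN procedure described above amounts to a simple sequential loop. The following is a hedged sketch at whole-network granularity (each ANN also produces intermediate anytime outputs, which we elide); anns, val_score, and cost are assumed names for the inputs, not identifiers from the authors' code.

```python
def eann_predict(anns, val_score, x, budget, cost):
    """anns: anytime networks ordered small to large (depths growing
    exponentially); val_score[i]: validation quality of anns[i];
    cost(net): test-time cost of running net on one sample."""
    best_score, prediction = float("-inf"), None
    for i, net in enumerate(anns):
        if cost(net) > budget:
            break                      # budget depleted: keep the last output
        budget -= cost(net)
        if val_score[i] > best_score:  # emit only results that beat predecessors
            best_score, prediction = val_score[i], net(x)
    return prediction
```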
1. How does the proposed anytime neural network (EANN) enable early predictions, and what are the key components of the approach? 2. What are the strengths of the paper, particularly in terms of the experimental results and the novel idea of adding auxiliary predictions? 3. Do you have any concerns or questions regarding the scalability of the EANN approach, especially in terms of depth and training complexity? 4. Are there any limitations or trade-offs in the EANN method that could be improved or optimized further? 5. What additional comparisons or alternatives would be useful to include in future experiments to validate the effectiveness of EANN?
Review
Review This paper proposes an anytime neural network, which can produce a prediction at any point during inference. To achieve that, the model includes auxiliary predictors which can make early predictions. Specifically, the paper presents a loss weighting scheme that accounts for the high correlation among nearby predictions, an oscillating loss weighting scheme for further improvement, and an ensemble of anytime neural networks. In the experiments, the test error of the proposed model was shown to be comparable to the optimal one at each time budget. It is an interesting idea to add auxiliary predictions to enable early predictions, and the experimental results look promising, as they are close to optimal at each time budget. 1. In Section 3.2, there are some discussions on the parallel computations of EANN. The parallel training is not clear to me and it would be great to have more explanation on this with examples. 2. It seems that EANN is not scalable because the depth is increasing exponentially. For example, given 10 machines, the model with the largest depth would have 2^10 layers, which is difficult to train. It would be great to discuss this issue. 3. In the experiments, it would be great to add a few alternatives to be compared against for anytime predictions.
ICLR
Title Anytime Neural Network: a Versatile Trade-off Between Computation and Accuracy Abstract We present an approach for anytime predictions in deep neural networks (DNNs). For each test sample, an anytime predictor produces a coarse result quickly, and then continues to refine it until the test-time computational budget is depleted. Such predictors can address the growing computational problem of DNNs by automatically adjusting to varying test-time budgets. In this work, we study a general augmentation to feed-forward networks to form anytime neural networks (ANNs) via auxiliary predictions and losses. Specifically, we point out a blind spot in recent studies of such ANNs: the importance of high final accuracy. In fact, we show on multiple recognition data-sets and architectures that by having near-optimal final predictions in small anytime models, we can effectively double the speed of large ones to reach the corresponding accuracy level. We achieve such speed-ups with a simple weighting of anytime losses that oscillates during training. We also assemble a sequence of exponentially deepening ANNs to achieve both theoretically and practically near-optimal anytime results at any budget, at the cost of a constant fraction of additional consumed budget. 1 INTRODUCTION In recent years, the accuracy in visual recognition tasks has been greatly improved by increasingly complex convolutional neural networks, from AlexNet (Krizhevsky et al., 2012) and VGG (Simonyan & Zisserman, 2015), to ResNet (He et al., 2016), ResNeXt (Xie et al., 2017), and DenseNet (Huang et al., 2017b). However, the number of applications that require latency-sensitive responses is growing rapidly. Furthermore, their test-time computational budget can often vary. E.g., autonomous vehicles require real-time object detection, but the required detection speed depends on the vehicle speed; web servers need to meet varying amounts of data and user-request throughput throughout a day. Thus, it can be difficult for such applications to choose between slow predictors with high accuracy and fast predictors with low accuracy. In many cases, this dilemma can be resolved by an anytime predictor (Horvitz, 1987; Boddy & Dean, 1989; Zilberstein, 1996), which, for each test sample, produces a fast and crude initial prediction and continues to refine it as budget allows, so that at any test-time budget, the anytime predictor has a valid result for the sample, and the more budget is spent, the better the prediction is. In this work (for the full paper, see https://arxiv.org/abs/1708.06832), we focus on the anytime prediction problem in neural networks. We follow the recent works (Lee et al., 2015; Xie & Tu, 2015; Zamir et al., 2017; Huang et al., 2017a) in appending auxiliary predictions and losses to feed-forward networks for anytime predictions, and train them jointly end-to-end. However, we note that the existing methods all put only a small fraction of the total weighting on the final prediction, and as a result, large anytime models are often only as accurate as much smaller non-anytime models, because the accuracy gain is so costly in DNNs, as demonstrated in Fig. 1a. We address this problem with a novel and simple oscillating weighting of the losses, and will show in Sec. 3 that our small anytime models with near-optimal final predictions can effectively provide a two-times speed-up over large ones without them, on multiple data-sets, including ILSVRC (Russakovsky et al., 2015), and on multiple models, including the very recent Multi-Scale-DenseNets (MSDNets) (Huang et al., 2017a).
Observing that the proposed training techniques lead to ANNs that are near-optimal in late predictions but not as accurate in early predictions, we assemble ANNs of exponentially increasing depths to dedicate early predictions to smaller networks, while only delaying large networks by a constant fraction of additional test-time budget. 2 METHODS As illustrated in Fig. 1b, given a sample $(x, y) \sim D$, the initial feature map $x_0$ is set to $x$, and the subsequent feature transformations $f_1, f_2, \dots, f_L$ generate a sequence of intermediate features $x_i = f_i(x_{i-1}; \theta_i)$ for $i \ge 1$ using parameters $\theta_i$. Each feature map $x_i$ can then produce an auxiliary prediction $\hat{y}_i$ using a prediction layer $g_i$: $\hat{y}_i = g_i(x_i; w_i)$ with parameters $w_i$. Each auxiliary prediction $\hat{y}_i$ then incurs an expected loss $\ell_i := \mathbb{E}_{(x,y) \sim D}[\ell(y, \hat{y}_i)]$. We call such an augmented network an Anytime Neural Network (ANN). Let the parameters of the full ANN be $\theta = (\theta_1, w_1, \dots, \theta_L, w_L)$. The most common way to optimize these losses, $\ell_1, \dots, \ell_L$, end-to-end is to optimize a weighted sum $\min_\theta \sum_{i=1}^{L} B_i \ell_i(\theta)$, where $\{B_i\}_i$ forms the weight scheme for the losses. Alternating SIEVE weights. Three experimental observations lead to our proposed SIEVE weight scheme. First, the existing weight schemes, CONST (Lee et al., 2015; Xie & Tu, 2015; Huang et al., 2017a) and LINEAR (Zamir et al., 2017), both incur more than a 10% relative increase in final test errors, which effectively slows down anytime models multiple times. Second, we found that a large weight can improve a neighborhood of losses thanks to the high correlation among neighboring losses. Finally, keeping a fixed weighting may lead to solutions where the sum of the gradients is zero but the individual gradients are non-zero. The proposed SIEVE scheme puts half of the total weight on the final loss, so that the final gradient can outweigh the other gradients when all loss gradients have equal two-norms. It also has uneven weights on the early losses, to let as many losses as possible be near large weights. Formally, for L losses, we first add one unit of weight to $B_{\lfloor L/2 \rceil}$, where $\lfloor \cdot \rceil$ means rounding. We then add one unit to each $B_{\lfloor kL/4 \rceil}$ for $k = 1, 2, 3$, and then to each $B_{\lfloor kL/8 \rceil}$ for $k = 1, 2, \dots, 7$, and so on, until all predictors have non-zero weights. We finally normalize the $B_i$ so that $\sum_{i=1}^{L-1} B_i = 1$, and set $B_L = 1$. During each training iteration, we also sample a layer i in proportion to the $B_i$, and temporarily add $B_L \ell_i$ to the total loss, so as to oscillate the weights and avoid spurious solutions. We call ANNs with alternating weights alternating ANNs (AANNs). Though the proposed techniques are heuristics, they effectively speed up anytime models multiple times, as shown in Sec. 3. We hope our experimental results can inspire, and set baselines for, future principled approaches. EANN. Since AANNs put high weights on the final layer, they trade early accuracy for late accuracy. We leverage this effect to improve the early predictions of large ANNs: we propose to form a sequence of ANNs whose depths grow exponentially (EANN). By dedicating early predictions to small networks, an EANN can achieve better early results. Furthermore, if the largest model has depth L, we only compute log L small networks before the final one, and the total cost of the small networks is only a constant fraction of that of the final one. Hence, we only consume a constant fraction of additional test-time budget.
Fig 1c shows how an EANN with exponential base b = 2 works at test-time. The EANN sequentially computes the ANNs, and only outputs an anytime result if the current result is better than the previous ones in validation. Formally, if we assume that each ANN has near-optimal results after 1/b of its layers, then we can prove that for any budget B, the EANN can achieve a near-optimal prediction for budget B after spending C × B total budget. Furthermore, for large b, E_{B∼Uniform(1,L)}[C] ≤ 1 − 1/(2b) + (1 + ln b)/(b − 1) → 1, and sup_B C = 2 + 1/(b − 1) → 2. 3 KEY EXPERIMENTS We present two key results: (1) small anytime models with SIEVE can outperform large ones with CONST, and (2) EANNs can improve early accuracy, but cost a constant fraction of extra budget. SIEVE vs. CONST of double costs. In Fig. 2a and Fig. 2b, we compare SIEVE and CONST on ANNs that are based on ResNets on CIFAR100 (Krizhevsky, 2009) and ILSVRC (Russakovsky et al., 2015). The networks with CONST have double the depths of those with SIEVE. We observe that SIEVE leads to the same final error rates as CONST of double the cost, but does so much faster. The two schemes also have similar early performance. Hence, SIEVE effectively speeds up the predictions of CONST by about two times. In Fig. 2c, we experiment with the very recent Multi-Scale DenseNets (MSDNets) (Huang et al., 2017a), which are specifically modified from the recently popular DenseNets (Huang et al., 2017b) to produce state-of-the-art anytime predictions. We again observe that by improving the final anytime prediction of the smallest MSDNet26 without sacrificing too much early prediction, we make MSDNet26 effectively a sped-up version of MSDNet36 and MSDNet41. EANN vs. ANNs and OPT. In Fig. 2d, we assemble ResNet-ANNs of 45, 81 and 153 conv layers to form EANNs. We compare the EANNs against the parallel OPT, which comes from running regular networks of various depths in parallel. We observe that EANNs are able to significantly reduce the early errors of ANNs, but reach the final error rate later. Furthermore, ANNs with more accurate final predictions using SIEVE and EXP-LIN [2] are able to outperform CONST and LINEAR, since whenever an ANN completes in an EANN, its final result is the best one for a long period of time. [2] EXP-LIN is another proposed scheme that focuses more on the final loss. See the full paper for details.
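The EANN evaluation order can be sketched in a few lines; the iterator interface below (each ANN yielding (cost, prediction, validation score) triples) is our assumption, since the paper only specifies that a result is output when it beats the previous ones in validation.

```python
def eann_predict(anns, budget):
    """Run an EANN at test time: evaluate exponentially deeper ANNs in
    sequence, keep the best validated anytime result, and stop when the
    test-time budget is depleted."""
    spent, best, best_score = 0.0, None, float("-inf")
    for ann in anns:                         # smallest network first
        for cost, pred, val_score in ann:    # anytime predictors in depth order
            spent += cost
            if spent > budget:               # budget exhausted mid-network
                return best
            if val_score > best_score:       # only output improving results
                best, best_score = pred, val_score
    return best
```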
1. What is the main contribution of the paper, and how does it differ from previous works in the field? 2. How effective are the proposed weighting schemes in improving the performance of the residual network, and how do they compare to other methods? 3. What are the limitations of the approach, and how might it be improved in future work? 4. How well did the authors justify their design choices, such as the choice of weighting schemes and the decision to ensemble the model? 5. Are there any concerns or questions about the experimental results presented in the paper?
Review
Review 1. Paper Summary This paper adds a separate network at every layer of a residual network that performs classification. They minimize the loss of every classifier using two proposed weighting schemes. They also ensemble this model. 2. High level paper The organization of this paper is a bit confusing. Two weighting schemes are introduced in Section 3.1, then the ensemble model is described in Section 3.2, then the weighting schemes are justified in Section 4.1. Overall this method is essentially a cascade where each cascade classifier is a residual block. Every input is passed through as many stages as possible until the budget is reached. While this model is likely quite useful in industrial settings, I don't think the model itself is wholly original. The authors have done extensive experiments evaluating their method in different settings. I would have liked to see a comparison with at least one other anytime method. I think it is slightly unfair to say that you are comparing with Xie & Tu, 2015 and Huang et al., 2017 just because they use the CONSTANT weighting scheme. 3. High level technical I have a few concerns: - Why does AANN+LINEAR nearly match the accuracy of EANN+SIEVE near 3e9 FLOPS in Figure 4b but EANN+LINEAR does not in Figure 4a? Shouldn't EANN+LINEAR be strictly better than AANN+LINEAR? - Why do the authors choose these specific weighting schemes? Section 4.1 is devoted to explaining this but it is still unclear to me. They talk about there being correlation between the predictors near the end of the model so they don't want to distribute weight near the final predictors, but this general observation doesn't obviously lead to these weighting schemes; they still seem a bit ad hoc. A few other comments: - Figure 3b seems to contain strictly less information than Figure 4a; I would remove Figure 3b and draw lines showing the speedup you get for one or two accuracy levels. Questions: - Section 3.1: "Such an ideal θ* does not exist in general and often does not exist in practice." Why is this the case? - Section 3.1: "In particular, spreading weights evenly as in (Lee et al., 2015) keeps all ℓ_i away from their possible respective minimum." Why is this true? - Section 3.1: "Since we will evaluate near depth ⌊3L/4⌉, and it is the center of L/2 low-weight layers, we increase its weight by 1/8." I am completely lost here, why do you do this? 4. Review summary Ultimately because the model itself resembles previous cascade models, the selected weightings have little justification, and there isn't a comparison with another anytime method, I think this paper isn't yet ready for acceptance at ICLR.
ICLR
1. What are the main contributions and proposed approaches of the paper regarding anytime prediction in neural networks? 2. What are the strengths and weaknesses of the reviewed paper? 3. Do the proposed heuristics have a solid scientific foundation, or are they just arbitrary choices? 4. How does the reviewer assess the significance and performance of the proposed method compared to other works in the field? 5. Are there any concerns or suggestions regarding the presentation and visualization of the results?
Review
Review This paper aims to endow neural networks with the ability to produce anytime predictions. The authors propose several heuristics to reweight and oscillate the loss to improve the anytime performance. In addition, they propose to use a sequence of exponentially deepening anytime neural networks to reduce the performance gap for early classifiers. The proposed approaches are validated on two image classification datasets. Pros: - The paper is well written and easy to follow. - It addresses an interesting problem with reasonable approaches. Cons: - The loss reweighting and oscillating schemes appear to be just heuristics. It is not clear what the scientific contributions are. - I do not fully agree with the explanation given for the "alternating weights". If the joint loss leads to zero gradient for some weights, then why would you consider it problematic? - There are few baselines compared in the result section. In addition, the proposed method underperforms the MSDNet (Huang et al., 2017) on ILSVRC2012. - The EANN is similar to the method used by Adaptive Networks (Bolukbasi et al., 2017), and the baseline "Ensemble of ResNets (varying depth)" in the MSDNet paper. - Could you show the error bar in Figure 2(a)? Usually an error difference of less than 0.5% on CIFAR-100 is not considered significant. - I'm not convinced that AANN really works significantly better than ANN according to the results in Table 1(a). It seems that ANN still outperforms AANN in many cases. - I would suggest showing the results in Table 1(b) with a figure.
ICLR
Title Learning Rationalizable Equilibria in Multiplayer Games Abstract A natural goal in multi-agent learning is to learn rationalizable behavior, where players learn to avoid any Iteratively Dominated Action (IDA). However, standard no-regret based equilibria-finding algorithms could take exponentially many samples to find such rationalizable strategies. In this paper, we first propose a simple yet sample-efficient algorithm for finding a rationalizable action profile in multi-player general-sum games under bandit feedback, which substantially improves over the results of Wu et al. (2021). We further develop algorithms with the first efficient guarantees for learning rationalizable Coarse Correlated Equilibria (CCE) and Correlated Equilibria (CE). Our algorithms incorporate several novel techniques to guarantee the elimination of IDAs and no (swap-)regret simultaneously, including a correlated exploration scheme and adaptive learning rates, which may be of independent interest. We complement our results with a sample complexity lower bound showing the sharpness of our guarantees. 1 INTRODUCTION A common objective in multi-agent learning is to find various equilibria, such as Nash equilibria (NE), correlated equilibria (CE) and coarse correlated equilibria (CCE). Generally speaking, a player in equilibrium lacks incentive to deviate assuming conformity of other players to the same equilibrium. Equilibrium learning has been extensively studied in the literature on game theory and online learning, and no-regret based learners can provably learn approximate CE and CCE with both computational and statistical efficiency (Stoltz, 2005; Cesa-Bianchi & Lugosi, 2006). However, not all equilibria are created equal. As shown by Viossat & Zapechelnyuk (2013), a CCE can be entirely supported on dominated actions—actions that are worse off than some other strategy in all circumstances—which rational agents should apparently never play. Approximate CE also suffers from a similar problem. As shown by Wu et al. (2021, Theorem 1), there are examples where an ϵ-CE always plays iteratively dominated actions—actions that would be eliminated when iteratively deleting strictly dominated actions—unless ϵ is exponentially small. It is also shown that standard no-regret algorithms are indeed prone to finding such seemingly undesirable solutions (Wu et al., 2021). The intrinsic reason behind this is that CCE and approximate CE may not be rationalizable, and existing algorithms can indeed fail to find rationalizable solutions. Different from equilibria notions, rationalizability (Bernheim, 1984; Pearce, 1984) looks at the game from the perspective of a single player without knowledge of the actual strategies of other players, and only assumes common knowledge of their rationality. A rationalizable strategy will avoid strictly dominated actions, and, assuming other players have also eliminated their dominated actions, iteratively avoid strictly dominated actions in the subgame. Rationalizability is a central solution concept in game theory (Osborne & Rubinstein, 1994) and has found applications in auctions (Battigalli & Siniscalchi, 2003) and mechanism design (Bergemann et al., 2011). If an (approximate) equilibrium only employs rationalizable actions, it would prevent irrational behavior such as playing dominated actions. Such equilibria are arguably more reasonable than unrationalizable ones, and constitute a stronger solution concept.
This motivates us to consider the following open question: Can we efficiently learn equilibria that are also rationalizable? Despite its fundamental role in multi-agent reasoning, rationalizability was rarely studied from a learning perspective until recently, with Wu et al. (2021) giving the first algorithm for learning rationalizable strategies from bandit feedback. However, the problem of learning rationalizable CE and CCE remains a challenging open problem. Due to the existence of unrationalizable equilibria, running standard CE or CCE learners will not guarantee rationalizable solutions. On the other hand, one cannot hope to first identify all rationalizable actions and then find an equilibrium on the subgame, since even determining whether an action is rationalizable requires exponentially many samples (see Proposition 2). Therefore, achieving rationalizability and approximate equilibria simultaneously is nontrivial and presents new algorithmic challenges. In this work, we address the challenges above and give a positive answer to our main question. Our contributions can be summarized as follows: • As a first step, we provide a simple yet sample-efficient algorithm for identifying a ∆-rationalizable [1] action profile under bandit feedback, using only Õ(LNA/∆²) [2] samples in normal-form games with N players, A actions per player and a minimum elimination length of L. This greatly improves the result of Wu et al. (2021) and is tight up to logarithmic factors when L = O(1). • Using the above algorithm as a subroutine, we develop exponential-weights-based algorithms that can provably find a ∆-rationalizable ϵ-CCE using Õ(LNA/∆² + NA/ϵ²) samples, and a ∆-rationalizable ϵ-CE using Õ(LNA/∆² + NA²/min{ϵ², ∆²}) samples. To the best of our knowledge, these are the first guarantees for learning rationalizable approximate CCE and CE. • We also provide reduction schemes that find ∆-rationalizable ϵ-CCE/CE using black-box algorithms for ϵ-CCE/CE. Despite having slightly worse rates, these algorithms can directly leverage progress in equilibria finding, which may be of independent interest. [1] An action is ∆-rationalizable if it survives iterative elimination of ∆-dominated actions; cf. Definition 1. [2] Throughout this paper, we use Õ to suppress logarithmic factors in N, A, L, 1/∆, 1/δ, and 1/ϵ. 1.1 RELATED WORK Rationalizability and iterative dominance elimination. Rationalizability (Bernheim, 1984; Pearce, 1984) is a notion that captures rational reasoning in games and relaxes Nash Equilibrium. Rationalizability is closely related to the iterative elimination of dominated actions, which has been a focus of game theory research since the 1950s (Luce & Raiffa, 1957). It can be shown that an action is rationalizable if and only if it survives iterative elimination of strictly dominated actions [3] (Pearce, 1984). There is also experimental evidence supporting iterative elimination of dominated strategies as a model of human reasoning (Camerer, 2011). [3] For this equivalence to hold, we need to allow dominance by mixed strategies, and correlated beliefs when there are more than two players. These conditions are met in the setting of this work. Equilibria learning in games. There is a rich literature on applying online learning algorithms to learning equilibria in games. It is well known that if all agents have no regret, the resulting empirical average is an ϵ-CCE (Young, 2004), while if all agents have no swap-regret, the resulting empirical average is an ϵ-CE (Hart & Mas-Colell, 2000; Cesa-Bianchi & Lugosi, 2006).
Later work continuing this line of research includes results with faster convergence rates (Syrgkanis et al., 2015; Chen & Peng, 2020; Daskalakis et al., 2021), last-iterate convergence guarantees (Daskalakis & Panageas, 2018; Wei et al., 2020), and extensions to extensive-form games (Celli et al., 2020; Bai et al., 2022b;a; Song et al., 2022) and Markov games (Song et al., 2021; Jin et al., 2021). Computational and learning aspects of rationalizability. Despite its conceptual importance, rationalizability and iterative dominance elimination are not well studied from a computational or learning perspective. For iterative strict dominance elimination in two-player games, Knuth et al. (1988) provided a cubic-time algorithm and proved that the problem is P-complete. The weak dominance version of the problem was proven to be NP-complete by Conitzer & Sandholm (2005). Hofbauer & Weibull (1996) showed that in a class of learning dynamics which includes replicator dynamics—the continuous-time variant of Follow-The-Regularized-Leader (FTRL)—all iteratively strictly dominated actions vanish over time, while Mertikopoulos & Moustakas (2010) proved similar results for stochastic replicator dynamics; however, neither work provides finite-time guarantees. Cohen et al. (2017) proved that Hedge eliminates dominated actions in finite time, but did not extend their results to the more challenging case of iteratively dominated actions. The most closely related work in the literature is that on learning rationalizable actions by Wu et al. (2021), who proposed the Exp3-DH algorithm to find a strategy mostly supported on rationalizable actions with a polynomial rate. Our Algorithm 1 accomplishes the same task with a faster rate, while our Algorithms 2 & 3 deal with the more challenging problems of finding ϵ-CE/CCE that are also rationalizable. Although Exp3-DH is based on a no-regret algorithm, it does not enjoy regret or weighted-regret guarantees and thus does not provably find rationalizable equilibria. 2 PRELIMINARY An N-player normal-form game involves N players whose joint action space is denoted by A = A_1 × ··· × A_N, and is defined by utility functions u_1, ..., u_N : A → [0, 1]. Let A = max_{i∈[N]} |A_i| denote the maximum number of actions per player, let x_i denote a mixed strategy of the i-th player (i.e., a distribution over A_i), and let x_{-i} denote a (correlated) mixed strategy of the other players (i.e., a distribution over ∏_{j≠i} A_j). We further denote u_i(x_i, x_{-i}) := E_{a_i∼x_i, a_{-i}∼x_{-i}} u_i(a_i, a_{-i}). We use ∆(S) to denote the set of distributions over a set S. Learning from bandit feedback. We consider the bandit feedback setting where in each round, each player i ∈ [N] chooses an action a_i ∈ A_i, and then observes a random feedback U_i ∈ [0, 1] such that E[U_i | a_1, a_2, ..., a_N] = u_i(a_1, a_2, ..., a_N). 2.1 RATIONALIZABILITY An action a ∈ A_i is said to be rationalizable if it could be the best response to some (possibly correlated) belief about the other players' strategies, assuming that they are also rational. In other words, the set of rationalizable actions is obtained by iteratively removing actions that could never be a best response.
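Returning to the bandit-feedback model above, the following is a minimal sketch of one round of play; the Bernoulli noise model and the encoding of utilities as callables are our assumptions (the setting only requires E[U_i | a] = u_i(a)).

```python
import numpy as np

def play_round(utilities, joint_action, rng):
    """One round of bandit feedback: each player i observes a single noisy
    payoff U_i in {0, 1} whose conditional mean is u_i(joint_action).
    `utilities[i]` maps a joint-action tuple to a mean payoff in [0, 1]."""
    return [rng.binomial(1, u(joint_action)) for u in utilities]

# Example: a 2-player game; player 0 earns 0.7 in expectation on (0, 0).
rng = np.random.default_rng(0)
utilities = [lambda a: 0.7 if a == (0, 0) else 0.2,
             lambda a: 0.5]
feedback = play_round(utilities, (0, 0), rng)
```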
For finite normal-form games, this is in fact equivalent to the iterative elimination of strictly dominated actions [4] (Osborne & Rubinstein, 1994, Lemma 60.1). Definition 1 (∆-Rationalizability). [5] Define E_1 := ∪_{i=1}^N {a ∈ A_i : ∃x ∈ ∆(A_i), ∀a_{-i}, u_i(a, a_{-i}) ≤ u_i(x, a_{-i}) − ∆}, which is the set of ∆-dominated actions over all players. Further define E_l := ∪_{i=1}^N {a ∈ A_i : ∃x ∈ ∆(A_i), ∀a_{-i} s.t. a_{-i} ∩ E_{l−1} = ∅, u_i(a, a_{-i}) ≤ u_i(x, a_{-i}) − ∆}, which is the set of actions that would be eliminated by the l-th round. Define L = inf{l : E_{l+1} = E_l} as the minimum elimination length, and E_L as the set of ∆-iteratively dominated actions (∆-IDAs). Actions in ∪_{i=1}^N A_i \ E_L are said to be ∆-rationalizable. Notice that E_1 ⊆ ··· ⊆ E_L = E_{L+1}. Here ∆ plays a similar role to the reward gap for best-arm identification in stochastic multi-armed bandits. We will henceforth use ∆-rationalizability and survival of L rounds of iterative dominance elimination (IDE) interchangeably [6]. Since one cannot eliminate all the actions of a player, at least N actions survive, which further implies L ≤ N(A − 1) < NA. [4] See, e.g., the Diamond-In-the-Rough (DIR) games in Wu et al. (2021, Definition 2) for a concrete example of iterative dominance elimination. [5] Here we slightly abuse notation and use ∆ to refer to both the gap and the probability simplex. [6] Alternatively, one can define ∆-rationalizability by the iterative elimination of actions that are never a ∆-best response, which is mathematically equivalent to Definition 1 (see Appendix A.1). 2.2 EQUILIBRIA IN GAMES We consider three common learning objectives, namely Nash Equilibrium (NE), Correlated Equilibrium (CE) and Coarse Correlated Equilibrium (CCE). Definition 2 (Nash Equilibrium). A strategy profile (x_1, ..., x_N) is an ϵ-Nash equilibrium if u_i(x_i, x_{-i}) ≥ u_i(a, x_{-i}) − ϵ, ∀a ∈ A_i, ∀i ∈ [N]. Definition 3 (Correlated Equilibrium). A correlated strategy Π ∈ ∆(A) is an ϵ-correlated equilibrium if ∀i ∈ [N], ∀φ : A_i → A_i, ∑_{a_i∈A_i, a_{-i}∈A_{-i}} Π(a_i, a_{-i}) u_i(a_i, a_{-i}) ≥ ∑_{a_i∈A_i, a_{-i}∈A_{-i}} Π(a_i, a_{-i}) u_i(φ(a_i), a_{-i}) − ϵ. Definition 4 (Coarse Correlated Equilibrium). A correlated strategy Π ∈ ∆(A) is an ϵ-CCE if ∀i ∈ [N], ∀a′ ∈ A_i, ∑_{a_i∈A_i, a_{-i}∈A_{-i}} Π(a_i, a_{-i}) u_i(a_i, a_{-i}) ≥ ∑_{a_i∈A_i, a_{-i}∈A_{-i}} Π(a_i, a_{-i}) u_i(a′, a_{-i}) − ϵ. When ϵ = 0, the above definitions give exact Nash equilibrium, correlated equilibrium, and coarse correlated equilibrium, respectively. It is well known that every ϵ-NE is an ϵ-CE, and every ϵ-CE is an ϵ-CCE. Furthermore, we call an ϵ-CCE/CE that almost surely plays only ∆-rationalizable actions a ∆-rationalizable ϵ-CCE/CE. 2.3 CONNECTION BETWEEN EQUILIBRIA AND RATIONALIZABILITY It is known that all actions in the support of an exact CE are rationalizable (Osborne & Rubinstein, 1994, Lemma 56.2). However, one can easily construct an exact CCE that is supported on dominated (hence, unrationalizable) actions (see e.g. Viossat & Zapechelnyuk (2013, Fig. 3)). One might be tempted to suggest that running a CE solver immediately finds a CE (and hence CCE) that is also rationalizable. However, the connection between CE and rationalizability becomes quite different when it comes to approximate equilibria, which are inevitable in the presence of noise. As shown by Wu et al. (2021, Theorem 1), an ϵ-CE can be entirely supported on iteratively dominated actions unless ϵ = O(2^{−A}). In other words, rationalizability is not guaranteed by running an approximate CE solver unless it is run with extremely high accuracy. Therefore, finding ϵ-CE and CCE that are simultaneously rationalizable remains a challenging open problem.
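In small games, Definition 4 above can be verified directly by enumerating unilateral deviations; the sketch below does this for payoff tensors (one axis per player). The function name and the tensor encoding are our illustration, not part of the paper.

```python
import numpy as np

def is_eps_cce(payoffs, Pi, eps):
    """Check whether the correlated strategy `Pi` (an array over the joint
    action space A) is an eps-CCE. `payoffs[i]` is player i's utility
    tensor with the same shape as `Pi`. Brute force: O(N * A) deviations."""
    for i, u_i in enumerate(payoffs):
        value = float((Pi * u_i).sum())            # expected on-path payoff
        marginal = Pi.sum(axis=i)                  # distribution of a_{-i}
        for a_prime in range(u_i.shape[i]):        # fixed deviation a'
            deviated = np.take(u_i, a_prime, axis=i)
            if float((marginal * deviated).sum()) > value + eps:
                return False                       # profitable deviation found
    return True
```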
Since NE is a subset of CE, all actions in the support of an (exact) NE are also rationalizable. Unlike approximate CE, for ϵ < poly(∆, 1/N, 1/A), one can show that any ϵ-Nash equilibrium is still mostly supported on rationalizable actions. Proposition 1. If x* = (x_1*, ..., x_N*) is an ϵ-Nash with ϵ < ∆²/(24N²A), then ∀i, Pr_{a∼x_i*}[a ∈ E_L] ≤ 2Lϵ/∆. Therefore, for two-player zero-sum games, it is possible to run an approximate NE solver and automatically find a rationalizable ϵ-NE. However, this method induces a rather slow rate [7], and we will provide a much more efficient algorithm for finding rationalizable ϵ-NE in Section 4. [7] For two-player zero-sum games, the marginals of any CCE form an NE, so NE can be found efficiently. This is not true for general games, where finding NE is computationally hard and takes Ω(2^N) samples. 3 LEARNING RATIONALIZABLE ACTION PROFILES In order to learn a rationalizable CE/CCE, one might suggest identifying the set of all rationalizable actions, and then learning a CE or CCE on this subgame. Unfortunately, as shown by Proposition 2, even the simpler problem of deciding whether one single action is rationalizable is statistically hard. Proposition 2. For ∆ < 0.1, any algorithm that correctly decides whether an action is ∆-rationalizable with 0.9 probability needs Ω(A^{N−1}∆^{−2}) samples. This negative result motivates us to consider an easier task: can we at least find one rationalizable action profile sample-efficiently? Formally, we say an action profile (a_1, ..., a_N) is rationalizable if for all i ∈ [N], a_i is a rationalizable action. This is arguably one of the most fundamental tasks regarding rationalizability. For mixed-strategy dominance-solvable games (Alon et al., 2021), the unique rationalizable action profile is the unique NE and also the unique CE of the game. Therefore this easier task per se is still of practical importance. In this section we answer this question in the affirmative. We provide a sample-efficient algorithm which finds a rationalizable action profile using only Õ(LNA/∆²) samples. This algorithm will also serve as an important subroutine for the algorithms finding rationalizable CCE/CE in the later sections. The intuition behind this algorithm is simple: if an action profile a_{-i} can survive l rounds of IDE, then its best response a_i (i.e., argmax_{a∈A_i} u_i(a, a_{-i})) can survive at least l + 1 rounds of IDE, since the action a_i can only be eliminated after some actions in a_{-i} are eliminated. Concretely, we start from an arbitrary action profile (a_1^(0), ..., a_N^(0)). In each round l ∈ [L], we compute the (empirical) best response to a_{-i}^(l−1) for each i ∈ [N], and use those best responses to construct a new action profile (a_1^(l), ..., a_N^(l)). By constructing iterative best responses, we end up with an action profile that can survive L rounds of IDE, which means surviving any number of rounds of IDE according to the definition of L. The full algorithm is presented in Algorithm 1.
Algorithm 1 Iterative Best Response
1: Initialization: choose a_i^(0) ∈ A_i arbitrarily for all i ∈ [N]
2: for l = 1, ..., L do
3:   for i ∈ [N] do
4:     For all a ∈ A_i, play (a, a_{-i}^(l−1)) M times, compute player i's average payoff û_i(a, a_{-i}^(l−1))
5:     Set a_i^(l) ← argmax_{a∈A_i} û_i(a, a_{-i}^(l−1))  // Computing the empirical best response
6: return (a_1^(L), ..., a_N^(L))
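A direct Python sketch of Algorithm 1 follows; the oracle interface game.sample(i, a, a_minus_i), returning one noisy payoff for player i, and the attribute game.actions are our assumptions about how the bandit feedback is exposed.

```python
import numpy as np

def iterative_best_response(game, L, M):
    """Algorithm 1: starting from an arbitrary profile, replace every
    player's action by its empirical best response against the previous
    profile, for L rounds; each payoff is averaged over M noisy samples."""
    profile = [actions[0] for actions in game.actions]       # arbitrary a^(0)
    for _ in range(L):
        new_profile = list(profile)
        for i, actions in enumerate(game.actions):
            a_minus_i = tuple(profile[:i] + profile[i + 1:])
            # average of M noisy payoffs for each candidate action a
            means = [np.mean([game.sample(i, a, a_minus_i) for _ in range(M)])
                     for a in actions]
            new_profile[i] = actions[int(np.argmax(means))]  # empirical BR
        profile = new_profile                                # a^(l)
    return profile
```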
For Algorithm 1 we have the following theoretical guarantee. Theorem 3. With M = ⌈16 ln(LNA/δ)/∆²⌉, with probability 1 − δ, Algorithm 1 returns an action profile that is ∆-rationalizable, using a total of Õ(LNA/∆²) samples. Wu et al. (2021) provide the first polynomial sample complexity results for finding rationalizable action profiles. They prove that the Exp3-DH algorithm is able to find a distribution with a 1 − ζ fraction supported on ∆-rationalizable actions using Õ(L^{1.5}N³A^{1.5}/(ζ³∆³)) samples under bandit feedback [8]. Compared to their result, our sample complexity bound Õ(LNA/∆²) has more favorable dependence on all problem parameters, and our algorithm outputs a distribution that is fully supported on rationalizable actions (and thus has no dependence on ζ). [8] Wu et al. (2021)'s result allows trade-offs between variables via different choices of algorithmic parameters. However, a ζ^{−1}∆^{−3} factor is unavoidable regardless of the choice of parameters. We further complement Theorem 3 with a sample complexity lower bound showing that the linear dependencies on N and A are optimal. This lower bound suggests that the Õ(LNA/∆²) upper bound is tight up to logarithmic factors when L = O(1), and we conjecture that this is true for general L. Theorem 4. Even for games with L ≤ 2, any algorithm that returns a ∆-rationalizable action profile with 0.9 probability needs Ω(NA/∆²) samples. Conjecture 5. The minimax optimal sample complexity for finding a ∆-rationalizable action profile is Θ(LNA/∆²) for games with minimum elimination length L. 4 LEARNING RATIONALIZABLE COARSE CORRELATED EQUILIBRIA (CCE) In this section we introduce our algorithm for efficiently learning rationalizable CCEs. The high-level idea is to run a no-regret Hedge-style algorithm for every player, while constraining the strategies inside the rationalizable region. Our algorithm is motivated by the fact that the probability of playing a dominated action decays exponentially over time in the Hedge algorithm for adversarial bandits under full-information feedback (Cohen et al., 2017). The full algorithm description is provided in Algorithm 2, and here we explain several key components of our algorithm design.
Algorithm 2 Hedge for Rationalizable ϵ-CCE
1: (a_1*, ..., a_N*) ← Algorithm 1
2: For all i ∈ [N], initialize θ_i^(1)(·) ← 1[· = a_i*]
3: for t = 1, ..., T do
4:   for i = 1, ..., N do
5:     For all a ∈ A_i, play (a, θ_{-i}^(t)) M_t times, compute player i's average payoff u_i^(t)(a)
6:     Set θ_i^(t+1)(·) ∝ exp(η_t ∑_{τ=1}^t u_i^(τ)(·))
7: For all t ∈ [T] and i ∈ [N], eliminate all actions in θ_i^(t) with probability smaller than p, then renormalize the vector to the simplex as θ̄_i^(t)
8: output: (∑_{t=1}^T ⊗_{i=1}^N θ̄_i^(t)) / T
Correlated Exploration Scheme. In the bandit feedback setting, standard exponential-weights algorithms such as EXP3.IX require importance sampling and biased estimators to derive a high-probability regret bound (Neu, 2015). However, such bias could cause a dominating strategy to lose its advantage. In our algorithm we adopt a correlated exploration scheme, which essentially simulates full-information feedback with bandit feedback using NA samples. Specifically, at every time step t, the players take turns enumerating their action sets, while the other players fix their strategies according to Hedge. For i ∈ [N] and t ≥ 2, we denote by θ_i^(t) the strategy computed using Hedge for player i in round t. The joint strategy (a, θ_{-i}^(t)) is played to estimate player i's payoff u_i^(t)(a).
It is important to note that such a correlated scheme does not require any communication between the players—the players can schedule the whole process before the game starts. Rationalizable Initialization and Variance Reduction. We use Algorithm 1, which learns a rationalizable action profile, to give the strategy for the first round. By carefully preserving the disadvantage of every iteratively dominated action, we keep the iterates inside the rationalizable region throughout the whole learning process. To ensure this for every iterate with high probability, a minibatch is used to reduce the variance of the estimator. Clipping. In the final step, we clip all actions with small probabilities, so that iteratively dominated actions do not appear in the output. The threshold is small enough not to affect the ϵ-CCE guarantee. 4.1 THEORETICAL GUARANTEE In Algorithm 2, we choose the parameters in the following manner: η_t = max{√(ln A / t), 4 ln(1/p)/(∆t)}, M_t = ⌈64 ln(ANT/δ)/(∆²t)⌉, and p = min{ϵ, ∆}/(8AN). (1) Note that our learning rate can be bigger than the standard learning rate in FTRL algorithms when t is small. The purpose is to guarantee the rationalizability of the iterates from the beginning of the learning process. As will be shown in the proof, this larger learning rate does not hurt the final rate. We now state the theoretical guarantee for Algorithm 2. Theorem 6. With parameters chosen as in Eq. (1), after T = Õ(1/ϵ² + 1/(ϵ∆)) rounds, with probability 1 − 3δ, the output strategy of Algorithm 2 is a ∆-rationalizable ϵ-CCE. The total sample complexity is Õ(LNA/∆² + NA/ϵ²). Remark 7. Due to our lower bound (Theorem 4), an Õ(NA/∆²) term is unavoidable, since learning a rationalizable action profile is an easier task than learning a rationalizable CCE. Based on our Conjecture 5, the additional L dependency is also likely to be inevitable. On the other hand, learning an ϵ-CCE alone only requires Õ(A/ϵ²) samples, whereas our bound has a larger Õ(NA/ϵ²) term. The extra N factor is a consequence of our correlated exploration scheme in which only one player explores at a time. Removing this N factor might require more sophisticated exploration methods and utility estimators, which we leave as future work. Remark 8. Invoking Algorithm 1 requires knowledge of L, which may not be available in practice. In that case, an estimate L′ may be used in its stead. If L′ ≥ L (for instance when L′ = NA), we recover the current rationalizability guarantee, albeit with a larger sample complexity scaling with L′. If L′ < L, we can still guarantee that the output policy avoids actions in E_{L′}, which are, informally speaking, actions that would be eliminated with L′ levels of reasoning. 4.1.1 OVERVIEW OF THE ANALYSIS We give an overview of our analysis of Algorithm 2 below. The full proof is deferred to Appendix C. Step 1: Ensure rationalizability. We first show that rationalizability is preserved at each iterate, i.e., actions in E_L are played with low probability across all iterates. Formally, Lemma 9. With probability at least 1 − 2δ, for all t ∈ [T] and all i ∈ [N], a_i ∈ A_i ∩ E_L, we have θ_i^(t)(a_i) ≤ p. Here p is defined in (1). Lemma 9 guarantees that, after the clipping in Line 7 of Algorithm 2, the output correlated strategy is ∆-rationalizable.
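For concreteness, the Hedge update of Line 6 with the learning rate of Eq. (1), together with the clipping of Line 7, can be sketched as follows; the function names are ours and the full exploration loop is omitted.

```python
import numpy as np

def hedge_iterate(cum_payoff, t, Delta, p):
    """Compute theta^(t+1) from the cumulative payoff vector
    sum_{tau <= t} u^(tau)(.), using eta_t = max(sqrt(ln A / t),
    4 ln(1/p) / (Delta * t)) as in Eq. (1)."""
    A = len(cum_payoff)
    eta = max(np.sqrt(np.log(A) / t), 4.0 * np.log(1.0 / p) / (Delta * t))
    logits = eta * (cum_payoff - cum_payoff.max())   # shift for stability
    theta = np.exp(logits)
    return theta / theta.sum()

def clip_strategy(theta, p):
    """Line 7: drop actions with probability below p and renormalize."""
    clipped = np.where(theta >= p, theta, 0.0)
    return clipped / clipped.sum()
```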
We proceed to explain the main idea for proving Lemma 9. A key observation is that the set of rationalizable actions, ∪_{i=1}^N A_i \ E_L, is closed under best response—for the i-th player, as long as the other players continue to play actions in ∪_{j≠i} A_j \ E_L, actions in A_i ∩ E_L will suffer excess losses each round in an exponential-weights-style algorithm. Concretely, for any a_{-i} ∈ (∏_{j≠i} A_j) \ E_L and any iteratively dominated action a_i ∈ A_i ∩ E_L, there always exists x_i ∈ ∆(A_i) such that u_i(x_i, a_{-i}) ≥ u_i(a_i, a_{-i}) + ∆. With our choice of p in Eq. (1), if the other players choose their actions from ∪_{j≠i} A_j \ E_L with probability 1 − pAN, we can still guarantee an excess loss of Ω(∆). It follows that ∑_{τ=1}^t u_i^(τ)(x_i) − ∑_{τ=1}^t u_i^(τ)(a_i) ≥ Ω(t∆) − (sampling noise). However, this excess loss can be obscured by the noise from bandit feedback when t is small. Note that it is crucial that the statement of Lemma 9 hold for all t, due to the inductive nature of the proof. As a solution, we use a minibatch of size M_t = Õ(⌈1/(∆²t)⌉) in the t-th round to reduce the variance of the payoff estimator u_i^(t). The noise term can now be upper bounded with Azuma-Hoeffding by (sampling noise) ≤ Õ(√(∑_{τ=1}^t 1/M_τ)) ≤ O(t∆). Combining this with our choice of the learning rate η_t gives η_t (∑_{τ=1}^t u_i^(τ)(x_i) − ∑_{τ=1}^t u_i^(τ)(a_i)) ≫ 1. (2) By the update rule of the Hedge algorithm, this implies that θ_i^(t+1)(a_i) ≤ p, which enables us to complete the proof of Lemma 9 via induction on t. Step 2: Combine with no-regret guarantees. Next, we prove that the output strategy is an ϵ-CCE. For a player i ∈ [N], the regret is defined as Regret_T^i = max_{θ∈∆(A_i)} ∑_{t=1}^T ⟨u_i^(t), θ − θ_i^(t)⟩. We can obtain the following regret bound by standard analysis of FTRL with changing learning rates. Lemma 10. For all i ∈ [N], Regret_T^i ≤ Õ(√T + 1/∆). Here the additive 1/∆ term is the result of our larger Õ(∆^{−1}t^{−1}) learning rate for small t. It follows from Lemma 10 that T = Õ(1/ϵ² + 1/(∆ϵ)) suffices to guarantee that the correlated strategy (1/T)(∑_{t=1}^T ⊗_{i=1}^N θ_i^(t)) is an (ϵ/2)-CCE. Since pNA = O(ϵ), the clipping step only mildly affects the CCE guarantee, and the clipped strategy (1/T)(∑_{t=1}^T ⊗_{i=1}^N θ̄_i^(t)) is an ϵ-CCE. 4.2 APPLICATION TO LEARNING RATIONALIZABLE NASH EQUILIBRIUM Algorithm 2 can also be applied to two-player zero-sum games to learn a rationalizable ϵ-NE efficiently. Note that in two-player zero-sum games, the marginal distributions of an ϵ-CCE are guaranteed to form a 2ϵ-Nash equilibrium (see, e.g., Proposition 9 in Bai et al. (2020)). Hence direct application of Algorithm 2 to a zero-sum game gives the following sample complexity bound. Corollary 11. In a two-player zero-sum game, the sample complexity for finding a ∆-rationalizable ϵ-Nash equilibrium with Algorithm 2 is Õ(LA/∆² + A/ϵ²). This result improves over a direct application of Proposition 1, which gives Õ(A³/∆⁴ + A/ϵ²) sample complexity and produces an ϵ-Nash equilibrium that could still take unrationalizable actions with positive probability.
5 LEARNING RATIONALIZABLE CORRELATED EQUILIBRIUM In order to extend our results on ϵ-CCE to ϵ-CE, a natural approach would be augmenting Algorithm 2 with the celebrated Blum-Mansour reduction (Blum & Mansour, 2007) from swap regret to external regret. In this reduction, one maintains A instances of a no-regret algorithm {Alg_1, ..., Alg_A}. In iteration t, the player stacks the recommendations of the A algorithms as a matrix, denoted by θ̂^(t) ∈ R^{A×A}, and computes its eigenvector θ^(t) as the randomized strategy in round t. After observing the actual payoff vector u^(t), it passes the weighted payoff vector θ^(t)(a) u^(t) to algorithm Alg_a for each a. In this section, we focus on a fixed player i, and omit the subscript i when it is clear from the context. Applying this reduction to Algorithm 2 directly, however, would fail to preserve rationalizability, since the weighted payoff vector θ^(t)(a) u^(t) admits a smaller utility gap θ^(t)(a)∆. Specifically, consider an action b dominated by a mixed strategy x. In the payoff estimate of instance a, ∑_{τ=1}^t θ^(τ)(a) (u^(τ)(x) − u^(τ)(b)) ≳ ∆ ∑_{τ=1}^t θ^(τ)(a) − √(∑_{τ=1}^t 1/M^(τ)) ≱ 0, (3) which means that we cannot guarantee the elimination of IDAs every round as in Eq. (2). In Algorithm 3, we address this by making ∑_{τ=1}^t θ^(τ)(a) play the role of t, tracking the progress of each no-regret instance separately. In time step t, we compute the average payoff vector u^(t) based on M^(t) samples; then, as in the Blum-Mansour reduction, we update the A instances of Hedge with the weighted payoffs θ^(t)(a) u^(t) and use the eigenvector of θ̂ as the strategy for the next round. The key detail here is our choice of parameters, which adapts to the past strategies {θ^(τ)}_{τ=1}^t: M_i^(t) := ⌈max_a 64 θ_i^(t)(a) / (∆² ∑_{τ=1}^t θ_i^(τ)(a))⌉, η_{t,i}^a := max{2 ln(1/p) / (∆ ∑_{τ=1}^t θ_i^(τ)(a)), √(A ln A / t)}, p = min{ϵ, ∆}/(8AN). (4) Compared to Eq. (1), we are essentially replacing t with the adaptive quantity ∑_{τ=1}^t θ^(τ)(a). We can now improve (3) to ∑_{τ=1}^t θ^(τ)(a) (u^(τ)(x) − u^(τ)(b)) ≳ ∆ ∑_{τ=1}^t θ^(τ)(a) − √(∑_{τ=1}^t (θ^(τ)(a))²/M^(τ)) ≳ ∆ ∑_{τ=1}^t θ^(τ)(a). (5) This, together with our choice of η_t^a, allows us to ensure the rationalizability of every iterate. The full algorithm is presented in Algorithm 3.
Algorithm 3 Adaptive Hedge for Rationalizable ϵ-CE
1: (a_1*, ..., a_N*) ← Algorithm 1
2: For all i ∈ [N], initialize θ_i^(1) ← (1 − |A_i| p) 1[· = a_i*] + p·1
3: for t = 1, 2, ..., T do
4:   for i = 1, 2, ..., N do
5:     For all a ∈ A_i, play (a, θ_{-i}^(t)) M_i^(t) times, compute player i's average payoff u_i^(t)(a)
6:     For all b ∈ A_i, set θ̂_i^(t+1)(·|b) ∝ exp(η_{t,i}^b ∑_{τ=1}^t u_i^(τ)(·) θ_i^(τ)(b))
7:     Find θ_i^(t+1) ∈ ∆(A_i) such that θ_i^(t+1)(a) = ∑_{b∈A_i} θ̂_i^(t+1)(a|b) θ_i^(t+1)(b)
8: For all t ∈ [T] and i ∈ [N], eliminate all actions in θ_i^(t) with probability smaller than p, then renormalize the vector to the simplex as θ̄_i^(t)
9: output: (∑_{t=1}^T ⊗_{i=1}^N θ̄_i^(t)) / T
We proceed to our theoretical guarantee for Algorithm 3. The analysis framework is largely similar to that of Algorithm 2. Our choice of M_i^(t) is sufficient to ensure ∆-rationalizability via the Azuma-Hoeffding inequality, while a swap-regret analysis of the algorithm proves that the average (clipped) strategy is indeed an ϵ-CE. The full proof is deferred to Appendix D. Theorem 12. With the parameters in Eq. (4), after T = Õ(A/ϵ² + A/∆²) rounds, with probability 1 − 3δ, the output strategy of Algorithm 3 is a ∆-rationalizable ϵ-CE. The total sample complexity is Õ(LNA/∆² + NA²/min{∆², ϵ²}). Compared to Theorem 6, our second term has an additional A factor, which is quite reasonable considering that algorithms for learning ϵ-CE take Õ(A²ϵ^{−2}) samples, also A times larger than the ϵ-CCE rate.
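Line 7 of Algorithm 3 requires a fixed point of the stacked recommendation matrix, exactly as in the Blum-Mansour reduction; a minimal sketch (the function name is ours) computes it as the eigenvector of the column-stochastic matrix for eigenvalue 1, though any stationary-distribution solver would do.

```python
import numpy as np

def fixed_point_strategy(theta_hat):
    """Given theta_hat[a, b] = hat{theta}^(t+1)(a | b), with each column a
    distribution over a, return theta satisfying
    theta(a) = sum_b theta_hat(a|b) theta(b): the stationary distribution
    of the induced Markov chain, i.e. the eigenvector for eigenvalue 1."""
    w, v = np.linalg.eig(theta_hat)
    k = int(np.argmin(np.abs(w - 1.0)))   # eigenvalue closest to 1
    theta = np.abs(np.real(v[:, k]))      # fix sign/phase of the eigenvector
    return theta / theta.sum()
```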
6 REDUCTION-BASED ALGORITHMS While Algorithms 2 and 3 make use of one specific no-regret algorithm, namely Hedge (Exponential Weights), in this section we show that arbitrary algorithms for finding CCE/CE can be augmented to find rationalizable CCE/CE. The sample complexity obtained via this reduction is comparable with those of Algorithms 2 and 3 when L = Θ(NA), but slightly worse when L ≪ NA. Moreover, this black-box approach enables us to derive algorithms for rationalizable equilibria with more desirable qualities, such as last-iterate convergence, when using equilibria-finding algorithms with these properties. Suppose that we are given a black-box algorithm O that finds an ϵ-CCE in arbitrary games. We can then use this algorithm in the following "support expansion" manner. We start with a subgame of only rationalizable actions, which can be identified efficiently with Algorithm 1, and call O to find an ϵ-CCE Π for the subgame. Next, we check for each i ∈ [N] whether the best response to Π_{-i} is contained in A_i^(t). If not, this means that the subgame's ϵ-CCE may not be an ϵ-CCE for the full game; in this case, the best response to Π_{-i} is a rationalizable action that we can safely include in the action set. On the other hand, if the best response falls in A_i^(t) for all i, we can conclude that Π is also an ϵ-CCE for the original game. The details are given by Algorithm 4, and our main theoretical guarantee is the following.
Algorithm 4 Rationalizable ϵ-CCE via Black-box Reduction
1: (a_1*, ..., a_N*) ← Algorithm 1
2: For all i ∈ [N], initialize A_i^(1) ← {a_i*}
3: for t = 1, 2, ... do
4:   Find an ϵ′-CCE Π with black-box algorithm O in the sub-game ∏_{i∈[N]} A_i^(t)
5:   ∀i ∈ [N], a_i′ ∈ A_i, evaluate u_i(a_i′, Π_{-i}) M times and compute the average û_i(a_i′, Π_{-i})
6:   for i ∈ [N] do
7:     Let a_i′ ← argmax_{a∈A_i} û_i(a, Π_{-i})  // Computing the empirical best response
8:     A_i^(t+1) ← A_i^(t) ∪ {a_i′}
9:   if A_i^(t+1) = A_i^(t) for all i ∈ [N] then
10:    return Π
Theorem 13. Algorithm 4 outputs a ∆-rationalizable ϵ-CCE with high probability, using at most NA calls to the black-box CCE algorithm and Õ(N²A²/min{ϵ², ∆²}) additional samples. Using similar algorithmic techniques, we can develop a reduction scheme for rationalizable ϵ-CE. The detailed description of this algorithm is deferred to Appendix E. Here we only state its main theoretical guarantee. Theorem 14. There exists an algorithm that outputs a ∆-rationalizable ϵ-CE with high probability, using at most NA calls to a black-box CE algorithm and Õ(N²A³/min{ϵ², ∆²}) additional samples. 7 CONCLUSION In this paper, we consider two tasks: (1) learning rationalizable action profiles, and (2) learning rationalizable equilibria. For task 1, we propose a conceptually simple algorithm whose sample complexity is significantly better than prior work (Wu et al., 2021). For task 2, we develop the first provably efficient algorithms for learning ϵ-CE and ϵ-CCE that are also rationalizable. Our algorithms are computationally efficient, enjoy sample complexity that scales polynomially with the number of players, and are able to avoid iteratively dominated actions completely. Our results rely on several new techniques which might be of independent interest to the community. There remains a gap between our sample complexity upper bounds and the available lower bounds for both tasks; closing it is an important future research problem. ACKNOWLEDGEMENTS This work is supported by Office of Naval Research N00014-22-1-2253.
Dingwen Kong is partially supported by the elite undergraduate training program of the School of Mathematical Sciences at Peking University. A FURTHER DETAILS ON RATIONALIZABILITY A.1 EQUIVALENCE OF NEVER-BEST-RESPONSE AND STRICT DOMINANCE It is known that for finite normal-form games, the rationalizable actions are given by iterated elimination of never-best-response actions, which is in fact equivalent to the iterative elimination of strictly dominated actions (Osborne & Rubinstein, 1994, Lemma 60.1). Here, for completeness, we include a proof that the iterative elimination of actions that are never a ∆-best response gives the same definition as Definition 1. Notice that it suffices to show that for every subgame, the set of never-∆-best-response actions and the set of ∆-dominated actions are the same. Proposition A.1. Suppose that an action a ∈ A_i is never a ∆-best response, i.e., ∀Π_{-i} ∈ ∆(∏_{j≠i} A_j), ∃u ∈ ∆(A_i) such that u_i(a, Π_{-i}) ≤ u_i(u, Π_{-i}) − ∆. Then a is also ∆-dominated, i.e., ∃u ∈ ∆(A_i) such that ∀Π_{-i} ∈ ∆(∏_{j≠i} A_j), u_i(a, Π_{-i}) ≤ u_i(u, Π_{-i}) − ∆. Proof. That a is never a ∆-best response is equivalent to max_{Π_{-i}} min_u {u_i(a, Π_{-i}) − u_i(u, Π_{-i})} ≤ −∆. That a is ∆-dominated is equivalent to min_u max_{Π_{-i}} {u_i(a, Π_{-i}) − u_i(u, Π_{-i})} ≤ −∆. The equivalence immediately follows from von Neumann's minimax theorem. A.2 PROOF OF PROPOSITION 1 Proof. We prove this inductively with the following hypothesis: ∀l ≥ 1, ∀i ∈ [N], ∑_{a∈A_i} x_i*(a)·1[a ∈ E_l] ≤ 2lϵ/∆. Base case: By the definition of ϵ-NE, ∀i ∈ [N], ∀x′ ∈ ∆(A_i), u_i(x_i*, x_{-i}*) ≥ u_i(x′, x_{-i}*) − ϵ. Note that if ã ∈ E_1 ∩ A_i, then there exists a strategy x^(ã) ∈ ∆(A_i) such that ∀a_{-i}, u_i(ã, a_{-i}) ≤ u_i(x^(ã), a_{-i}) − ∆. Therefore, if we choose x′ := x_i* − ∑_{a∈A_i} 1[a ∈ E_1] x_i*(a) e_a + ∑_{a∈A_i} 1[a ∈ E_1] x_i*(a) x^(a), that is, if we play the dominating strategy instead of the dominated action in x_i*, then u_i(x′, x_{-i}*) ≥ u_i(x_i*, x_{-i}*) + ∑_{a∈A_i} x_i*(a)·1[a ∈ E_1]·∆. It follows that ∑_{a∈A_i} x_i*(a)·1[a ∈ E_1] ≤ ϵ/∆. Induction step: By the induction hypothesis, ∀i ∈ [N], ∑_{a∈A_i} x_i*(a)·1[a ∈ E_l] ≤ 2lϵ/∆. Now consider x̃_i := (x_i* − ∑_{a∈A_i} 1[a ∈ E_l] x_i*(a) e_a) / (1 − ∑_{a∈A_i} 1[a ∈ E_l] x_i*(a)) for all i ∈ [N], which is supported on actions not in E_l. The induction hypothesis implies ‖x̃_i − x_i*‖_1 ≤ 6lϵ/∆. Therefore, ∀i ∈ [N], ∀a ∈ A_i, |u_i(a, x̃_{-i}) − u_i(a, x_{-i}*)| ≤ 6Nlϵ/∆. Now if ã ∈ (E_{l+1} \ E_l) ∩ A_i, since x̃_{-i} is not supported on E_l, ∃x ∈ ∆(A_i) such that u_i(ã, x̃_{-i}) ≤ u_i(x, x̃_{-i}) − ∆. It follows that u_i(ã, x_{-i}*) ≤ u_i(x, x_{-i}*) − ∆ + 12Nlϵ/∆ ≤ u_i(x, x_{-i}*) − ∆/2. Using the same arguments as in the base case, ∑_{a∈A_i} x_i*(a)·1[a ∈ E_{l+1} \ E_l] ≤ ϵ/(∆ − 12Nlϵ/∆) ≤ 2ϵ/∆. It follows that ∀i ∈ [N], ∑_{a∈A_i} x_i*(a)·1[a ∈ E_{l+1}] ≤ 2(l+1)ϵ/∆. The statement is thus proved via induction on l. B FINDING ONE RATIONALIZABLE ACTION PROFILE B.1 PROOF OF PROPOSITION 2 Proof. Consider the following N-player game, denoted G_0, with action sets [A]: u_i(·) = 0 for 1 ≤ i ≤ N − 1, and u_N(a_N) = ∆·1[a_N > 1]. Specifically, a payoff with mean u is realized by a skewed Rademacher random variable with probability (1+u)/2 on +1 and (1−u)/2 on −1. In game G_0, clearly for player N the action 1 is ∆-dominated. However, consider the following game, denoted G_{a*} (where a* ∈ [A]^{N−1}): u_i(·) = 0 for 1 ≤ i ≤ N − 1; u_N(a_N) = ∆ for a_N > 1; and u_N(1, a_{-N}) = 2∆·1[a_{-N} = a*]. It can be seen that in game G_{a*}, for player N, the action 1 is not dominated or iteratively strictly dominated. Therefore, suppose that an algorithm O is able to determine whether an action is rationalizable (i.e.,
not iteratively strictly dominated) with 0.9 accuracy; then its output needs to be False with at least 0.9 probability in game G_0, but True with at least 0.9 probability in game G_{a*}. By Pinsker's inequality, KL(O(G_0) ‖ O(G_{a*})) ≥ 2 · 0.8² > 1, where we use O(G) to denote the trajectory generated by running algorithm O on game G. Meanwhile, notice that G_0 and G_{a*} differ only when the first N − 1 players play a*. Denote the number of times the first N − 1 players play a* by n(a*). Using the chain rule of KL divergence, KL(O(G_0) ‖ O(G_{a*})) ≤ E_{G_0}[n(a*)] · KL(Ber(1/2) ‖ Ber((1 + 2∆)/2)) ≤(a) E_{G_0}[n(a*)] · (2∆)²/(1 − 2∆) ≤(b) 10∆² E_{G_0}[n(a*)]. Here (a) follows from a reverse Pinsker inequality (see e.g. Binette (2019)), while (b) uses the fact that ∆ < 0.1. This means that for any a* ∈ [A]^{N−1}, E_{G_0}[n(a*)] ≥ 1/(10∆²). It follows that the expected number of samples when running O on G_0 is at least E_{G_0} ∑_{a*∈[A]^{N−1}} n(a*) ≥ A^{N−1}/(10∆²). B.2 PROOF OF THEOREM 3 Proof. We first present the concentration bound. For l ∈ [L], i ∈ [N], and a ∈ A_i, by Hoeffding's inequality we have that with probability at least 1 − δ/(LNA), |u_i(a, a_{-i}^(l−1)) − û_i(a, a_{-i}^(l−1))| ≤ √(4 ln(ANL/δ)/M) ≤ ∆/4. Therefore, by a union bound, we have that with probability at least 1 − δ, for all l ∈ [L], i ∈ [N], and a ∈ A_i, |u_i(a, a_{-i}^(l−1)) − û_i(a, a_{-i}^(l−1))| ≤ ∆/4. We condition on this event for the rest of the proof. We use induction on l to prove that for all l ∈ [L] ∪ {0}, (a_1^(l), ..., a_N^(l)) can survive at least l rounds of IDE. The base case l = 0 holds directly. Now we assume that the cases 1, 2, ..., l − 1 hold and consider the case of l. For any i ∈ [N], we show that a_i^(l) can survive at least l rounds of IDE. Recall that a_i^(l) is the empirical best response, i.e., a_i^(l) = argmax_{a∈A_i} û_i(a, a_{-i}^(l−1)). For any mixed strategy x_i ∈ ∆(A_i), we have that u_i(a_i^(l), a_{-i}^(l−1)) − u_i(x_i, a_{-i}^(l−1)) ≥ û_i(a_i^(l), a_{-i}^(l−1)) − û_i(x_i, a_{-i}^(l−1)) − |u_i(a_i^(l), a_{-i}^(l−1)) − û_i(a_i^(l), a_{-i}^(l−1))| − |u_i(x_i, a_{-i}^(l−1)) − û_i(x_i, a_{-i}^(l−1))| ≥ 0 − ∆/4 − ∆/4 = −∆/2. Since the actions in a_{-i}^(l−1) can survive at least l − 1 rounds of ∆-IDE, a_i^(l) cannot be ∆-dominated by x_i in rounds 1, ..., l. Since x_i can be arbitrarily chosen, a_i^(l) can survive at least l rounds of ∆-IDE. We can now ensure that the output (a_1^(L), ..., a_N^(L)) survives L rounds of ∆-IDE, which is equivalent to ∆-rationalizability (see Definition 1). The total number of samples used is LNA · M = Õ(LNA/∆²). B.3 PROOF OF THEOREM 4 Proof. Without loss of generality, assume that ∆ < 0.1. Consider the following instance where A_1 = ··· = A_N = [A]: u_i(a_i) = ∆·1[a_i = 1] for i ≠ j, and u_j(a_j, a_{-j}) = ∆·1[a_j = 1] if a_{-j} ≠ {1}^{N−1}, and u_j(a_j, a_{-j}) = ∆·1[a_j = 1] + 2∆·1[a_j = a] if a_{-j} = {1}^{N−1}. Denote this instance by G_{j,a}. Additionally, define the following instance G_0: u_i(a_i) = ∆·1[a_i = 1] for all i ∈ [N]. As before, a payoff with expectation u is realized as a random variable with distribution 2 Ber((1+u)/2) − 1. It can be seen that the only difference between G_0 and G_{j,a} lies in u_j(a, {1}^{N−1}). By the KL-divergence chain rule, for any algorithm O, KL(O(G_0) ‖ O(G_{j,a})) ≤ 10∆² · E_{G_0}[n(a_j = a, a_{-j} = {1}^{N−1})], where n(a_j = a, a_{-j} = {1}^{N−1}) denotes the number of times the action profile (a, 1^{N−1}) is played. Note that in G_0, the only action profile surviving two rounds of ∆-IDE is (1, ..., 1), while in G_{j,a}, the only rationalizable action profile is (1, ..., 1, a, 1, ..., 1), with a in the j-th position.
B.3 PROOF OF THEOREM 4

Proof. Without loss of generality, assume that $\Delta < 0.1$. Consider the following instance, where $A_1 = \cdots = A_N = [A]$:
$$u_i(a_i) = \Delta \cdot \mathbb{1}[a_i = 1] \quad (i \ne j), \qquad u_j(a_j, a_{-j}) = \begin{cases} \Delta \cdot \mathbb{1}[a_j = 1] & (a_{-j} \ne \{1\}^{N-1}) \\ \Delta \cdot \mathbb{1}[a_j = 1] + 2\Delta \cdot \mathbb{1}[a_j = a] & (a_{-j} = \{1\}^{N-1}). \end{cases}$$
Denote this instance by $G_{j,a}$. Additionally, define the following instance $G_0$: $u_i(a_i) = \Delta \cdot \mathbb{1}[a_i = 1]$ for all $i \in [N]$. As before, a payoff with expectation $u$ is realized as a random variable with distribution $2\,\mathrm{Ber}(\frac{1+u}{2}) - 1$. It can be seen that the only difference between $G_0$ and $G_{j,a}$ lies in $u_j(a, \{1\}^{N-1})$. By the KL-divergence chain rule, for any algorithm $\mathcal{O}$,
$$\mathrm{KL}(\mathcal{O}(G_0) \,\|\, \mathcal{O}(G_{j,a})) \le 10\Delta^2 \cdot \mathbb{E}_{G_0}\big[n(a_j = a, a_{-j} = \{1\}^{N-1})\big],$$
where $n(a_j = a, a_{-j} = \{1\}^{N-1})$ denotes the number of times the action profile $(a, 1^{N-1})$ is played. Note that in $G_0$, the only action profile surviving two rounds of $\Delta$-IDE is $(1, \cdots, 1)$, while in $G_{j,a}$, the only rationalizable action profile is $(1, \cdots, 1, a, 1, \cdots, 1)$ with $a$ in the $j$-th position. To guarantee 0.9 accuracy, by Pinsker's inequality, $\mathrm{KL}(\mathcal{O}(G_0) \,\|\, \mathcal{O}(G_{j,a})) \ge 2 \cdot 0.8^2 > 1$. It follows that for all $j \in [N]$ and $a > 1$,
$$\mathbb{E}_{G_0}\big[n(a_j = a, a_{-j} = \{1\}^{N-1})\big] \ge \frac{1}{10\Delta^2}.$$
Thus the total expected sample complexity is at least
$$\sum_{a > 1,\, j \in [N]} \mathbb{E}_{G_0}\big[n(a_j = a, a_{-j} = \{1\}^{N-1})\big] \ge \frac{N(A-1)}{10\Delta^2}.$$

C OMITTED PROOFS IN SECTION 4

We start our analysis by bounding the sampling noise. For player $i \in [N]$, action $a_i \in A_i$, and $\tau \in [T]$, we denote the sampling noise as $\xi_i^{(\tau)}(a_i) := u_i^{(\tau)}(a_i) - u_i(a_i, \theta_{-i}^{(\tau)})$. We have the following lemma.

Lemma C.1. Let $\Omega_1$ denote the event that for all $t \in [T]$, $i \in [N]$, and $a_i \in A_i$,
$$\Big|\sum_{\tau=1}^t \xi_i^{(\tau)}(a_i)\Big| \le 2\sqrt{\ln(ANT/\delta)\sum_{\tau=1}^t \frac{1}{M_\tau}}.$$
Then $\Pr[\Omega_1] \ge 1 - \delta$.

Proof. Note that $\sum_{\tau=1}^t \xi_i^{(\tau)}(a_i)$ can be written as the sum of $\sum_{\tau=1}^t M_\tau$ mean-zero bounded terms. By the Azuma-Hoeffding inequality, with probability at least $1 - \frac{\delta}{ANT}$, for a fixed $i \in [N]$, $t \in [T]$, $a_i \in A_i$,
$$\Big|\sum_{\tau=1}^t \xi_i^{(\tau)}(a_i)\Big| \le 2\sqrt{\ln(ANT/\delta)\sum_{\tau=1}^t M_\tau \cdot \Big(\frac{1}{M_\tau}\Big)^2}. \tag{6}$$
A union bound over $i \in [N]$, $t \in [T]$, $a_i \in A_i$ proves the statement.

Lemma C.2. With probability at least $1 - 2\delta$, for all $t \in [T]$ and all $i \in [N]$, $a_i \in A_i \cap E_L$, $\theta_i^{(t)}(a_i) \le p$.

Proof. We condition on the event $\Omega_1$ defined in Lemma C.1 and the success of Algorithm 1. We prove the claim by induction on $t$. The base case $t = 1$ holds directly by initialization. Now we assume the case for $1, 2, \ldots, t$ holds and consider the case of $t+1$. Consider a fixed player $i \in [N]$ and an iteratively dominated action $a_i \in A_i \cap E_L$. By definition there exists a mixed strategy $x_i$ such that for all $a_{-i}$ with $a_{-i} \cap E_L = \emptyset$, $u_i(x_i, a_{-i}) \ge u_i(a_i, a_{-i}) + \Delta$. Therefore for $\tau \in [t]$, by the induction hypothesis for $\tau$,
$$u_i(x_i, \theta_{-i}^{(\tau)}) \ge u_i(a_i, \theta_{-i}^{(\tau)}) + (1 - ANp)\cdot\Delta - ANp \ge u_i(a_i, \theta_{-i}^{(\tau)}) + \Delta/2. \tag{7}$$
Consequently,
$$\sum_{\tau=1}^t \big(u_i^{(\tau)}(x_i) - u_i^{(\tau)}(a_i)\big) \ge \sum_{\tau=1}^t \big(u_i(x_i, \theta_{-i}^{(\tau)}) - u_i(a_i, \theta_{-i}^{(\tau)})\big) - 4\sqrt{\ln(ANT/\delta)\sum_{\tau=1}^t \frac{1}{M_\tau}} \quad \text{(by (6))}$$
$$\ge \frac{t\Delta}{2} - 4\sqrt{\ln(ANT/\delta)\sum_{\tau=1}^t \frac{1}{M_\tau}} \quad \text{(by (7))} \quad \ge \frac{t\Delta}{4}.$$
Therefore by our choice of learning rate,
$$\theta_i^{(t+1)}(a_i) \le \exp\Big(-\eta_t \sum_{\tau=1}^t \big(u_i^{(\tau)}(x_i) - u_i^{(\tau)}(a_i)\big)\Big) \le \exp\Big(-\frac{4\ln(1/p)}{\Delta t} \cdot \frac{\Delta t}{4}\Big) = p.$$
Therefore $\theta_i^{(t+1)}(a_i) \le p$ as desired.
Now we turn to the $\epsilon$-CCE guarantee. For a player $i \in [N]$, recall that the regret is defined as
$$\mathrm{Regret}_T^i = \max_{\theta \in \Delta(A_i)} \sum_{t=1}^T \big\langle u_i^{(t)}, \theta - \theta_i^{(t)} \big\rangle.$$

Lemma C.3. The regret can be bounded as
$$\mathrm{Regret}_T^i \le O\Big(\sqrt{\ln A \cdot T} + \frac{\ln(1/p)\ln T}{\Delta}\Big).$$

Proof. Note that apart from the choice of $\theta^{(1)}$, we are exactly running FTRL with learning rates $\eta_t = \max\big\{\sqrt{\ln A/t},\, \frac{4\ln(1/p)}{\Delta t}\big\}$, which are monotonically decreasing. Therefore, following the standard analysis of FTRL (see, e.g., Orabona (2019, Corollary 7.9)), we have
$$\max_{\theta \in \Delta(A_i)} \sum_{t=1}^T \big\langle u_i^{(t)}, \theta - \theta_i^{(t)}\big\rangle \le 2 + \frac{\ln A}{\eta_T} + \frac{1}{2}\sum_{t=1}^T \eta_t \le 2 + \sqrt{\ln A \cdot T} + \frac{1}{2}\sum_{t=1}^T \Big(\sqrt{\frac{\ln A}{t}} + \frac{4\ln(1/p)}{\Delta t}\Big) = O\Big(\sqrt{\ln A \cdot T} + \frac{\ln(1/p)\ln T}{\Delta}\Big).$$

However, this form of regret cannot directly imply an approximate CCE. We define the following expected version of the regret:
$$\mathrm{Regret}_T^{i,\star} = \max_{\theta \in \Delta(A_i)} \sum_{t=1}^T \big\langle u_i(\cdot, \theta_{-i}^{(t)}), \theta - \theta_i^{(t)} \big\rangle.$$
The next lemma bounds the difference between these two types of regret.

Lemma C.4. The following event $\Omega_2$ holds with probability at least $1 - \delta$: for all $i \in [N]$,
$$\big|\mathrm{Regret}_T^{i,\star} - \mathrm{Regret}_T^i\big| \le O\big(\sqrt{T \cdot \ln(NA/\delta)}\big).$$

Proof. We denote $\Theta_i := \{e_1, e_2, \ldots, e_{|A_i|}\}$, the set of vertices of the simplex; since a linear function over $\Delta(A_i)$ is maximized at a vertex, we have
$$\big|\mathrm{Regret}_T^{i,\star} - \mathrm{Regret}_T^i\big| = \Big|\max_{\theta \in \Theta_i}\sum_{t=1}^T \big\langle u_i(\cdot, \theta_{-i}^{(t)}), \theta - \theta_i^{(t)}\big\rangle - \max_{\theta \in \Theta_i}\sum_{t=1}^T \big\langle u_i^{(t)}, \theta - \theta_i^{(t)}\big\rangle\Big| \le \max_{\theta \in \Theta_i}\Big|\sum_{t=1}^T \big\langle u_i(\cdot, \theta_{-i}^{(t)}) - u_i^{(t)}, \theta - \theta_i^{(t)}\big\rangle\Big|.$$
Note that $\langle u_i(\cdot, \theta_{-i}^{(t)}) - u_i^{(t)}, \theta - \theta_i^{(t)}\rangle$ is a bounded martingale difference sequence. By the Azuma-Hoeffding inequality, for a fixed $\theta \in \Theta_i$, with probability at least $1 - \frac{\delta}{AN}$,
$$\Big|\sum_{t=1}^T \big\langle u_i(\cdot, \theta_{-i}^{(t)}) - u_i^{(t)}, \theta - \theta_i^{(t)}\big\rangle\Big| \le O\big(\sqrt{T \cdot \ln(NA/\delta)}\big).$$
Thus we complete the proof by a union bound.

Proof of Theorem 6. We condition on event $\Omega_1$ defined in Lemma C.1, event $\Omega_2$ defined in Lemma C.4, and the success of Algorithm 1.

Coarse correlated equilibrium. By Lemma C.3 and Lemma C.4 we know that for all $i \in [N]$,
$$\mathrm{Regret}_T^{i,\star} \le O\Big(\sqrt{\ln A \cdot T} + \frac{\ln(1/p)\ln T}{\Delta} + \sqrt{T \cdot \ln(NA/\delta)}\Big).$$
Therefore choosing $T = \Theta\big(\frac{\ln(NA/\delta)}{\epsilon^2} + \frac{\ln^2(NA/(\Delta\epsilon\delta))}{\Delta\epsilon}\big)$ guarantees that $\mathrm{Regret}_T^{i,\star}$ is at most $\epsilon T/2$ for all $i \in [N]$. In this case the average strategy $(\sum_{t=1}^T \otimes_{i=1}^N \theta_i^{(t)})/T$ is an $(\epsilon/2)$-CCE. Finally, in the clipping step, $\|\bar\theta_i^{(t)} - \theta_i^{(t)}\|_1 \le 2pA \le \frac{\epsilon}{4N}$ for all $i \in [N]$, $t \in [T]$. Thus for all $t \in [T]$ we have $\|\otimes_{i=1}^N \bar\theta_i^{(t)} - \otimes_{i=1}^N \theta_i^{(t)}\|_1 \le \frac{\epsilon}{4}$, which further implies
$$\Big\|\Big(\sum_{t=1}^T \otimes_{i=1}^N \bar\theta_i^{(t)}\Big)/T - \Big(\sum_{t=1}^T \otimes_{i=1}^N \theta_i^{(t)}\Big)/T\Big\|_1 \le \frac{\epsilon}{4}.$$
Therefore the output strategy $\Pi = (\sum_{t=1}^T \otimes_{i=1}^N \bar\theta_i^{(t)})/T$ is an $\epsilon$-CCE.

Rationalizability. By Lemma C.2, if $a \in E_L \cap A_i$, then $\theta_i^{(t)}(a) \le p$ for all $t \in [T]$. It follows that $\bar\theta_i^{(t)}(a) = 0$, i.e., the action is not in the support of the output strategy $\Pi = (\sum_{t=1}^T \otimes_{i=1}^N \bar\theta_i^{(t)})/T$.

Sample complexity. The total number of full-information queries is
$$\sum_{t=1}^T M_t \le T + \sum_{t=1}^T \frac{64\ln(ANT/\delta)}{\Delta^2 t} \le T + \widetilde O\Big(\frac{1}{\Delta^2}\Big) = \widetilde O\Big(\frac{1}{\Delta^2} + \frac{1}{\epsilon^2}\Big).$$
The total sample complexity for CCE learning is then $NA \cdot \sum_{t=1}^T M_t = \widetilde O\big(\frac{NA}{\epsilon^2} + \frac{NA}{\Delta^2}\big)$. Finally, adding the cost of finding one IDE-surviving action profile ($\widetilde O(\frac{LNA}{\Delta^2})$) gives the claimed rate.
D OMITTED PROOFS IN SECTION 5

Similar to the CCE case, we first bound the sampling noise. For action $a_i \in A_i$ and $\tau \in [T]$, we denote the sampling noise as $\xi_i^{(\tau)}(a_i) := u_i^{(\tau)}(a_i) - u_i(a_i, \theta_{-i}^{(\tau)})$. In the CE case, we are interested in the weighted sum of noise $\sum_{\tau=1}^t \xi_i^{(\tau)}(a_i)\,\theta_i^{(\tau)}(b_i)$, which is bounded in the following lemma.

Lemma D.1. The following event $\Omega_3$ holds with probability at least $1 - \delta$: for all $t \in [T]$, $i \in [N]$, and $a_i, b_i \in A_i$,
$$\Big|\sum_{\tau=1}^t \xi_i^{(\tau)}(a_i)\,\theta_i^{(\tau)}(b_i)\Big| \le \frac{\Delta}{4}\sum_{\tau=1}^t \theta_i^{(\tau)}(b_i).$$

Proof. Note that $\sum_{\tau=1}^t \xi_i^{(\tau)}(a_i)\,\theta_i^{(\tau)}(b_i)$ can be written as the sum of $\sum_{\tau=1}^t M_i^{\tau}$ mean-zero bounded terms; precisely, there are $M_i^{\tau}$ terms bounded by $\frac{\theta_i^{(\tau)}(b_i)}{M_i^{\tau}}$. By the Azuma-Hoeffding inequality, we have that with probability at least $1 - \frac{\delta}{A^2 NT}$,
$$\Big|\sum_{\tau=1}^t \xi_i^{(\tau)}(a_i)\,\theta_i^{(\tau)}(b_i)\Big| \le 2\sqrt{\ln(ANT/\delta)\sum_{\tau=1}^t M_i^{\tau}\Big(\frac{\theta_i^{(\tau)}(b_i)}{M_i^{\tau}}\Big)^2} = 2\sqrt{\ln(ANT/\delta)\sum_{\tau=1}^t \frac{\big(\theta_i^{(\tau)}(b_i)\big)^2}{M_i^{\tau}}} \le \frac{\Delta}{4}\sqrt{\sum_{\tau=1}^t \theta_i^{(\tau)}(b_i)\sum_{j=1}^{\tau}\theta_i^{(j)}(b_i)} \le \frac{\Delta}{4}\sum_{\tau=1}^t \theta_i^{(\tau)}(b_i).$$
Therefore by a union bound we complete the proof.

Lemma D.2. With probability at least $1 - 2\delta$, for all $t \in [T]$, all $i \in [N]$, and all $a_i \in A_i \cap E_L$, $\theta_i^{(t)}(a_i) \le p$.

Proof. We condition on the event $\Omega_3$ defined in Lemma D.1 and the success of Algorithm 1. We prove the claim by induction on $t$. The base case $t = 1$ holds directly by initialization. Now we assume the case for $1, 2, \ldots, t$ holds and consider the case of $t+1$. Consider a fixed player $i \in [N]$, an iteratively dominated action $a_i \in A_i \cap E_L$, and an expert $b_i$. By definition there exists a mixed strategy $x_i$ such that for all $a_{-i}$ with $a_{-i} \cap E_L = \emptyset$, $u_i(x_i, a_{-i}) \ge u_i(a_i, a_{-i}) + \Delta$. Therefore for $\tau \in [t]$, by the induction hypothesis,
$$u_i(x_i, \theta_{-i}^{(\tau)}) \ge u_i(a_i, \theta_{-i}^{(\tau)}) + (1 - ANp)\cdot\Delta - ANp \ge u_i(a_i, \theta_{-i}^{(\tau)}) + \Delta/2.$$
Thus we have
$$\sum_{\tau=1}^t \big(u_i^{(\tau)}(x_i) - u_i^{(\tau)}(a_i)\big)\cdot\theta_i^{(\tau)}(b_i) \ge \sum_{\tau=1}^t \big(u_i(x_i, \theta_{-i}^{(\tau)}) - u_i(a_i, \theta_{-i}^{(\tau)})\big)\cdot\theta_i^{(\tau)}(b_i) - \frac{\Delta}{4}\sum_{\tau=1}^t \theta_i^{(\tau)}(b_i) \ge \frac{\Delta}{2}\sum_{\tau=1}^t \theta_i^{(\tau)}(b_i) - \frac{\Delta}{4}\sum_{\tau=1}^t \theta_i^{(\tau)}(b_i) = \frac{\Delta}{4}\sum_{\tau=1}^t \theta_i^{(\tau)}(b_i).$$
By our choice of learning rate,
$$\hat\theta_i^{(t+1)}(a_i \mid b_i) \le \exp\Big(-\eta_{t,i}^{b_i}\sum_{\tau=1}^t \theta_i^{(\tau)}(b_i)\big(u_i^{(\tau)}(x_i) - u_i^{(\tau)}(a_i)\big)\Big) \le \exp\Big(-\frac{4\ln(1/p)}{\Delta\sum_{\tau=1}^t \theta_i^{(\tau)}(b_i)}\cdot\frac{\Delta}{4}\sum_{\tau=1}^t \theta_i^{(\tau)}(b_i)\Big) = p.$$
Therefore we conclude $\theta_i^{(t+1)}(a_i) = \sum_{b_i \in A_i}\hat\theta_i^{(t+1)}(a_i \mid b_i)\,\theta_i^{(t+1)}(b_i) \le p$.

Now we turn to the $\epsilon$-CE guarantee. For a player $i \in [N]$, recall that the swap regret is defined as
$$\mathrm{SwapRegret}_T^i := \sup_{\phi: A_i \to A_i}\sum_{t=1}^T\sum_{b \in A_i}\theta_i^{(t)}(b)\,u_i^{(t)}(\phi(b)) - \sum_{t=1}^T \big\langle\theta_i^{(t)}, u_i^{(t)}\big\rangle.$$

Lemma D.3. For all $i \in [N]$, the swap regret can be bounded as
$$\mathrm{SwapRegret}_T^i \le O\Big(\sqrt{A\ln(A)\,T} + \frac{A\ln(NAT/(\Delta\epsilon))^2}{\Delta}\Big).$$

Proof. For $i \in [N]$, recall that the regret for an expert $b \in A_i$ is defined as
$$\mathrm{Regret}_T^{i,b} := \max_{a \in A_i}\sum_{t=1}^T \theta_i^{(t)}(b)\,u_i^{(t)}(a) - \sum_{t=1}^T \big\langle\hat\theta_i^{(t)}(\cdot \mid b),\,\theta_i^{(t)}(b)\,u_i^{(t)}\big\rangle.$$
Since $\theta_i^{(t)}(a) = \sum_{b \in A_i}\hat\theta_i^{(t)}(a \mid b)\,\theta_i^{(t)}(b)$ for all $a$ and all $t > 1$,
$$\sum_{b \in A_i}\mathrm{Regret}_T^{i,b} = \max_{\phi: A_i \to A_i}\sum_{b \in A_i}\sum_{t=1}^T \theta_i^{(t)}(b)\,u_i^{(t)}(\phi(b)) - \sum_{t=1}^T\Big\langle\sum_{b \in A_i}\hat\theta_i^{(t)}(\cdot \mid b)\,\theta_i^{(t)}(b),\,u_i^{(t)}\Big\rangle \ge \max_{\phi: A_i \to A_i}\sum_{t=1}^T\sum_{b \in A_i}\theta_i^{(t)}(b)\,u_i^{(t)}(\phi(b)) - \sum_{t=2}^T\big\langle\theta_i^{(t)}, u_i^{(t)}\big\rangle - 1 \ge \mathrm{SwapRegret}_T^i - 1.$$
It now suffices to control the regret of each individual expert. For expert $b$, we are essentially running FTRL with learning rates $\eta_{t,i}^b := \max\big\{\frac{4\ln(1/p)}{\Delta\sum_{\tau=1}^t \theta_i^{(\tau)}(b)},\, \frac{\sqrt{A\ln A}}{\sqrt t}\big\}$, which are clearly monotonically decreasing. Therefore, using the standard analysis of FTRL (see, e.g., Orabona (2019, Corollary 7.9)),
$$\mathrm{Regret}_T^{i,b} \le \frac{\ln A}{\eta_{T,i}^b} + \sum_{t=1}^T \eta_{t,i}^b\,\theta_i^{(t)}(b)^2 \le \sqrt{\frac{T\ln A}{A}} + \sum_{t=1}^T \theta_i^{(t)}(b)\sqrt{\frac{A\ln A}{t}} + \frac{4\ln(1/p)}{\Delta}\sum_{t=1}^T \frac{\theta_i^{(t)}(b)}{\sum_{\tau=1}^t \theta_i^{(\tau)}(b)} \le \sqrt{\frac{T\ln A}{A}} + \sum_{t=1}^T \theta_i^{(t)}(b)\sqrt{\frac{A\ln A}{t}} + \frac{4\ln(1/p)}{\Delta}\Big(1 + \ln\frac{T}{p}\Big).$$
Here we used the fact that $\theta_i^{(1)}(b) \ge p$ for all $b \in A_i$, and
$$\sum_{t=1}^T \frac{\theta_i^{(t)}(b)}{\sum_{\tau=1}^t \theta_i^{(\tau)}(b)} \le 1 + \int_{\theta_i^{(1)}(b)}^{\sum_{t=1}^T \theta_i^{(t)}(b)} \frac{ds}{s} = 1 + \ln\Big(\frac{\sum_{t=1}^T \theta_i^{(t)}(b)}{\theta_i^{(1)}(b)}\Big) \le 1 + \ln\Big(\frac{T}{p}\Big).$$
Notice that $\sum_{b \in A_i}\sum_{t=1}^T \theta_i^{(t)}(b)\sqrt{A\ln A/t} \le O(\sqrt{A\ln(A)\,T})$. Therefore
$$\mathrm{SwapRegret}_T^i \le O(1) + \sum_{b \in A_i}\mathrm{Regret}_T^{i,b} \le O\Big(\sqrt{A\ln(A)\,T} + \frac{A\ln(NAT/(\Delta\epsilon))^2}{\Delta}\Big). \tag{8}$$
Similar to the CCE case, this form of regret cannot directly imply an approximate CE. We define the following expected version of the swap regret:
$$\mathrm{SwapRegret}_T^{i,\star} := \sup_{\phi: A_i \to A_i}\sum_{t=1}^T \big\langle\phi \circ \theta_i^{(t)},\, u_i(\cdot, \theta_{-i}^{(t)})\big\rangle - \sum_{t=1}^T \big\langle\theta_i^{(t)},\, u_i(\cdot, \theta_{-i}^{(t)})\big\rangle.$$
The next lemma bounds the difference between these two types of regret.

Lemma D.4. The following event $\Omega_4$ holds with probability at least $1 - \delta$: for all $i \in [N]$,
$$\big|\mathrm{SwapRegret}_T^{i,\star} - \mathrm{SwapRegret}_T^i\big| \le O\Big(\sqrt{AT\ln\big(\tfrac{AN}{\delta}\big)}\Big).$$

Proof. Note that
$$\big|\mathrm{SwapRegret}_T^{i,\star} - \mathrm{SwapRegret}_T^i\big| = \Big|\sup_{\phi: A_i \to A_i}\sum_{t=1}^T \big\langle\phi\circ\theta_i^{(t)} - \theta_i^{(t)},\, u_i(\cdot, \theta_{-i}^{(t)})\big\rangle - \sup_{\phi: A_i \to A_i}\sum_{t=1}^T \big\langle\phi\circ\theta_i^{(t)} - \theta_i^{(t)},\, u_i^{(t)}\big\rangle\Big| \le \sup_{\phi: A_i \to A_i}\Big|\sum_{t=1}^T \big\langle\phi\circ\theta_i^{(t)} - \theta_i^{(t)},\, u_i(\cdot, \theta_{-i}^{(t)}) - u_i^{(t)}\big\rangle\Big|.$$
Notice that $\mathbb{E}[u_i^{(t)}] = u_i(\cdot, \theta_{-i}^{(t)})$ and that $u_i^{(t)} \in [-1,1]^A$. Therefore, for every $\phi: A_i \to A_i$, $\xi_t^\phi := \langle\phi\circ\theta_i^{(t)} - \theta_i^{(t)},\, u_i(\cdot, \theta_{-i}^{(t)}) - u_i^{(t)}\rangle$ is a bounded martingale difference sequence. By the Azuma-Hoeffding inequality, for a fixed $\phi: A_i \to A_i$, with probability $1 - \delta'$, $|\sum_{t=1}^T \xi_t^\phi| \le 2\sqrt{2T\ln(2/\delta')}$. By setting $\delta' = \delta/(N A^A)$, we get that with probability $1 - \delta/N$, for all $\phi: A_i \to A_i$, $|\sum_{t=1}^T \xi_t^\phi| \le 2\sqrt{2AT\ln(2AN/\delta)}$. Therefore we complete the proof by a union bound over $i \in [N]$.

Proof of Theorem 12. We condition on event $\Omega_3$ defined in Lemma D.1, event $\Omega_4$ defined in Lemma D.4, and the success of Algorithm 1.

Correlated equilibrium. By Lemma D.3 and Lemma D.4 we know that for all $i \in [N]$,
$$\mathrm{SwapRegret}_T^{i,\star} \le O\Big(\sqrt{A\ln(A)\,T} + \frac{A\ln(NAT/(\Delta\epsilon))^2}{\Delta} + \sqrt{AT\ln\big(\tfrac{AN}{\delta}\big)}\Big).$$
Therefore choosing $T = \Theta\big(\frac{A\ln(AN/\delta)}{\epsilon^2} + \frac{A\ln^3(NA/(\Delta\epsilon\delta))}{\Delta\epsilon}\big)$ guarantees that $\mathrm{SwapRegret}_T^{i,\star}$ is at most $\epsilon T/2$ for all $i \in [N]$. In this case the average strategy $(\sum_{t=1}^T \otimes_{i=1}^N \theta_i^{(t)})/T$ is an $(\epsilon/2)$-CE. Finally, in the clipping step, $\|\bar\theta_i^{(t)} - \theta_i^{(t)}\|_1 \le 2pA \le \frac{\epsilon}{4N}$ for all $i \in [N]$, $t \in [T]$. Thus for all $t \in [T]$ we have $\|\otimes_{i=1}^N \bar\theta_i^{(t)} - \otimes_{i=1}^N \theta_i^{(t)}\|_1 \le \frac{\epsilon}{4}$, which further implies
$$\Big\|\Big(\sum_{t=1}^T \otimes_{i=1}^N \bar\theta_i^{(t)}\Big)/T - \Big(\sum_{t=1}^T \otimes_{i=1}^N \theta_i^{(t)}\Big)/T\Big\|_1 \le \frac{\epsilon}{4}.$$
Therefore the output strategy $\Pi = (\sum_{t=1}^T \otimes_{i=1}^N \bar\theta_i^{(t)})/T$ is an $\epsilon$-CE.

Rationalizability. By Lemma D.2, if $a \in E_L \cap A_i$, then $\theta_i^{(t)}(a) \le p$ for all $t \in [T]$. It follows that $\bar\theta_i^{(t)}(a) = 0$, i.e., the action is not in the support of the output strategy $\Pi = (\sum_t \otimes_i \bar\theta_i^{(t)})/T$.

Sample complexity. The total number of queries is
$$\sum_{i \in [N]}\sum_{t=1}^T A\,M_i^{(t)} \le NAT + \sum_{i \in [N]}\sum_{b \in A_i}\sum_{t=1}^T \frac{16\,\theta_i^{(t)}(b)}{\Delta^2 \cdot \sum_{\tau=1}^t \theta_i^{(\tau)}(b)} \le NAT + \frac{16NA^2}{\Delta^2}\cdot\ln\Big(\frac{T}{p}\Big) \le \widetilde O\Big(\frac{NA^2}{\epsilon^2} + \frac{NA^2}{\Delta^2}\Big),$$
where we used the fact that $\sum_{t=1}^T \frac{\theta_i^{(t)}(a)}{\sum_{\tau=1}^t \theta_i^{(\tau)}(a)} \le 1 + \ln\big(\frac{T}{p}\big)$. Finally, adding the cost of finding one IDE-surviving action profile ($\widetilde O(\frac{LNA}{\Delta^2})$) gives the claimed rate.

E DETAILS FOR REDUCTION ALGORITHMS

In this section, we present the details of the reduction-based algorithm for finding rationalizable CE (Algorithm 5) and the analysis of both Algorithms 4 and 5.

E.1 RATIONALIZABLE CCE VIA REDUCTION

We will choose $\epsilon' = \frac{\min\{\epsilon, \Delta\}}{3}$ and $M = \lceil\frac{4\ln(2NA/\delta)}{\epsilon'^2}\rceil$.

Lemma E.1. With probability $1 - \delta$, throughout the execution of Algorithm 4, for every $t$ and $i \in [N]$, $a_i' \in A_i$, $|\hat u_i(a_i', \Pi_{-i})$
1. What is the focus of the paper regarding computational methods for correlated equilibria in general games?
2. What are the strengths of the proposed algorithms, particularly in improving over previous works?
3. What are the weaknesses of the paper regarding its motivation, algorithmic nature, and practical applicability?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper

The paper presents algorithms for computing approximately rationalizable correlated equilibria (CE) and coarse correlated equilibria (CCE) in general games. Although these bounds are not shown to be optimal (except in the case where strategies are rationalizable after a constant number of iterated elimination rounds), they significantly improve over the recent work of Wu et al. Finally, the paper provides reduction schemes that find approximately rationalizable epsilon-CCE/CE using black-box algorithms for epsilon-CE/CCE.

Strengths And Weaknesses

Strengths
- The paper makes progress on a recent problem in learning in games.
- The writing is relatively clear.

Weaknesses
- The motivation behind finding rationalizable CCE does not seem to be particularly strong. Why is it so important for the players to converge to a rationalizable CE/CCE?
- The proposed algorithms are rather unnatural and, although they can be used to compute CCE, do not provide no-regret guarantees themselves. Hence, one would need a very strong justification of why this specific class of equilibria is particularly desirable. Also, if "natural" learning algorithms like Hedge and variants do not reproduce iteratively rationalizable CCE fast, isn't this an indication that maybe iterative rationalizability is not a "natural" property?
- The algorithms are rather impractical to use and result in significant blow-ups; e.g., the sampling complexity grows linearly in the number of agents, whereas without the rationalizability requirement there is no dependence on the number of agents.
- There are no proposed interesting practical applications of this idea (and no experimental section).

Clarity, Quality, Novelty And Reproducibility

I believe the paper is relatively well written. The question examined here was introduced recently in the work of Wu et al.; this work is effectively a direct follow-up.
ICLR
Title Learning Rationalizable Equilibria in Multiplayer Games

Abstract A natural goal in multi-agent learning is to learn rationalizable behavior, where players learn to avoid any Iteratively Dominated Action (IDA). However, standard no-regret based equilibria-finding algorithms could take exponential samples to find such rationalizable strategies. In this paper, we first propose a simple yet sample-efficient algorithm for finding a rationalizable action profile in multi-player general-sum games under bandit feedback, which substantially improves over the results of Wu et al. (2021). We further develop algorithms with the first efficient guarantees for learning rationalizable Coarse Correlated Equilibria (CCE) and Correlated Equilibria (CE). Our algorithms incorporate several novel techniques to guarantee the elimination of IDA and no (swap-)regret simultaneously, including a correlated exploration scheme and adaptive learning rates, which may be of independent interest. We complement our results with a sample complexity lower bound showing the sharpness of our guarantees.

1 INTRODUCTION

A common objective in multi-agent learning is to find various equilibria, such as Nash equilibria (NE), correlated equilibria (CE) and coarse correlated equilibria (CCE). Generally speaking, a player in equilibrium lacks incentive to deviate assuming conformity of other players to the same equilibrium. Equilibrium learning has been extensively studied in the literature of game theory and online learning, and no-regret based learners can provably learn approximate CE and CCE with both computational and statistical efficiency (Stoltz, 2005; Cesa-Bianchi & Lugosi, 2006).

However, not all equilibria are created equal. As shown by Viossat & Zapechelnyuk (2013), a CCE can be entirely supported on dominated actions—actions that are worse off than some other strategy in all circumstances—which rational agents should apparently never play. Approximate CE also suffers from a similar problem. As shown by Wu et al. (2021, Theorem 1), there are examples where an $\epsilon$-CE always plays iteratively dominated actions—actions that would be eliminated when iteratively deleting strictly dominated actions—unless $\epsilon$ is exponentially small. It is also shown that standard no-regret algorithms are indeed prone to finding such seemingly undesirable solutions (Wu et al., 2021). The intrinsic reason behind this is that CCE and approximate CE may not be rationalizable, and existing algorithms can indeed fail to find rationalizable solutions.

Different from equilibrium notions, rationalizability (Bernheim, 1984; Pearce, 1984) looks at the game from the perspective of a single player without knowledge of the actual strategies of other players, and only assumes common knowledge of their rationality. A rationalizable strategy will avoid strictly dominated actions, and, assuming other players have also eliminated their dominated actions, iteratively avoid strictly dominated actions in the subgame. Rationalizability is a central solution concept in game theory (Osborne & Rubinstein, 1994) and has found applications in auctions (Battigalli & Siniscalchi, 2003) and mechanism design (Bergemann et al., 2011). If an (approximate) equilibrium only employs rationalizable actions, it would prevent irrational behavior such as playing dominated actions. Such equilibria are arguably more reasonable than unrationalizable ones, and constitute a stronger solution concept.

*Equal contribution.
This motivates us to consider the following open question: Can we efficiently learn equilibria that are also rationalizable?

Despite its fundamental role in multi-agent reasoning, rationalizability was rarely studied from a learning perspective until recently, with Wu et al. (2021) giving the first algorithm for learning rationalizable strategies from bandit feedback. However, the problem of learning rationalizable CE and CCE remains a challenging open problem. Due to the existence of unrationalizable equilibria, running standard CE or CCE learners will not guarantee rationalizable solutions. On the other hand, one cannot hope to first identify all rationalizable actions and then find an equilibrium on the subgame, since even determining whether an action is rationalizable requires exponentially many samples (see Proposition 2). Therefore, achieving rationalizability and approximate equilibria simultaneously is nontrivial and presents new algorithmic challenges.

In this work, we address the challenges above and give a positive answer to our main question. Our contributions can be summarized as follows:

• As a first step, we provide a simple yet sample-efficient algorithm for identifying a $\Delta$-rationalizable^1 action profile under bandit feedback, using only $\widetilde O\big(\frac{LNA}{\Delta^2}\big)$^2 samples in normal-form games with $N$ players, $A$ actions per player and a minimum elimination length of $L$. This greatly improves the result of Wu et al. (2021) and is tight up to logarithmic factors when $L = O(1)$.
• Using the above algorithm as a subroutine, we develop exponential weights based algorithms that can provably find $\Delta$-rationalizable $\epsilon$-CCE using $\widetilde O\big(\frac{LNA}{\Delta^2} + \frac{NA}{\epsilon^2}\big)$ samples, and $\Delta$-rationalizable $\epsilon$-CE using $\widetilde O\big(\frac{LNA}{\Delta^2} + \frac{NA^2}{\min\{\epsilon^2, \Delta^2\}}\big)$ samples. To the best of our knowledge, these are the first guarantees for learning rationalizable approximate CCE and CE.
• We also provide reduction schemes that find $\Delta$-rationalizable $\epsilon$-CCE/CE using black-box algorithms for $\epsilon$-CCE/CE. Despite having slightly worse rates, these algorithms can directly leverage the progress in equilibria finding, which may be of independent interest.

1.1 RELATED WORK

Rationalizability and iterative dominance elimination. Rationalizability (Bernheim, 1984; Pearce, 1984) is a notion that captures rational reasoning in games and relaxes Nash equilibrium. Rationalizability is closely related to the iterative elimination of dominated actions, which has been a focus of game theory research since the 1950s (Luce & Raiffa, 1957). It can be shown that an action is rationalizable if and only if it survives iterative elimination of strictly dominated actions^3 (Pearce, 1984). There is also experimental evidence supporting iterative elimination of dominated strategies as a model of human reasoning (Camerer, 2011).

Equilibria learning in games. There is a rich literature on applying online learning algorithms to learning equilibria in games. It is well known that if all agents have no regret, the resulting empirical average would be an $\epsilon$-CCE (Young, 2004), while if all agents have no swap regret, the resulting empirical average would be an $\epsilon$-CE (Hart & Mas-Colell, 2000; Cesa-Bianchi & Lugosi, 2006).
Later work continuing this line of research includes work with faster convergence rates (Syrgkanis et al., 2015; Chen & Peng, 2020; Daskalakis et al., 2021), last-iterate convergence guarantees (Daskalakis & Panageas, 2018; Wei et al., 2020), and extensions to extensive-form games (Celli et al., 2020; Bai et al., 2022b;a; Song et al., 2022) and Markov games (Song et al., 2021; Jin et al., 2021).

Computational and learning aspects of rationalizability. Despite its conceptual importance, rationalizability and iterative dominance elimination are not well studied from a computational or learning perspective. For iterative strict dominance elimination in two-player games, Knuth et al. (1988) provided a cubic-time algorithm and proved that the problem is P-complete. The weak dominance version of the problem is proven to be NP-complete by Conitzer & Sandholm (2005). Hofbauer & Weibull (1996) showed that in a class of learning dynamics which includes replicator dynamics—the continuous-time variant of Follow-The-Regularized-Leader (FTRL)—all iteratively strictly dominated actions vanish over time, while Mertikopoulos & Moustakas (2010) proved similar results for stochastic replicator dynamics; however, neither work provides finite-time guarantees. Cohen et al. (2017) proved that Hedge eliminates dominated actions in finite time, but did not extend their results to the more challenging case of iteratively dominated actions. The most closely related work in the literature is the work on learning rationalizable actions by Wu et al. (2021), who proposed the Exp3-DH algorithm to find a strategy mostly supported on rationalizable actions with a polynomial rate. Our Algorithm 1 accomplishes the same task with a faster rate, while our Algorithms 2 & 3 deal with the more challenging problems of finding $\epsilon$-CE/CCE that are also rationalizable. Although Exp3-DH is based on a no-regret algorithm, it does not enjoy regret or weighted-regret guarantees and thus does not provably find rationalizable equilibria.

^1 An action is $\Delta$-rationalizable if it survives iterative elimination of $\Delta$-dominated actions; c.f. Definition 1.
^2 Throughout this paper, we use $\widetilde O$ to suppress logarithmic factors in $N$, $A$, $L$, $\frac{1}{\Delta}$, $\frac{1}{\delta}$, and $\frac{1}{\epsilon}$.
^3 For this equivalence to hold, we need to allow dominance by mixed strategies, and correlated beliefs when there are more than two players. These conditions are met in the setting of this work.

2 PRELIMINARY

An $N$-player normal-form game involves $N$ players whose action space is denoted by $\mathcal{A} = A_1 \times \cdots \times A_N$, and is defined by utility functions $u_1, \cdots, u_N: \mathcal{A} \to [0,1]$. Let $A = \max_{i \in [N]} |A_i|$ denote the maximum number of actions per player, $x_i$ denote a mixed strategy of the $i$-th player (i.e., a distribution over $A_i$) and $x_{-i}$ denote a (correlated) mixed strategy of the other players (i.e., a distribution over $\prod_{j \ne i} A_j$). We further denote $u_i(x_i, x_{-i}) := \mathbb{E}_{a_i \sim x_i, a_{-i} \sim x_{-i}} u_i(a_i, a_{-i})$. We use $\Delta(S)$ to denote a distribution over the set $S$.

Learning from bandit feedback. We consider the bandit feedback setting where in each round, each player $i \in [N]$ chooses an action $a_i \in A_i$, and then observes a random feedback $U_i \in [0,1]$ such that $\mathbb{E}[U_i \mid a_1, a_2, \cdots, a_N] = u_i(a_1, a_2, \cdots, a_N)$.
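To fix ideas, the bandit-feedback protocol above can be simulated as follows. This is a minimal sketch: the class name, the table-based payoff representation, and the Bernoulli noise model are illustrative assumptions (the paper only assumes bounded-mean feedback in $[0,1]$):

```python
import numpy as np

class BanditNormalFormGame:
    """N-player normal-form game with bandit feedback: payoff_tables[i] maps
    a joint action (a_1, ..., a_N) to player i's mean payoff u_i in [0, 1]."""
    def __init__(self, payoff_tables, rng=None):
        self.payoff_tables = payoff_tables
        self.rng = rng or np.random.default_rng()

    def play(self, joint_action):
        """One round: every player i observes U_i in {0, 1} with
        E[U_i | a] = u_i(a) (a Bernoulli realization, chosen for illustration)."""
        means = [table[joint_action] for table in self.payoff_tables]
        return [float(self.rng.random() < m) for m in means]
```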
2.1 RATIONALIZABILITY

An action $a \in A_i$ is said to be rationalizable if it could be the best response to some (possibly correlated) belief of other players' strategies, assuming that they are also rational. In other words, the set of rationalizable actions is obtained by iteratively removing actions that could never be a best response. For finite normal-form games, this is in fact equivalent to the iterative elimination of strictly dominated actions^4 (Osborne & Rubinstein, 1994, Lemma 60.1).

Definition 1 ($\Delta$-Rationalizability).^5 Define
$$E_1 := \bigcup_{i=1}^N \big\{a \in A_i : \exists x \in \Delta(A_i),\ \forall a_{-i},\ u_i(a, a_{-i}) \le u_i(x, a_{-i}) - \Delta\big\},$$
which is the set of $\Delta$-dominated actions for all players. Further define
$$E_l := \bigcup_{i=1}^N \big\{a \in A_i : \exists x \in \Delta(A_i),\ \forall a_{-i} \text{ s.t. } a_{-i} \cap E_{l-1} = \emptyset,\ u_i(a, a_{-i}) \le u_i(x, a_{-i}) - \Delta\big\},$$
which is the set of actions that would be eliminated by the $l$-th round. Define $L = \inf\{l : E_{l+1} = E_l\}$ as the minimum elimination length, and $E_L$ as the set of $\Delta$-iteratively dominated actions ($\Delta$-IDAs). Actions in $\cup_{i=1}^N A_i \setminus E_L$ are said to be $\Delta$-rationalizable.

Notice that $E_1 \subseteq \cdots \subseteq E_L = E_{L+1}$. Here $\Delta$ plays a similar role as the reward gap for best arm identification in stochastic multi-armed bandits. We will henceforth use $\Delta$-rationalizability and survival of $L$ rounds of iterative dominance elimination (IDE) interchangeably.^6 Since one cannot eliminate all the actions of a player, $|E_L^c| \ge N$, which further implies $L \le N(A-1) < NA$.

^4 See, e.g., the Diamond-In-the-Rough (DIR) games in Wu et al. (2021, Definition 2) for a concrete example of iterative dominance elimination.
^5 Here we slightly abuse the notation and use $\Delta$ to refer to both the gap and the probability simplex.
^6 Alternatively one can also define $\Delta$-rationalizability by the iterative elimination of actions that are never a $\Delta$-best response, which is mathematically equivalent to Definition 1 (see Appendix A.1).
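The elimination process in Definition 1 can be carried out exactly when the utilities are known, since checking whether an action is $\Delta$-dominated by some mixed strategy is a linear-program feasibility problem. The following is a minimal sketch under that assumption (function names and the oracle interface are hypothetical; note this requires exact utilities, which the bandit-feedback setting does not provide):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def delta_dominated(u_i, i, a, action_sets, survivors, gap):
    """LP feasibility test: does some mixed strategy x over A_i satisfy
    sum_b x(b) u_i(b, a_-i) >= u_i(a, a_-i) + gap for every opponent profile
    a_-i consisting only of surviving actions? u_i(b, opp_tuple) is assumed
    to return player i's exact utility."""
    Ai = action_sets[i]
    opps = [survivors[j] for j in range(len(action_sets)) if j != i]
    profiles = list(itertools.product(*opps))
    A_ub = np.array([[-u_i(b, p) for b in Ai] for p in profiles])
    b_ub = np.array([-(u_i(a, p) + gap) for p in profiles])
    res = linprog(c=np.zeros(len(Ai)), A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, len(Ai))), b_eq=[1.0],
                  bounds=[(0, 1)] * len(Ai))
    return res.status == 0  # feasible => a is gap-dominated

def iterated_elimination(utils, action_sets, gap):
    """Iterated elimination of gap-dominated actions; returns, per player,
    the gap-rationalizable actions. A sweep-based sketch (for strict
    dominance the elimination order does not affect the outcome)."""
    survivors = [list(s) for s in action_sets]
    changed = True
    while changed:
        changed = False
        for i in range(len(action_sets)):
            keep = [a for a in survivors[i]
                    if not delta_dominated(utils[i], i, a, action_sets, survivors, gap)]
            if len(keep) < len(survivors[i]):
                survivors[i], changed = keep, True
    return survivors
```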
2.2 EQUILIBRIA IN GAMES

We consider three common learning objectives, namely Nash Equilibrium (NE), Correlated Equilibrium (CE) and Coarse Correlated Equilibrium (CCE).

Definition 2 (Nash Equilibrium). A strategy profile $(x_1, \cdots, x_N)$ is an $\epsilon$-Nash equilibrium if $u_i(x_i, x_{-i}) \ge u_i(a, x_{-i}) - \epsilon$, $\forall a \in A_i$, $\forall i \in [N]$.

Definition 3 (Correlated Equilibrium). A correlated strategy $\Pi \in \Delta(\mathcal{A})$ is an $\epsilon$-correlated equilibrium if $\forall i \in [N]$, $\forall \phi: A_i \to A_i$,
$$\sum_{a_i \in A_i, a_{-i} \in \mathcal{A}_{-i}} \Pi(a_i, a_{-i})\,u_i(a_i, a_{-i}) \ge \sum_{a_i \in A_i, a_{-i} \in \mathcal{A}_{-i}} \Pi(a_i, a_{-i})\,u_i(\phi(a_i), a_{-i}) - \epsilon.$$

Definition 4 (Coarse Correlated Equilibrium). A correlated strategy $\Pi \in \Delta(\mathcal{A})$ is an $\epsilon$-CCE if $\forall i \in [N]$, $\forall a' \in A_i$,
$$\sum_{a_i \in A_i, a_{-i} \in \mathcal{A}_{-i}} \Pi(a_i, a_{-i})\,u_i(a_i, a_{-i}) \ge \sum_{a_i \in A_i, a_{-i} \in \mathcal{A}_{-i}} \Pi(a_i, a_{-i})\,u_i(a', a_{-i}) - \epsilon.$$

When $\epsilon = 0$, the above definitions give exact Nash equilibrium, correlated equilibrium, and coarse correlated equilibrium, respectively. It is well known that $\epsilon$-NE are $\epsilon$-CE, and $\epsilon$-CE are $\epsilon$-CCE. Furthermore, we call an $\epsilon$-CCE/CE that only plays $\Delta$-rationalizable actions a.s. a $\Delta$-rationalizable $\epsilon$-CCE/CE.

2.3 CONNECTION BETWEEN EQUILIBRIA AND RATIONALIZABILITY

It is known that all actions in the support of an exact CE are rationalizable (Osborne & Rubinstein, 1994, Lemma 56.2). However, one can easily construct an exact CCE that is supported on dominated (hence, unrationalizable) actions (see e.g. Viossat & Zapechelnyuk (2013, Fig. 3)). One might be tempted to suggest that running a CE solver immediately finds a CE (and hence CCE) that is also rationalizable. However, the connection between CE and rationalizability becomes quite different when it comes to approximate equilibria, which are inevitable in the presence of noise. As shown by Wu et al. (2021, Theorem 1), an $\epsilon$-CE can be entirely supported on iteratively dominated actions unless $\epsilon = O(2^{-A})$. In other words, rationalizability is not guaranteed by running an approximate CE solver except at extremely high accuracy. Therefore, finding $\epsilon$-CE and CCE that are simultaneously rationalizable remains a challenging open problem.

Since NE is a subset of CE, all actions in the support of an (exact) NE would also be rationalizable. Unlike approximate CE, for $\epsilon < \mathrm{poly}(\Delta, 1/N, 1/A)$, one can show that any $\epsilon$-Nash equilibrium is still mostly supported on rationalizable actions.

Proposition 1. If $x^* = (x_1^*, \cdots, x_N^*)$ is an $\epsilon$-Nash with $\epsilon < \frac{\Delta^2}{24N^2A}$, then $\forall i$, $\Pr_{a \sim x_i^*}[a \in E_L] \le \frac{2L\epsilon}{\Delta}$.

Therefore, for two-player zero-sum games, it is possible to run an approximate NE solver and automatically find a rationalizable $\epsilon$-NE. However, this method will induce a rather slow rate^7, and we will provide a much more efficient algorithm for finding rationalizable $\epsilon$-NE in Section 4.

3 LEARNING RATIONALIZABLE ACTION PROFILES

In order to learn a rationalizable CE/CCE, one might suggest identifying the set of all rationalizable actions, and then learning CE or CCE on this subgame. Unfortunately, as shown by Proposition 2, even the simpler problem of deciding whether one single action is rationalizable is statistically hard.

Proposition 2. For $\Delta < 0.1$, any algorithm that correctly decides whether an action is $\Delta$-rationalizable with 0.9 probability needs $\Omega(A^{N-1}\Delta^{-2})$ samples.

This negative result motivates us to consider an easier task: can we at least find one rationalizable action profile sample-efficiently? Formally, we say an action profile $(a_1, \ldots, a_N)$ is rationalizable if for all $i \in [N]$, $a_i$ is a rationalizable action. This is arguably one of the most fundamental tasks regarding rationalizability. For mixed-strategy dominance-solvable games (Alon et al., 2021), the unique rationalizable action profile will be the unique NE and also the unique CE of the game. Therefore this easier task per se is still of practical importance. In this section we answer this question in the affirmative. We provide a sample-efficient algorithm which finds a rationalizable action profile using only $\widetilde O\big(\frac{LNA}{\Delta^2}\big)$ samples. This algorithm will also serve as an important subroutine for the algorithms finding rationalizable CCE/CE in the later sections.

^7 For two-player zero-sum games, the marginals of any CCE form an NE, so NE can be found efficiently. This is not true for general games, where finding NE is computationally hard and takes $\Omega(2^N)$ samples.

Algorithm 1 Iterative Best Response
1: Initialization: choose $a_i^{(0)} \in A_i$ arbitrarily for all $i \in [N]$
2: for $l = 1, \cdots, L$ do
3:   for $i \in [N]$ do
4:     For all $a \in A_i$, play $(a, a_{-i}^{(l-1)})$ for $M$ times, compute player $i$'s average payoff $\hat u_i(a, a_{-i}^{(l-1)})$
5:     Set $a_i^{(l)} \leftarrow \arg\max_{a \in A_i} \hat u_i(a, a_{-i}^{(l-1)})$ // Computing the empirical best response
6: return $(a_1^{(L)}, \cdots, a_N^{(L)})$

The intuition behind this algorithm is simple: if an action profile $a_{-i}$ can survive $l$ rounds of IDE, then its best response $a_i$ (i.e., $\arg\max_{a \in A_i} u_i(a, a_{-i})$) can survive at least $l+1$ rounds of IDE, since the action $a_i$ can only be eliminated after some actions in $a_{-i}$ are eliminated. Concretely, we start from an arbitrary action profile $(a_1^{(0)}, \ldots, a_N^{(0)})$. In each round $l \in [L]$, we compute the (empirical) best response to $a_{-i}^{(l-1)}$ for each $i \in [N]$, and use those best responses to construct a new action profile $(a_1^{(l)}, \ldots, a_N^{(l)})$. By constructing iterative best responses, we will end up with an action profile that can survive $L$ rounds of IDE, which means surviving any number of rounds of IDE according to the definition of $L$.
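The loop just described is short enough to sketch directly. This is a minimal sketch of Algorithm 1, assuming a bandit-feedback oracle sample_payoff(i, profile) that returns one noisy payoff draw in [0, 1] for player i (the oracle name and interface are hypothetical):

```python
import numpy as np

def iterative_best_response(sample_payoff, action_sets, L, M):
    """Sketch of Algorithm 1: L rounds of empirical best responses, each
    estimated from M bandit samples per candidate action."""
    N = len(action_sets)
    profile = [actions[0] for actions in action_sets]  # arbitrary a^(0)
    for _ in range(L):
        new_profile = list(profile)
        for i in range(N):
            means = []
            for a in action_sets[i]:
                joint = list(profile)
                joint[i] = a  # play (a, a_{-i}^{(l-1)}) for M rounds
                means.append(np.mean([sample_payoff(i, tuple(joint)) for _ in range(M)]))
            new_profile[i] = action_sets[i][int(np.argmax(means))]  # empirical BR
        profile = new_profile  # all players best-respond to the OLD profile
    return tuple(profile)
```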
The full algorithm is presented in Algorithm 1, for which we have the following theoretical guarantee.

Theorem 3. With $M = \lceil\frac{16\ln(LNA/\delta)}{\Delta^2}\rceil$, with probability $1 - \delta$, Algorithm 1 returns an action profile that is $\Delta$-rationalizable using a total of $\widetilde O\big(\frac{LNA}{\Delta^2}\big)$ samples.

Wu et al. (2021) provide the first polynomial sample complexity results for finding rationalizable action profiles. They prove that the Exp3-DH algorithm is able to find a distribution with a $1-\zeta$ fraction supported on $\Delta$-rationalizable actions using $\widetilde O\big(\frac{L^{1.5}N^3A^{1.5}}{\zeta^3\Delta^3}\big)$ samples under bandit feedback.^8 Compared to their result, our sample complexity bound $\widetilde O\big(\frac{LNA}{\Delta^2}\big)$ has more favorable dependence on all problem parameters, and our algorithm outputs a distribution that is fully supported on rationalizable actions (thus has no dependence on $\zeta$).

We further complement Theorem 3 with a sample complexity lower bound showing that the linear dependency on $N$ and $A$ is optimal. This lower bound suggests that the $\widetilde O\big(\frac{LNA}{\Delta^2}\big)$ upper bound is tight up to logarithmic factors when $L = O(1)$, and we conjecture that this is true for general $L$.

Theorem 4. Even for games with $L \le 2$, any algorithm that returns a $\Delta$-rationalizable action profile with 0.9 probability needs $\Omega\big(\frac{NA}{\Delta^2}\big)$ samples.

Conjecture 5. The minimax optimal sample complexity for finding a $\Delta$-rationalizable action profile is $\Theta\big(\frac{LNA}{\Delta^2}\big)$ for games with minimum elimination length $L$.

4 LEARNING RATIONALIZABLE COARSE CORRELATED EQUILIBRIA (CCE)

In this section we introduce our algorithm for efficiently learning rationalizable CCEs. The high-level idea is to run no-regret Hedge-style algorithms for every player, while constraining the strategy inside the rationalizable region. Our algorithm is motivated by the fact that the probability of playing a dominated action decays exponentially over time in the Hedge algorithm for the adversarial setting under full-information feedback (Cohen et al., 2017). The full algorithm description is provided in Algorithm 2, and here we explain several key components in our algorithm design.

Correlated Exploration Scheme. In the bandit feedback setting, standard exponential weights algorithms such as EXP3.IX require importance sampling and biased estimators to derive a high-probability regret bound (Neu, 2015). However, such bias could cause a dominating strategy to lose its advantage. In our algorithm we adopt a correlated exploration scheme, which essentially simulates full-information feedback by bandit feedback using $NA$ samples. Specifically, at every time step $t$, the players take turns enumerating their action sets, while the other players fix their strategies according to Hedge. For $i \in [N]$ and $t \ge 2$, we denote by $\theta_i^{(t)}$ the strategy computed using Hedge for player $i$ in round $t$. The joint strategy $(a, \theta_{-i}^{(t)})$ is played to estimate player $i$'s payoff $u_i^{(t)}(a)$.

^8 Wu et al. (2021)'s result allows trade-offs between variables via different choices of algorithmic parameters. However, a $\zeta^{-1}\Delta^{-3}$ factor is unavoidable regardless of the choice of parameters.

Algorithm 2 Hedge for Rationalizable $\epsilon$-CCE
1: $(a_1^\star, \cdots, a_N^\star) \leftarrow$ Algorithm 1
2: For all $i \in [N]$, initialize $\theta_i^{(1)}(\cdot) \leftarrow \mathbb{1}[\cdot = a_i^\star]$
3: for $t = 1, \cdots, T$ do
4:   for $i = 1, \cdots, N$ do
5:     For all $a \in A_i$, play $(a, \theta_{-i}^{(t)})$ for $M_t$ times, compute player $i$'s average payoff $u_i^{(t)}(a)$
6:     Set $\theta_i^{(t+1)}(\cdot) \propto \exp\big(\eta_t \sum_{\tau=1}^t u_i^{(\tau)}(\cdot)\big)$
7: For all $t \in [T]$ and $i \in [N]$, eliminate all actions in $\theta_i^{(t)}$ with probability smaller than $p$, then renormalize the vector to the simplex as $\bar\theta_i^{(t)}$
8: output: $\big(\sum_{t=1}^T \otimes_{i=1}^N \bar\theta_i^{(t)}\big)/T$
It is important to note that such a correlated scheme does not require any communication between the players—the players can schedule the whole process before the game starts.

Rationalizable Initialization and Variance Reduction. We use Algorithm 1, which learns a rationalizable action profile, to give the strategy for the first round. By carefully preserving the disadvantage of any iteratively dominated action, we keep the iterates inside the rationalizable region throughout the whole learning process. To ensure this for every iterate with high probability, a minibatch is used to reduce the variance of the estimator.

Clipping. In the final step, we clip all actions with small probabilities, so that iteratively dominated actions do not appear in the output. The threshold is small enough not to affect the $\epsilon$-CCE guarantee.

4.1 THEORETICAL GUARANTEE

In Algorithm 2, we choose parameters in the following manner:
$$\eta_t = \max\Big\{\sqrt{\frac{\ln A}{t}},\ \frac{4\ln(1/p)}{\Delta t}\Big\}, \qquad M_t = \Big\lceil\frac{64\ln(ANT/\delta)}{\Delta^2 t}\Big\rceil, \qquad p = \frac{\min\{\epsilon, \Delta\}}{8AN}. \tag{1}$$
Note that our learning rate can be bigger than the standard learning rate in FTRL algorithms when $t$ is small. The purpose is to guarantee the rationalizability of the iterates from the beginning of the learning process. As will be shown in the proof, this larger learning rate will not hurt the final rate. We now state the theoretical guarantee for Algorithm 2.

Theorem 6. With parameters chosen as in Eq. (1), after $T = \widetilde O\big(\frac{1}{\epsilon^2} + \frac{1}{\epsilon\Delta}\big)$ rounds, with probability $1 - 3\delta$, the output strategy of Algorithm 2 is a $\Delta$-rationalizable $\epsilon$-CCE. The total sample complexity is $\widetilde O\big(\frac{LNA}{\Delta^2} + \frac{NA}{\epsilon^2}\big)$.

Remark 7. Due to our lower bound (Theorem 4), an $\widetilde O(\frac{NA}{\Delta^2})$ term is unavoidable, since learning a rationalizable action profile is an easier task than learning rationalizable CCE. Based on our Conjecture 5, the additional $L$ dependency is also likely to be inevitable. On the other hand, learning an $\epsilon$-CCE alone only requires $\widetilde O(\frac{A}{\epsilon^2})$ samples, whereas in our bound we have a larger $\widetilde O(\frac{NA}{\epsilon^2})$ term. The extra $N$ factor is a consequence of our correlated exploration scheme, in which only one player explores at a time. Removing this $N$ factor might require more sophisticated exploration methods and utility estimators, which we leave as future work.

Remark 8. Invoking Algorithm 1 requires knowledge of $L$, which may not be available in practice. In that case, an estimate $L'$ may be used in its stead. If $L' \ge L$ (for instance when $L' = NA$), we can recover the current rationalizability guarantee, albeit with a larger sample complexity scaling with $L'$. If $L' < L$, we can still guarantee that the output policy avoids actions in $E_{L'}$, which are, informally speaking, actions that would be eliminated with $L'$ levels of reasoning.
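The following is a minimal sketch of Algorithm 2's main loop with the parameters from Eq. (1). The simulator est_payoff(i, a, theta, M) (an assumed oracle, not part of the paper) returns the average of M bandit draws of $u_i(a, \theta_{-i})$; a_star is the rationalizable profile returned by Algorithm 1, given as action indices:

```python
import numpy as np

def hedge_rationalizable_cce(est_payoff, action_sets, T, gap, eps, delta, a_star):
    """Sketch of Algorithm 2: Hedge with rationalizable initialization,
    minibatched correlated exploration, and final clipping at threshold p."""
    N, A = len(action_sets), max(len(s) for s in action_sets)
    p = min(eps, gap) / (8 * A * N)
    cum = [np.zeros(len(s)) for s in action_sets]   # running sums of u_i^(tau)
    theta = [np.eye(len(s))[a_star[i]] for i, s in enumerate(action_sets)]
    history = []
    for t in range(1, T + 1):
        M_t = int(np.ceil(64 * np.log(A * N * T / delta) / (gap**2 * t)))
        eta_t = max(np.sqrt(np.log(A) / t), 4 * np.log(1 / p) / (gap * t))
        for i in range(N):  # players take turns enumerating their actions
            u_t = np.array([est_payoff(i, a, theta, M_t)
                            for a in range(len(action_sets[i]))])
            cum[i] = cum[i] + u_t
        history.append([th.copy() for th in theta])
        for i in range(N):  # exponential weights update, numerically stabilized
            w = np.exp(eta_t * (cum[i] - cum[i].max()))
            theta[i] = w / w.sum()
    clipped = []
    for round_ in history:  # clip sub-p actions, renormalize (line 7)
        row = []
        for th in round_:
            q = np.where(th < p, 0.0, th)
            row.append(q / q.sum())
        clipped.append(row)
    return clipped  # output: uniform mixture over rounds of product strategies
```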
4.1.1 OVERVIEW OF THE ANALYSIS

We give an overview of our analysis of Algorithm 2 below. The full proof is deferred to Appendix C.

Step 1: Ensure rationalizability. We first show that rationalizability is preserved at each iterate, i.e., actions in $E_L$ are played with low probability across all iterates. Formally,

Lemma 9. With probability at least $1 - 2\delta$, for all $t \in [T]$ and all $i \in [N]$, $a_i \in A_i \cap E_L$, we have $\theta_i^{(t)}(a_i) \le p$. Here $p$ is defined in (1).

Lemma 9 guarantees that, after the clipping in Line 7 of Algorithm 2, the output correlated strategy is $\Delta$-rationalizable. We proceed to explain the main idea for proving Lemma 9. A key observation is that the set of rationalizable actions, $\cup_{i=1}^N A_i \setminus E_L$, is closed under best response—for the $i$-th player, as long as the other players continue to play actions in $\cup_{j \ne i} A_j \setminus E_L$, actions in $A_i \cap E_L$ will suffer from excess losses each round in an exponential weights style algorithm. Concretely, for any $a_{-i} \in (\prod_{j \ne i} A_j) \setminus E_L$ and any iteratively dominated action $a_i \in A_i \cap E_L$, there always exists $x_i \in \Delta(A_i)$ such that $u_i(x_i, a_{-i}) \ge u_i(a_i, a_{-i}) + \Delta$. With our choice of $p$ in Eq. (1), if the other players choose their actions from $\cup_{j \ne i} A_j \setminus E_L$ with probability $1 - pAN$, we can still guarantee an excess loss of $\Omega(\Delta)$. It follows that
$$\sum_{\tau=1}^t u_i^{(\tau)}(x_i) - \sum_{\tau=1}^t u_i^{(\tau)}(a_i) \ge \Omega(t\Delta) - \text{Sampling Noise}.$$
However, this excess loss can be obscured by the noise from bandit feedback when $t$ is small. Note that it is crucial that the statement of Lemma 9 holds for all $t$ due to the inductive nature of the proof. As a solution, we use a minibatch of size $M_t = \widetilde O\big(\lceil\frac{1}{\Delta^2 t}\rceil\big)$ in the $t$-th round to reduce the variance of the payoff estimator $u_i^{(t)}$. The noise term can now be upper bounded via Azuma-Hoeffding by
$$\text{Sampling Noise} \le \widetilde O\Big(\sqrt{\sum_{\tau=1}^t \frac{1}{M_\tau}}\Big) \le O(t\Delta).$$
Combining this with our choice of the learning rate $\eta_t$ gives
$$\eta_t\Big(\sum_{\tau=1}^t u_i^{(\tau)}(x_i) - \sum_{\tau=1}^t u_i^{(\tau)}(a_i)\Big) \gg 1. \tag{2}$$
By the update rule of the Hedge algorithm, this implies that $\theta_i^{(t+1)}(a_i) \le p$, which enables us to complete the proof of Lemma 9 via induction on $t$.
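To make the last implication concrete, the appendix (Lemma C.2) instantiates Eq. (2) with the learning rate from Eq. (1): the excess loss is at least $\frac{t\Delta}{4}$, so the exponential weights update caps the dominated action's probability at
$$\theta_i^{(t+1)}(a_i) \;\le\; \exp\Big(-\eta_t \sum_{\tau=1}^t \big(u_i^{(\tau)}(x_i) - u_i^{(\tau)}(a_i)\big)\Big) \;\le\; \exp\Big(-\frac{4\ln(1/p)}{\Delta t}\cdot\frac{\Delta t}{4}\Big) \;=\; p.$$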
Step 2: Combine with no-regret guarantees. Next, we prove that the output strategy is an $\epsilon$-CCE. For a player $i \in [N]$, the regret is defined as $\mathrm{Regret}_T^i = \max_{\theta \in \Delta(A_i)} \sum_{t=1}^T \langle u_i^{(t)}, \theta - \theta_i^{(t)}\rangle$. We can obtain the following regret bound by standard analysis of FTRL with changing learning rates.

Lemma 10. For all $i \in [N]$, $\mathrm{Regret}_T^i \le \widetilde O\big(\sqrt{T} + \frac{1}{\Delta}\big)$.

Here the additive $1/\Delta$ term is the result of our larger $\widetilde O(\Delta^{-1}t^{-1})$ learning rate for small $t$. It follows from Lemma 10 that $T = \widetilde O\big(\frac{1}{\epsilon^2} + \frac{1}{\Delta\epsilon}\big)$ suffices to guarantee that the correlated strategy $\frac{1}{T}\big(\sum_{t=1}^T \otimes_{i=1}^N \theta_i^{(t)}\big)$ is an $(\epsilon/2)$-CCE. Since $pNA = O(\epsilon)$, the clipping step only minorly affects the CCE guarantee, and the clipped strategy $\frac{1}{T}\big(\sum_{t=1}^T \otimes_{i=1}^N \bar\theta_i^{(t)}\big)$ is an $\epsilon$-CCE.

4.2 APPLICATION TO LEARNING RATIONALIZABLE NASH EQUILIBRIUM

Algorithm 2 can also be applied to two-player zero-sum games to learn a rationalizable $\epsilon$-NE efficiently. Note that in two-player zero-sum games, the marginal distribution of an $\epsilon$-CCE is guaranteed to be a $2\epsilon$-Nash (see, e.g., Proposition 9 in Bai et al. (2020)). Hence direct application of Algorithm 2 to a zero-sum game gives the following sample complexity bound.

Corollary 11. In a two-player zero-sum game, the sample complexity for finding a $\Delta$-rationalizable $\epsilon$-Nash with Algorithm 2 is $\widetilde O\big(\frac{LA}{\Delta^2} + \frac{A}{\epsilon^2}\big)$.

This result improves over a direct application of Proposition 1, which gives $\widetilde O\big(\frac{A^3}{\Delta^4} + \frac{A}{\epsilon^2}\big)$ sample complexity and produces an $\epsilon$-Nash that could still take unrationalizable actions with positive probability.

5 LEARNING RATIONALIZABLE CORRELATED EQUILIBRIUM

In order to extend our results on $\epsilon$-CCE to $\epsilon$-CE, a natural approach would be augmenting Algorithm 2 with the celebrated Blum-Mansour reduction (Blum & Mansour, 2007) from swap regret to external regret. In this reduction, one maintains $A$ instances of a no-regret algorithm $\{\mathrm{Alg}_1, \cdots, \mathrm{Alg}_A\}$. In iteration $t$, the player stacks the recommendations of the $A$ algorithms as a matrix, denoted by $\hat\theta^{(t)} \in \mathbb{R}^{A \times A}$, and computes its eigenvector $\theta^{(t)}$ as the randomized strategy in round $t$. After observing the actual payoff vector $u^{(t)}$, it passes the weighted payoff vector $\theta^{(t)}(a)\,u^{(t)}$ to algorithm $\mathrm{Alg}_a$ for each $a$. In this section, we focus on a fixed player $i$, and omit the subscript $i$ when it is clear from the context.

Applying this reduction to Algorithm 2 directly, however, would fail to preserve rationalizability, since the weighted loss vector $\theta^{(t)}(a)\,u^{(t)}$ admits a smaller utility gap $\theta^{(t)}(a)\Delta$. Specifically, consider an action $b$ dominated by a mixed strategy $x$. In the payoff estimate of instance $a$,
$$\sum_{\tau=1}^t \theta^{(\tau)}(a)\big(u^{(\tau)}(x) - u^{(\tau)}(b)\big) \gtrsim \Delta\sum_{\tau=1}^t \theta^{(\tau)}(a) - \sqrt{\sum_{\tau=1}^t \frac{1}{M^{(\tau)}}} \;\not\ge\; 0, \tag{3}$$
which means that we cannot guarantee the elimination of IDAs every round as in Eq. (2). In Algorithm 3, we address this by making $\sum_{\tau=1}^t \theta^{(\tau)}(a)$ play the role of $t$, tracking the progress of each no-regret instance separately. In time step $t$, we compute the average payoff vector $u^{(t)}$ based on $M^{(t)}$ samples; then, as in the Blum-Mansour reduction, we update the $A$ instances of Hedge with weighted payoffs $\theta^{(t)}(a)\,u^{(t)}$ and use the eigenvector of $\hat\theta$ as the strategy for the next round. The key detail here is our choice of parameters, which adapts to the past strategies $\{\theta^{(\tau)}\}_{\tau=1}^t$:
$$M_i^{(t)} := \Big\lceil\max_a \frac{64\,\theta_i^{(t)}(a)}{\Delta^2\cdot\sum_{\tau=1}^t \theta_i^{(\tau)}(a)}\Big\rceil, \qquad \eta_{t,i}^a := \max\Big\{\frac{2\ln(1/p)}{\Delta\sum_{\tau=1}^t \theta_i^{(\tau)}(a)},\ \frac{\sqrt{A\ln A}}{\sqrt t}\Big\}, \qquad p = \frac{\min\{\epsilon, \Delta\}}{8AN}. \tag{4}$$
Compared to Eq. (1), we are essentially replacing $t$ with an adaptive $\sum_{\tau=1}^t \theta^{(\tau)}(a)$. We can now improve (3) to
$$\sum_{\tau=1}^t \theta^{(\tau)}(a)\big(u^{(\tau)}(x) - u^{(\tau)}(b)\big) \gtrsim \Delta\sum_{\tau=1}^t \theta^{(\tau)}(a) - \sqrt{\sum_{\tau=1}^t \frac{\theta^{(\tau)}(a)^2}{M^{(\tau)}}} \gtrsim \Delta\sum_{\tau=1}^t \theta^{(\tau)}(a). \tag{5}$$
This together with our choice of $\eta_t^a$ allows us to ensure the rationalizability of every iterate. The full algorithm is presented in Algorithm 3.

Algorithm 3 Adaptive Hedge for Rationalizable $\epsilon$-CE
1: $(a_1^\star, \cdots, a_N^\star) \leftarrow$ Algorithm 1
2: For all $i \in [N]$, initialize $\theta_i^{(1)} \leftarrow (1 - |A_i|p)\,\mathbb{1}[\cdot = a_i^\star] + p\,\mathbf{1}$
3: for $t = 1, 2, \ldots, T$ do
4:   for $i = 1, 2, \ldots, N$ do
5:     For all $a \in A_i$, play $(a, \theta_{-i}^{(t)})$ for $M_i^{(t)}$ times, compute player $i$'s average payoff $u_i^{(t)}(a)$
6:     For all $b \in A_i$, set $\hat\theta_i^{(t+1)}(\cdot \mid b) \propto \exp\big(\eta_{t,i}^b \sum_{\tau=1}^t u_i^{(\tau)}(\cdot)\,\theta_i^{(\tau)}(b)\big)$
7:     Find $\theta_i^{(t+1)} \in \Delta(A_i)$ such that $\theta_i^{(t+1)}(a) = \sum_{b \in A_i} \hat\theta_i^{(t+1)}(a \mid b)\,\theta_i^{(t+1)}(b)$
8: For all $t \in [T]$ and $i \in [N]$, eliminate all actions in $\theta_i^{(t)}$ with probability smaller than $p$, then renormalize the vector to the simplex as $\bar\theta_i^{(t)}$
9: output: $\big(\sum_{t=1}^T \otimes_{i=1}^N \bar\theta_i^{(t)}\big)/T$

We proceed to our theoretical guarantee for Algorithm 3. The analysis framework is largely similar to that of Algorithm 2. Our choice of $M_i^{(t)}$ is sufficient to ensure $\Delta$-rationalizability via the Azuma-Hoeffding inequality, while a swap-regret analysis of the algorithm proves that the average (clipped) strategy is indeed an $\epsilon$-CE. The full proof is deferred to Appendix D.

Theorem 12. With parameters in Eq. (4), after $T = \widetilde O\big(\frac{A}{\epsilon^2} + \frac{A}{\Delta^2}\big)$ rounds, with probability $1 - 3\delta$, the output strategy of Algorithm 3 is a $\Delta$-rationalizable $\epsilon$-CE. The total sample complexity is $\widetilde O\big(\frac{LNA}{\Delta^2} + \frac{NA^2}{\min\{\Delta^2, \epsilon^2\}}\big)$.

Compared to Theorem 6, our second term has an additional $A$ factor, which is quite reasonable considering that algorithms for learning $\epsilon$-CE take $\widetilde O(A^2\epsilon^{-2})$ samples, also $A$-times larger than the $\epsilon$-CCE rate.
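The fixed-point step (line 7 of Algorithm 3) is the only non-obvious piece of the Blum-Mansour-style reduction, so here is a minimal sketch of one update for a single player. The function names are hypothetical, and power iteration is used as one way to compute the stationary distribution of the row-stochastic matrix $\hat\theta$ (degenerate cases may need an exact eigenvector solve):

```python
import numpy as np

def stationary_strategy(Q, iters=200):
    """Line 7: find theta with theta(a) = sum_b theta(b) Q[b, a], where row
    Q[b] is expert b's recommendation theta_hat(.|b). Power-iteration sketch;
    Q is row-stochastic, so such a fixed point exists."""
    A = Q.shape[0]
    theta = np.full(A, 1.0 / A)
    for _ in range(iters):
        theta = theta @ Q
    return theta / theta.sum()

def swap_regret_step(cum_weighted_u, theta, u_t, etas):
    """One adaptive-Hedge update (lines 5-7), assuming the averaged payoff
    vector u_t is already computed. cum_weighted_u[b] accumulates
    sum_tau theta^(tau)(b) * u^(tau); etas[b] is expert b's adaptive rate
    from Eq. (4)."""
    cum_weighted_u += theta[:, None] * u_t[None, :]
    # Each expert b runs exponential weights on its weighted payoffs.
    Q = np.exp(etas[:, None] * (cum_weighted_u - cum_weighted_u.max(axis=1, keepdims=True)))
    Q /= Q.sum(axis=1, keepdims=True)
    return stationary_strategy(Q), cum_weighted_u
```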
6 REDUCTION-BASED ALGORITHMS

While Algorithms 2 and 3 make use of one specific no-regret algorithm, namely Hedge (Exponential Weights), in this section we show that arbitrary algorithms for finding CCE/CE can be augmented to find rationalizable CCE/CE. The sample complexity obtained via this reduction is comparable with those of Algorithms 2 and 3 when $L = \Theta(NA)$, but slightly worse when $L \ll NA$. Moreover, this black-box approach enables us to derive algorithms for rationalizable equilibria with more desirable qualities, such as last-iterate convergence, when using equilibria-finding algorithms with these properties.

Suppose that we are given a black-box algorithm $\mathcal{O}$ that finds $\epsilon$-CCE in arbitrary games. We can then use this algorithm in the following "support expansion" manner. We start with a subgame of only rationalizable actions, which can be identified efficiently with Algorithm 1, and call $\mathcal{O}$ to find an $\epsilon$-CCE $\Pi$ for the subgame. Next, we check for each $i \in [N]$ whether the best response to $\Pi_{-i}$ is contained in $\mathcal{A}_i^{(t)}$. If not, this means that the subgame's $\epsilon$-CCE may not be an $\epsilon$-CCE for the full game; in this case, the best response to $\Pi_{-i}$ is a rationalizable action that we can safely include into the action set. On the other hand, if the best response falls in $\mathcal{A}_i^{(t)}$ for all $i$, we can conclude that $\Pi$ is also an $\epsilon$-CCE for the original game. The details are given by Algorithm 4, and our main theoretical guarantee is the following.

Algorithm 4 Rationalizable $\epsilon$-CCE via Black-box Reduction
1: $(a_1^\star, \cdots, a_N^\star) \leftarrow$ Algorithm 1
2: For all $i \in [N]$, initialize $\mathcal{A}_i^{(1)} \leftarrow \{a_i^\star\}$
3: for $t = 1, 2, \ldots$ do
4:   Find an $\epsilon'$-CCE $\Pi$ with black-box algorithm $\mathcal{O}$ in the subgame $\prod_{i \in [N]} \mathcal{A}_i^{(t)}$
5:   $\forall i \in [N]$, $a_i' \in A_i$, evaluate $u_i(a_i', \Pi_{-i})$ for $M$ times and compute the average $\hat u_i(a_i', \Pi_{-i})$
6:   for $i \in [N]$ do
7:     Let $a_i' \leftarrow \arg\max_{a \in A_i} \hat u_i(a, \Pi_{-i})$ // Computing the empirical best response
8:     $\mathcal{A}_i^{(t+1)} \leftarrow \mathcal{A}_i^{(t)} \cup \{a_i'\}$
9:   if $\mathcal{A}_i^{(t)} = \mathcal{A}_i^{(t+1)}$ for all $i \in [N]$ then
10:    return $\Pi$

Theorem 13. Algorithm 4 outputs a $\Delta$-rationalizable $\epsilon$-CCE with high probability, using at most $NA$ calls to the black-box CCE algorithm and $\widetilde O\big(\frac{N^2A^2}{\min\{\epsilon^2, \Delta^2\}}\big)$ additional samples.

Using similar algorithmic techniques, we can develop a reduction scheme for rationalizable $\epsilon$-CE. The detailed description of this algorithm is deferred to Appendix E. Here we only state its main theoretical guarantee.

Theorem 14. There exists an algorithm that outputs a $\Delta$-rationalizable $\epsilon$-CE with high probability, using at most $NA$ calls to a black-box CE algorithm and $\widetilde O\big(\frac{N^2A^3}{\min\{\epsilon^2, \Delta^2\}}\big)$ additional samples.
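The support-expansion loop of Algorithm 4 is compact enough to sketch directly. In the sketch below, cce_solver and best_response_est stand in for line 4 and lines 5-7 respectively (both are assumed sample-based oracles, and the function names are hypothetical):

```python
def rationalizable_cce_reduction(cce_solver, best_response_est, action_sets, a_star):
    """Sketch of Algorithm 4: grow the subgame with empirical best responses
    until the black-box eps'-CCE is stable under full-game best responses."""
    N = len(action_sets)
    support = [{a_star[i]} for i in range(N)]   # A_i^(1) = {a_i*}
    while True:
        Pi = cce_solver([sorted(s) for s in support])  # eps'-CCE of the subgame
        expanded = False
        for i in range(N):
            br = best_response_est(i, Pi)  # argmax over the FULL action set A_i
            if br not in support[i]:
                support[i].add(br)         # a rationalizable action; grow subgame
                expanded = True
        if not expanded:
            return Pi                      # Pi is an eps-CCE of the full game
```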
Dingwen Kong is partially supported by the elite undergraduate training program of School of Mathematical Sciences in Peking University. A FURTHER DETAILS ON RATIONALIZABILITY A.1 EQUIVALENCE OF NEVER-BEST-RESPONSE AND STRICT DOMINANCE It is known that for finite normal form games, rationalizable actions are given by iterated elimination of never-best-response actions, which is in fact equivalent to the iterative elimination of strictly dominated actions (Osborne & Rubinstein, 1994, Lemma 60.1). Here, for completeness, we include a proof that the iterative elimination of of actions that are never ∆-best-response gives the same definition as Definition 1. Notice that it suffices to show that for every subgame, the set of never ∆-best response actions and the set of ∆-dominated actions are the same. Proposition A.1. Suppose that an action a ∈ Ai is never a ∆-best response, i.e. ∀Π−i ∈ ∆( ∏ j ̸=iAi), ∃u ∈ ∆(Ai) such that ui (a,Π−i) ≤ ui (u,Π−1)−∆. Then a is also ∆-dominated, i.e. ∃u ∈ ∆(Ai), ∀Π−i ∈ ∆( ∏ j ̸=iAi) ui (a,Π−i) ≤ ui (u,Π−1)−∆. Proof. That a is never a ∆-best response is equivalent to min Π−1 max u {ui (a,Π−i)− ui (u,Π−1)} ≤ −∆. That a is ∆-dominated is equivalent to max u min Π−1 {ui (a,Π−i)− ui (u,Π−1)} ≤ −∆. Equivalence immediately follows from von Neumman’s minimax theorem. A.2 PROOF OF PROPOSITION 1 Proof. We prove this inductively with the following hypothesis: ∀l ≥ 1,∀i ∈ [N ], ∑ a∈Ai x∗i (a) · 1[a ∈ El] ≤ 2lϵ ∆ . Base case: By the definition of ϵ-NE, ∀i ∈ [N ], ∀x′ ∈ ∆(Ai), ui(x ∗ i , x ∗ −i) ≥ ui(x′, x∗−i)− ϵ. Note that if ã ∈ E1 ∩ Ai, ∃x ∈ ∆(Ai) such that ∀a−i, ui(ã, a−i) ≤ ui(x, a−i)−∆. Therefore if we choose x′ := x∗i − ∑ a∈Ai 1[a ∈ E1]x∗i (a)ea + ∑ a∈Ai 1[a ∈ E1]x∗i (a) · x(a), that is if we play the dominating strategy instead of the dominated action in x∗i , then ui(x ′, x∗−i) ≥ ui(x∗i , x∗−i) + ∑ a∈Ai x∗i (a) · 1[a ∈ E1]∆. It follows that ∑ a∈Ai x∗i (a) · 1[a ∈ E1] ≤ ϵ ∆ . Induction step: By the induction hypothesis, ∀i ∈ [N ],∑ a∈Ai x∗i (a) · 1[a ∈ El] ≤ 2lϵ ∆ . Now consider x̃i := x∗i − ∑ a∈Ai 1[a ∈ El] · x ∗ i (a)ea 1− ∑ a∈Ai 1[a ∈ El] · x ∗ i (a) , (∀i ∈ [N ]) which is supported on actions on in El. The induction hypothesis implies ∥x̃i − x∗i ∥1 ≤ 6lϵ/∆. Therefore ∀i ∈ [N ], ∀a ∈ Ai, ∣∣ui(a, x̃−i)− ui(a, x∗−i)∣∣ ≤ 6Nlϵ∆ . Now if ã ∈ (El+1 \ El) ∩ Ai, since x̃−i is not supported on El, ∃x ∈ ∆(Ai) such that ui(ã, x̃−i) ≤ ui(x, x̃−i)−∆. It follows that ui(ã, x ∗ −i) ≤ ui(x, x∗−i)−∆+ 12Nlϵ ∆ ≤ ui(x, x∗−i)− ∆ 2 . Using the same arguments as in the base case,∑ a∈Ai x∗i (a) · 1[a ∈ El+1 \ El] ≤ ϵ ∆− 12Nlϵ∆ ≤ 2ϵ ∆ . It follows that ∀i ∈ [N ], ∑ a∈Ai x∗i (a) · 1[a ∈ El+1] ≤ 2(l + 1)ϵ ∆ . The statement is thus proved via induction on l. B FIND ONE RATIONALIZABLE ACTION PROFILE B.1 PROOF OF PROPOSITION 2 Proof. Consider the following N -player game denoted by G0 with action set [A]: ui (·) = 0 (1 ≤ i ≤ N − 1) uN (aN ) = ∆ · 1[aN > 1]. Specifically, a payoff with mean u is realized by a skewed Rademacher random variable with 1+u2 probability on +1 and 1−u2 on −1. In game G0, clearly for player N , the action 1 is ∆-dominated. However, consider the following game, denoted by Ga∗ (where a∗ ∈ [A]N−1) ui (·) = 0, (1 ≤ i ≤ N − 1) uN (aN ) = ∆, (aN > 1) uN (1, a−N ) = 2∆ · 1[a−N = a∗]. It can be seen that in game Ga∗ , for player N , the action 1 is not dominated or iteratively strictly dominated. Therefore, suppose that an algorithm O is able to determine whether an action is rationalizable (i.e. 
not iteratively strictly dominated) with 0.9 accuracy, then its output needs to be False with at least 0.9 probability in game G0, but True with at least 0.9 probability in game Ga∗ . By Pinsker’s inequality, KL(O(G0)||O(Ga∗)) ≥ 2 · 0.82 > 1, where we used O(G) to denote the trajectory generated by running algorithm O on game G. Meanwhile, notice that G0 and Ga∗ is different only when the first N − 1 players play a∗. Denote the number of times where the first N − 1 players play a∗ by n(a∗). Using the chain rule of KL-divergence, KL(O(G0)||O(Ga∗)) ≤ EG0 [n(a∗)] ·KL ( Ber ( 1 2 )∥∥∥∥Ber(1 + 2∆2 )) (a) ≤ EG0 [n(a∗)] · 1 1−2∆ 2 · (2∆)2 (b) ≤ 10∆2EG0 [n(a∗)] . Here (a) follows from reverse Pinsker’s inequality (see e.g. Binette (2019)), while (b) uses the fact that ∆ < 0.1. This means that for any a∗ ∈ [A]N−1, EG0 [n(a∗)] ≥ 1 10∆2 . It follows that the expected number of samples when running O on G0 is at least EG0 ∑ a∗∈[A]N−1 n(a∗) ≥ AN−1 10∆2 . B.2 PROOF OF THEOREM 3 Proof. We first present the concentration bound. For l ∈ [L], i ∈ [N ], and a ∈ Ai, by Hoeffding’s inequality we have that with probability at least 1− δLNA ,∣∣∣ui(a, a(l−1)−i )− ûi(a, a(l−1)−i )∣∣∣ ≤ √ 4 ln(ANL/δ) M ≤ ∆ 4 . Therefore by a union bound we have that with probability at least 1− δ, for all l ∈ [L], i ∈ [N ], and a ∈ Ai, ∣∣∣ui(a, a(l−1)−i )− ûi(a, a(l−1)−i )∣∣∣ ≤ ∆4 . We condition on this event for the rest of the proof. We use induction on l to prove that for all l ∈ [L] ∪ {0}, (a(l)1 , · · · , a (l) N ) can survive at least l rounds of IDE. The base case for l = 0 directly holds. Now we assume that the case for 1, 2, . . . , l− 1 holds and consider the case of l. For any i ∈ [N ], we show that a(l)i can survive at least l rounds of IDE. Recall that a (l) i is the empirical best response, i.e. a (l) i = argmax a∈Ai ûi(a, a (l−1) −i ). For any mixed strategy xi ∈ ∆(Ai), we have that ui(a (l) i , a (l−1) −i )− ui(xi, a (l−1) −i ) ≥ûi(a(l)i , a (l−1) −i )− ûi(xi, a (l−1) −i )− ∣∣∣ui(a(l)i , a(l−1)−i )− ûi(a(l)i , a(l−1)−i )∣∣∣− ∣∣∣ui(xi, a(l−1)−i )− ûi(xi, a(l−1)−i )∣∣∣ ≥0− ∆ 4 − ∆ 4 = −∆ 2 . Since actions in a(l−1)−i can survive at least l− 1 rounds of ∆-IDE, a (l) i cannot be ∆-dominated by xi in rounds 1, · · · , l. Since xi can be arbitrarily chosen, a(l)i can survive at least l rounds of ∆-IDE. We can now ensure that the output (a(L)1 , · · · , a (L) N ) survives L rounds of ∆-IDE, which is equivalent to ∆-rationalizability (see Definition 1). The total number of samples used is LNA ·M = Õ ( LNA ∆2 ) . B.3 PROOF OF THEOREM 4 Proof. Without loss of generality, assume that ∆ < 0.1. Consider the following instance where A1 = · · · = AN = [A]: ui(ai) = ∆ · 1[ai = 1], (i ̸= j) uj(aj , a−j) = { ∆ · 1[aj = 1] (a−j ̸= {1}N−1) ∆ · 1[aj = 1] + 2∆ · 1[aj = a] (a−j = {1}N−1) . Denote this instance by Gj,a. Additionally, define the following instance G0: ui(ai) = ∆ · 1[ai = 1]. (∀i ∈ [N ]) As before, a payoff with expectation u is realized as a random variable with distribution 2Ber( 1+u2 )−1. It can be seen that the only difference between G0 and Gj,a lies in uj(a, {1}N−1). By the KLdivergence chain rule, for any algorithm O, KL (O(G0)∥O(Gj,a)) ≤ 10∆2 · EG0 [ n(aj = a, a−j = {1}N−1) ] , where n(aj = a, a−j = {1}N−1) denotes the number of times the action profile (a, 1N−1) is played. Note that in G0, the only action profile surviving two rounds of ∆-IDE is (1, · · · , 1), while in Gj,a, the only rationalizable action profile is (1, · · · , 1︸ ︷︷ ︸ j−1 , a, 1, · · · , 1). 
To guarantee 0.9 accuracy, by Pinsker’s inequality, KL (O(G0)||O(Gj,a)) ≥ 1 2 |O(G0)−O(Gj,a)|2 > 1. It follows that ∀j ∈ [N ], a > 1, EG0 [ n(aj = a, a−j = {1}N−1) ] ≥ 1 10∆2 . Thus the total expected sample complexity is at least∑ a>1,j∈[N ] EG0 [ n(aj = a, a−j = {1}N−1) ] ≥ N(A− 1) 10∆2 . C OMITTED PROOFS IN SECTION 4 We start our analysis by bounding the sampling noise. For player i ∈ [N ], action ai ∈ Ai, and τ ∈ [T ], we denote the sampling noise as ξ (τ) i (ai) := u (τ) i (ai)− ui(ai, θ (τ) −i ). We have the following lemma. Lemma C.1. Let Ω1 denote the event that for all t ∈ [T ], i ∈ [N ], and ai ∈ Ai,∣∣∣∣∣ t∑ τ=1 ξ (τ) i (ai) ∣∣∣∣∣ ≤ 2 √√√√ln(ANT/δ) t∑ τ=1 1 Mτ . Then Pr[Ω1] ≥ 1− δ. Proof. Note that ∑t τ=1 ξ (τ) i (ai) can be written as the sum of ∑t τ=1 Mτ mean-zero bounded terms. By Azuma-Hoeffding inequality, with probability at least 1 − δANT , for a fixed i ∈ [N ], t ∈ [T ], ai ∈ Ai, ∣∣∣∣∣ t∑ τ=1 ξ (τ) i (ai) ∣∣∣∣∣ ≤ 2 √√√√ln(ANT/δ) t∑ τ=1 Mτ · ( 1 Mτ )2 . (6) A union bound over i ∈ [N ], t ∈ [T ], ai ∈ Ai proves the statement. Lemma C.2. With probability at least 1− 2δ, for all t ∈ [T ] and all i ∈ [N ], ai ∈ Ai ∩ EL, θ (t) i (ai) ≤ p. Proof. We condition on the event Ω1 defined in Lemma C.1 and the success of Algorithm 1. We prove the claim by induction in t. The base case for t = 1 holds directly by initialization. Now we assume the case for 1, 2, . . . , t holds and consider the case of t+ 1. Consider a fixed player i ∈ [N ] and iteratively dominated action ai ∈ Ai ∩EL. By definition there exists a mixed strategy xi such that for all a−i ∩ EL = ∅, ui(xi, a−i) ≥ ui(ai, a−i) + ∆. Therefore for τ ∈ [t], by the induction hypothesis for τ , ui(xi, θ (τ) −i ) ≥ ui(ai, θ (τ) −i ) + (1−ANp) ·∆−ANp ≥ ui(ai, θ(τ)−i ) + ∆/2. (7) Consequently, t∑ τ=1 (u (τ) i (xi)− u (τ) i (ai)) ≥ t∑ τ=1 (ui(xi, θ (τ) −i )− ui(ai, θ (τ) −i ))− 4 · √√√√ln(ANT/δ) t∑ τ=1 1 Mτ (By (6)) ≥ t∆ 2 − 4 · √√√√ln(ANT/δ) t∑ τ=1 1 Mτ (By (7)) ≥ t∆ 4 . Therefore by our choice of learning rate, θ (t+1) i (ai) ≤ exp ( −ηt · t∑ τ=1 ( u (τ) i (xi)− u (τ) i (ai) )) ≤ exp ( −4 ln(1/p) ∆t · ∆t 4 ) = p. Therefore θ (t+1) i (ai) ≤ p as desired. Now we turn to the ϵ-CCE guarantee. For a player i ∈ [N ], recall that the regret is defined as RegretiT = max θ∈∆(Ai) T∑ t=1 ⟨u(t)i , θ − θ (t) i ⟩. Lemma C.3. The regret can be bounded as RegretiT ≤ O (√ lnA · T + ln(1/p) lnT ∆ ) . Proof. Note that apart from the choice of θ(1), we are exactly running FTRL with learning rates ηt = max {√ lnA/t, 4 ln(1/p) ∆t } , which are monotonically decreasing. Therefore following the standard analysis of FTRL (see, e.g., Orabona (2019, Corollary 7.9)), we have max θ∈∆(Ai) T∑ t=1 ⟨u(t)i , θ − θ (t) i ⟩ ≤ 2 + lnA ηT + 1 2 T∑ t=1 ηt ≤ 2 + √ lnA · T + 1 2 T∑ t=1 (√ lnA t + 4 ln(1/p) ∆t ) = O (√ lnA · T + ln(1/p) lnT ∆ ) . However, this form of regret cannot directly imply approximate CCE. We define the following expected version regret Regreti,⋆T = max θ∈∆(Ai) T∑ t=1 ⟨ui(·, θ(t)−i), θ − θ (t) i ⟩. The next lemma bound the difference between these two types of regret Lemma C.4. The following event Ω2 holds with probability at least 1− δ: for all i ∈ [N ]∣∣∣Regreti,⋆T − RegretiT ∣∣∣ ≤ O (√T · ln(NA/δ)) . Proof. We denote Θi := {e1, e2, . . . 
However, this form of regret does not directly imply an approximate CCE. We define the following expected version of the regret:
$$\mathrm{Regret}^{i,\star}_T=\max_{\theta\in\Delta(\mathcal{A}_i)}\sum_{t=1}^T\langle u_i(\cdot,\theta^{(t)}_{-i}),\theta-\theta^{(t)}_i\rangle.$$
The next lemma bounds the difference between these two notions of regret.

Lemma C.4. The following event $\Omega_2$ holds with probability at least $1-\delta$: for all $i\in[N]$,
$$\big|\mathrm{Regret}^{i,\star}_T-\mathrm{Regret}^i_T\big| \le O\big(\sqrt{T\cdot\ln(NA/\delta)}\big).$$

Proof. Denote by $\Theta_i:=\{e_1,e_2,\ldots,e_{|\mathcal{A}_i|}\}$ the set of vertices of $\Delta(\mathcal{A}_i)$. Since a linear function over the simplex attains its maximum at a vertex, we have
$$\big|\mathrm{Regret}^{i,\star}_T-\mathrm{Regret}^i_T\big| = \Big|\max_{\theta\in\Delta(\mathcal{A}_i)}\sum_{t=1}^T\langle u_i(\cdot,\theta^{(t)}_{-i}),\theta-\theta^{(t)}_i\rangle-\max_{\theta\in\Delta(\mathcal{A}_i)}\sum_{t=1}^T\langle u^{(t)}_i,\theta-\theta^{(t)}_i\rangle\Big| = \Big|\max_{\theta\in\Theta_i}\sum_{t=1}^T\langle u_i(\cdot,\theta^{(t)}_{-i}),\theta-\theta^{(t)}_i\rangle-\max_{\theta\in\Theta_i}\sum_{t=1}^T\langle u^{(t)}_i,\theta-\theta^{(t)}_i\rangle\Big| \le \max_{\theta\in\Theta_i}\Big|\sum_{t=1}^T\langle u_i(\cdot,\theta^{(t)}_{-i})-u^{(t)}_i,\theta-\theta^{(t)}_i\rangle\Big|.$$
Note that $\langle u_i(\cdot,\theta^{(t)}_{-i})-u^{(t)}_i,\theta-\theta^{(t)}_i\rangle$ is a bounded martingale difference sequence. By the Azuma-Hoeffding inequality, for a fixed $\theta\in\Theta_i$, with probability at least $1-\frac{\delta}{AN}$,
$$\Big|\sum_{t=1}^T\langle u_i(\cdot,\theta^{(t)}_{-i})-u^{(t)}_i,\theta-\theta^{(t)}_i\rangle\Big| \le O\big(\sqrt{T\cdot\ln(NA/\delta)}\big).$$
A union bound completes the proof.

Proof of Theorem 6. We condition on the event $\Omega_1$ defined in Lemma C.1, the event $\Omega_2$ defined in Lemma C.4, and the success of Algorithm 1.

Coarse Correlated Equilibrium. By Lemma C.3 and Lemma C.4, we know that for all $i\in[N]$,
$$\mathrm{Regret}^{i,\star}_T \le O\Big(\sqrt{\ln A\cdot T}+\frac{\ln(1/p)\ln T}{\Delta}+\sqrt{T\cdot\ln(NA/\delta)}\Big).$$
Therefore, choosing $T=\Theta\big(\frac{\ln(NA/\delta)}{\epsilon^2}+\frac{\ln^2(NA/\Delta\epsilon\delta)}{\Delta\epsilon}\big)$ guarantees that $\mathrm{Regret}^{i,\star}_T$ is at most $\epsilon T/2$ for all $i\in[N]$. In this case the average strategy $\big(\sum_{t=1}^T\otimes_{i=1}^N\theta^{(t)}_i\big)/T$ is an $(\epsilon/2)$-CCE. Finally, in the clipping step, $\|\bar\theta^{(t)}_i-\theta^{(t)}_i\|_1\le 2pA\le\frac{\epsilon}{4N}$ for all $i\in[N]$, $t\in[T]$. Thus for all $t\in[T]$ we have $\|\otimes_{i=1}^N\bar\theta^{(t)}_i-\otimes_{i=1}^N\theta^{(t)}_i\|_1\le\frac{\epsilon}{4}$, which further implies
$$\Big\|\Big(\sum_{t=1}^T\otimes_{i=1}^N\bar\theta^{(t)}_i\Big)/T-\Big(\sum_{t=1}^T\otimes_{i=1}^N\theta^{(t)}_i\Big)/T\Big\|_1 \le \frac{\epsilon}{4}.$$
Therefore the output strategy $\Pi=\big(\sum_{t=1}^T\otimes_{i=1}^N\bar\theta^{(t)}_i\big)/T$ is an $\epsilon$-CCE.

Rationalizability. By Lemma C.2, if $a\in E_L\cap\mathcal{A}_i$, then $\theta^{(t)}_i(a)\le p$ for all $t\in[T]$. It follows that $\bar\theta^{(t)}_i(a)=0$, i.e., the action is not in the support of the output strategy $\Pi$.

Sample complexity. The total number of full-information queries is
$$\sum_{t=1}^T M_t \le T+\sum_{t=1}^T\frac{64\ln(ANT/\delta)}{\Delta^2 t} \le T+\tilde O\Big(\frac{1}{\Delta^2}\Big) = \tilde O\Big(\frac{1}{\Delta^2}+\frac{1}{\epsilon^2}\Big).$$
The total sample complexity for CCE learning is then $NA\cdot\sum_{t=1}^T M_t=\tilde O\big(\frac{NA}{\epsilon^2}+\frac{NA}{\Delta^2}\big)$. Finally, adding the cost of finding one IDE-surviving action profile, $\tilde O\big(\frac{LNA}{\Delta^2}\big)$, gives the claimed rate.

D OMITTED PROOFS IN SECTION 5

As in the CCE case, we first bound the sampling noise. For action $a_i\in\mathcal{A}_i$ and $\tau\in[T]$, we denote the sampling noise by $\xi^{(\tau)}_i(a_i):=u^{(\tau)}_i(a_i)-u_i(a_i,\theta^{(\tau)}_{-i})$. In the CE case we are interested in the weighted sum of noise $\sum_{\tau=1}^t\xi^{(\tau)}_i(a_i)\theta^{(\tau)}_i(b_i)$, which is bounded in the following lemma.

Lemma D.1. The following event $\Omega_3$ holds with probability at least $1-\delta$: for all $t\in[T]$, $i\in[N]$, and $a_i,b_i\in\mathcal{A}_i$,
$$\Big|\sum_{\tau=1}^t\xi^{(\tau)}_i(a_i)\theta^{(\tau)}_i(b_i)\Big| \le \frac{\Delta}{4}\sum_{\tau=1}^t\theta^{(\tau)}_i(b_i).$$

Proof. Note that $\sum_{\tau=1}^t\xi^{(\tau)}_i(a_i)\theta^{(\tau)}_i(b_i)$ can be written as the sum of $\sum_{\tau=1}^t M^\tau_i$ mean-zero bounded terms; precisely, there are $M^\tau_i$ terms bounded by $\frac{\theta^{(\tau)}_i(b_i)}{M^\tau_i}$. By the Azuma-Hoeffding inequality, with probability at least $1-\frac{\delta}{A^2NT}$,
$$\Big|\sum_{\tau=1}^t\xi^{(\tau)}_i(a_i)\theta^{(\tau)}_i(b_i)\Big| \le 2\sqrt{\ln(ANT/\delta)\sum_{\tau=1}^t M^\tau_i\cdot\Big(\frac{\theta^{(\tau)}_i(b_i)}{M^\tau_i}\Big)^2} = 2\sqrt{\ln(ANT/\delta)\sum_{\tau=1}^t\frac{\big(\theta^{(\tau)}_i(b_i)\big)^2}{M^\tau_i}} \le \frac{\Delta}{4}\sqrt{\sum_{\tau=1}^t\theta^{(\tau)}_i(b_i)\sum_{j=1}^\tau\theta^{(j)}_i(b_i)} \le \frac{\Delta}{4}\sum_{\tau=1}^t\theta^{(\tau)}_i(b_i).$$
Therefore a union bound completes the proof.
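The clipping step invoked in the proof of Theorem 6 (Line 7 of Algorithm 2, and likewise Line 8 of Algorithm 3) is simple to state in code. A minimal sketch, with names of our own choosing; the proof only uses the $\|\bar\theta-\theta\|_1\le 2pA$ property checked below.

```python
import numpy as np

def clip_strategy(theta, p):
    """Zero out entries below p, then renormalize back onto the simplex."""
    bar = np.where(theta < p, 0.0, theta)
    return bar / bar.sum()

rng = np.random.default_rng(0)
A, p = 6, 0.01
theta = rng.dirichlet(np.ones(A))
bar = clip_strategy(theta, p)
# Removed mass is at most p*A, and renormalization adds at most that again:
print(np.abs(bar - theta).sum(), "<=", 2 * p * A)
```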
Lemma D.2. With probability at least $1-2\delta$, for all $t\in[T]$, all $i\in[N]$, and all $a_i\in\mathcal{A}_i\cap E_L$, $\theta^{(t)}_i(a_i)\le p$.

Proof. We condition on the event $\Omega_3$ defined in Lemma D.1 and the success of Algorithm 1. We prove the claim by induction on $t$. The base case $t=1$ holds directly by initialization. Now assume the cases $1,2,\ldots,t$ hold and consider the case of $t+1$. Consider a fixed player $i\in[N]$, an iteratively dominated action $a_i\in\mathcal{A}_i\cap E_L$, and an expert $b_i$. By definition there exists a mixed strategy $x_i$ such that for all $a_{-i}$ with $a_{-i}\cap E_L=\emptyset$,
$$u_i(x_i,a_{-i}) \ge u_i(a_i,a_{-i})+\Delta.$$
Therefore, for $\tau\in[t]$, by the induction hypothesis,
$$u_i(x_i,\theta^{(\tau)}_{-i}) \ge u_i(a_i,\theta^{(\tau)}_{-i})+(1-ANp)\cdot\Delta-ANp \ge u_i(a_i,\theta^{(\tau)}_{-i})+\Delta/2.$$
Thus we have
$$\sum_{\tau=1}^t\big(u^{(\tau)}_i(x_i)-u^{(\tau)}_i(a_i)\big)\,\theta^{(\tau)}_i(b_i) \ge \sum_{\tau=1}^t\big(u_i(x_i,\theta^{(\tau)}_{-i})-u_i(a_i,\theta^{(\tau)}_{-i})\big)\,\theta^{(\tau)}_i(b_i)-\frac{\Delta}{4}\sum_{\tau=1}^t\theta^{(\tau)}_i(b_i) \ge \frac{\Delta}{2}\sum_{\tau=1}^t\theta^{(\tau)}_i(b_i)-\frac{\Delta}{4}\sum_{\tau=1}^t\theta^{(\tau)}_i(b_i) = \frac{\Delta}{4}\sum_{\tau=1}^t\theta^{(\tau)}_i(b_i).$$
By our choice of learning rate,
$$\hat\theta^{(t+1)}_i(a_i|b_i) \le \exp\Big(-\eta^{b_i}_{t,i}\sum_{\tau=1}^t\theta^{(\tau)}_i(b_i)\big(u^{(\tau)}_i(x_i)-u^{(\tau)}_i(a_i)\big)\Big) \le \exp\Big(-\frac{4\ln(1/p)}{\Delta\sum_{\tau=1}^t\theta^{(\tau)}_i(b_i)}\cdot\frac{\Delta}{4}\sum_{\tau=1}^t\theta^{(\tau)}_i(b_i)\Big)=p.$$
Therefore we conclude
$$\theta^{(t+1)}_i(a_i)=\sum_{b_i\in\mathcal{A}_i}\hat\theta^{(t+1)}_i(a_i|b_i)\,\theta^{(t+1)}_i(b_i) \le p.$$

Now we turn to the $\epsilon$-CE guarantee. For a player $i\in[N]$, recall that the swap-regret is defined as
$$\mathrm{SwapRegret}^i_T := \sup_{\phi:\mathcal{A}_i\to\mathcal{A}_i}\sum_{t=1}^T\sum_{b\in\mathcal{A}_i}\theta^{(t)}_i(b)\,u^{(t)}_i(\phi(b))-\sum_{t=1}^T\big\langle\theta^{(t)}_i,u^{(t)}_i\big\rangle.$$

Lemma D.3. For all $i\in[N]$, the swap-regret can be bounded as
$$\mathrm{SwapRegret}^i_T \le O\Big(\sqrt{A\ln(A)\,T}+\frac{A\ln^2(NAT/\Delta\epsilon)}{\Delta}\Big).$$

Proof. For $i\in[N]$, recall that the regret for an expert $b\in\mathcal{A}_i$ is defined as
$$\mathrm{Regret}^{i,b}_T := \max_{a\in\mathcal{A}_i}\sum_{t=1}^T\theta^{(t)}_i(b)\,u^{(t)}_i(a)-\sum_{t=1}^T\big\langle\hat\theta^{(t)}_i(\cdot|b),\,\theta^{(t)}_i(b)\,u^{(t)}_i\big\rangle.$$
Since $\theta^{(t)}_i(a)=\sum_{b\in\mathcal{A}_i}\hat\theta^{(t)}_i(a|b)\theta^{(t)}_i(b)$ for all $a$ and all $t>1$,
$$\sum_{b\in\mathcal{A}_i}\mathrm{Regret}^{i,b}_T = \sum_{b\in\mathcal{A}_i}\max_{a_b\in\mathcal{A}_i}\sum_{t=1}^T\theta^{(t)}_i(b)u^{(t)}_i(a_b)-\sum_{b\in\mathcal{A}_i}\sum_{t=1}^T\big\langle\hat\theta^{(t)}_i(\cdot|b)\theta^{(t)}_i(b),u^{(t)}_i\big\rangle = \max_{\phi:\mathcal{A}_i\to\mathcal{A}_i}\sum_{b\in\mathcal{A}_i}\sum_{t=1}^T\theta^{(t)}_i(b)u^{(t)}_i(\phi(b))-\sum_{t=1}^T\Big\langle\sum_{b\in\mathcal{A}_i}\hat\theta^{(t)}_i(\cdot|b)\theta^{(t)}_i(b),u^{(t)}_i\Big\rangle \ge \max_{\phi:\mathcal{A}_i\to\mathcal{A}_i}\sum_{t=1}^T\sum_{b\in\mathcal{A}_i}\theta^{(t)}_i(b)u^{(t)}_i(\phi(b))-\sum_{t=2}^T\big\langle\theta^{(t)}_i,u^{(t)}_i\big\rangle-1 \ge \mathrm{SwapRegret}^i_T-1.$$
It now suffices to control the regret of each individual expert. For expert $b$, we are essentially running FTRL with learning rates
$$\eta^b_{t,i}:=\max\Big\{\frac{4\ln(1/p)}{\Delta\sum_{\tau=1}^t\theta^{(\tau)}_i(b)},\ \frac{\sqrt{A\ln A}}{\sqrt t}\Big\},$$
which are clearly monotonically decreasing. Therefore, using the standard analysis of FTRL (see, e.g., Orabona (2019, Corollary 7.9)),
$$\mathrm{Regret}^{i,b}_T \le \frac{\ln A}{\eta^b_{T,i}}+\frac{1}{2}\sum_{t=1}^T\eta^b_{t,i}\,\theta^{(t)}_i(b)^2 \le \sqrt{\frac{T\ln A}{A}}+\sum_{t=1}^T\theta^{(t)}_i(b)\sqrt{\frac{A\ln A}{t}}+\frac{4\ln(1/p)}{\Delta}\sum_{t=1}^T\frac{\theta^{(t)}_i(b)}{\sum_{\tau=1}^t\theta^{(\tau)}_i(b)} \le \sqrt{\frac{T\ln A}{A}}+\sum_{t=1}^T\theta^{(t)}_i(b)\sqrt{\frac{A\ln A}{t}}+\frac{4\ln(1/p)}{\Delta}\Big(1+\ln\Big(\frac{T}{p}\Big)\Big).$$
Here we used the facts that $\theta^{(1)}_i(b)\ge p$ for all $b\in\mathcal{A}_i$, and
$$\sum_{t=1}^T\frac{\theta^{(t)}_i(b)}{\sum_{\tau=1}^t\theta^{(\tau)}_i(b)} \le 1+\int_{\theta^{(1)}_i(b)}^{\sum_{t=1}^T\theta^{(t)}_i(b)}\frac{ds}{s} = 1+\ln\Big(\frac{\sum_{t=1}^T\theta^{(t)}_i(b)}{\theta^{(1)}_i(b)}\Big) \le 1+\ln\Big(\frac{T}{p}\Big).$$
Noticing that $\sum_{b\in\mathcal{A}_i}\sum_{t=1}^T\theta^{(t)}_i(b)\sqrt{\frac{A\ln A}{t}}\le O(\sqrt{A\ln(A)\,T})$, we obtain
$$\mathrm{SwapRegret}^i_T \le O(1)+\sum_{b\in\mathcal{A}_i}\mathrm{Regret}^{i,b}_T \le O\Big(\sqrt{A\ln(A)\,T}+\frac{A\ln^2(NAT/\Delta\epsilon)}{\Delta}\Big).\tag{8}$$
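The logarithmic-sum bound used twice in the proof above is easy to check numerically. A small verification script (ours; the random weights are an arbitrary test sequence satisfying $\theta^{(1)}\ge p$):

```python
import numpy as np

rng = np.random.default_rng(2)
T, p = 500, 1e-3
theta = rng.uniform(p, 1.0, size=T)   # any weights with theta_1 >= p and theta_t <= 1

S = np.cumsum(theta)
lhs = np.sum(theta / S)               # sum_t theta_t / sum_{tau<=t} theta_tau
print(lhs, "<=", 1 + np.log(S[-1] / theta[0]), "<=", 1 + np.log(T / p))
```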
As in the CCE case, this form of regret does not directly imply an approximate CE. We define the following expected version of the swap-regret:
$$\mathrm{SwapRegret}^{i,\star}_T := \sup_{\phi:\mathcal{A}_i\to\mathcal{A}_i}\sum_{t=1}^T\big\langle\phi\circ\theta^{(t)}_i,\,u_i(\cdot,\theta^{(t)}_{-i})\big\rangle-\sum_{t=1}^T\big\langle\theta^{(t)}_i,\,u_i(\cdot,\theta^{(t)}_{-i})\big\rangle.$$
The next lemma bounds the difference between these two notions of swap-regret.

Lemma D.4. The following event $\Omega_4$ holds with probability at least $1-\delta$: for all $i\in[N]$,
$$\big|\mathrm{SwapRegret}^{i,\star}_T-\mathrm{SwapRegret}^i_T\big| \le O\Big(\sqrt{AT\ln\Big(\frac{AN}{\delta}\Big)}\Big).$$

Proof. Note that
$$\big|\mathrm{SwapRegret}^{i,\star}_T-\mathrm{SwapRegret}^i_T\big| = \Big|\sup_{\phi:\mathcal{A}_i\to\mathcal{A}_i}\sum_{t=1}^T\big\langle\phi\circ\theta^{(t)}_i-\theta^{(t)}_i,\,u_i(\cdot,\theta^{(t)}_{-i})\big\rangle-\sup_{\phi:\mathcal{A}_i\to\mathcal{A}_i}\sum_{t=1}^T\big\langle\phi\circ\theta^{(t)}_i-\theta^{(t)}_i,\,u^{(t)}_i\big\rangle\Big| \le \sup_{\phi:\mathcal{A}_i\to\mathcal{A}_i}\Big|\sum_{t=1}^T\big\langle\phi\circ\theta^{(t)}_i-\theta^{(t)}_i,\,u_i(\cdot,\theta^{(t)}_{-i})-u^{(t)}_i\big\rangle\Big|.$$
Notice that $\mathbb{E}[u^{(t)}_i]=u_i(\cdot,\theta^{(t)}_{-i})$ and that $u^{(t)}_i\in[-1,1]^A$. Therefore, for every $\phi:\mathcal{A}_i\to\mathcal{A}_i$,
$$\xi^\phi_t := \big\langle\phi\circ\theta^{(t)}_i-\theta^{(t)}_i,\,u_i(\cdot,\theta^{(t)}_{-i})-u^{(t)}_i\big\rangle$$
is a bounded martingale difference sequence. By the Azuma-Hoeffding inequality, for a fixed $\phi:\mathcal{A}_i\to\mathcal{A}_i$, with probability $1-\delta'$,
$$\Big|\sum_{t=1}^T\xi^\phi_t\Big| \le 2\sqrt{2T\ln\Big(\frac{2}{\delta'}\Big)}.$$
Setting $\delta'=\delta/(NA^A)$, we get that with probability $1-\delta/N$, for all $\phi:\mathcal{A}_i\to\mathcal{A}_i$,
$$\Big|\sum_{t=1}^T\xi^\phi_t\Big| \le 2\sqrt{2AT\ln\Big(\frac{2AN}{\delta}\Big)}.$$
(There are at most $A^A$ maps $\phi$, so the union bound over $\phi$ costs a factor $A^A$, i.e., a factor $A$ inside the square root.) A union bound over $i\in[N]$ completes the proof.

Proof of Theorem 12. We condition on the event $\Omega_3$ defined in Lemma D.1, the event $\Omega_4$ defined in Lemma D.4, and the success of Algorithm 1.

Correlated Equilibrium. By Lemma D.3 and Lemma D.4, we know that for all $i\in[N]$,
$$\mathrm{SwapRegret}^{i,\star}_T \le O\Big(\sqrt{A\ln(A)\,T}+\frac{A\ln^2(NAT/\Delta\epsilon)}{\Delta}+\sqrt{AT\ln\Big(\frac{AN}{\delta}\Big)}\Big).$$
Therefore, choosing
$$T=\Theta\Big(\frac{A\ln(AN/\delta)}{\epsilon^2}+\frac{A\ln^3(NA/\Delta\epsilon\delta)}{\Delta\epsilon}\Big)$$
guarantees that $\mathrm{SwapRegret}^{i,\star}_T$ is at most $\epsilon T/2$ for all $i\in[N]$. In this case the average strategy $\big(\sum_{t=1}^T\otimes_{i=1}^N\theta^{(t)}_i\big)/T$ is an $(\epsilon/2)$-CE. Finally, in the clipping step, $\|\bar\theta^{(t)}_i-\theta^{(t)}_i\|_1\le 2pA\le\frac{\epsilon}{4N}$ for all $i\in[N]$, $t\in[T]$. Thus for all $t\in[T]$ we have $\|\otimes_{i=1}^N\bar\theta^{(t)}_i-\otimes_{i=1}^N\theta^{(t)}_i\|_1\le\frac{\epsilon}{4}$, which further implies
$$\Big\|\Big(\sum_{t=1}^T\otimes_{i=1}^N\bar\theta^{(t)}_i\Big)/T-\Big(\sum_{t=1}^T\otimes_{i=1}^N\theta^{(t)}_i\Big)/T\Big\|_1 \le \frac{\epsilon}{4}.$$
Therefore the output strategy $\Pi=\big(\sum_{t=1}^T\otimes_{i=1}^N\bar\theta^{(t)}_i\big)/T$ is an $\epsilon$-CE.

Rationalizability. By Lemma D.2, if $a\in E_L\cap\mathcal{A}_i$, then $\theta^{(t)}_i(a)\le p$ for all $t\in[T]$. It follows that $\bar\theta^{(t)}_i(a)=0$, i.e., the action is not in the support of the output strategy $\Pi=\big(\sum_t\otimes_i\bar\theta^{(t)}_i\big)/T$.

Sample complexity. The total number of queries is
$$\sum_{i\in[N]}\sum_{t=1}^T A\,M^{(t)}_i \le NAT+\sum_{i\in[N]}\sum_{b\in\mathcal{A}_i}\sum_{t=1}^T\frac{16\,\theta^{(t)}_i(b)}{\Delta^2\cdot\sum_{\tau=1}^t\theta^{(\tau)}_i(b)} \le NAT+\frac{16NA^2}{\Delta^2}\cdot\ln(T/p) \le \tilde O\Big(\frac{NA^2}{\epsilon^2}+\frac{NA^2}{\Delta^2}\Big),$$
where we used the fact that
$$\sum_{t=1}^T\frac{\theta^{(t)}_i(a)}{\sum_{\tau=1}^t\theta^{(\tau)}_i(a)} \le 1+\ln\Big(\frac{T}{p}\Big).$$
Finally, adding the cost of finding one IDE-surviving action profile, $\tilde O\big(\frac{LNA}{\Delta^2}\big)$, gives the claimed rate.

E DETAILS FOR REDUCTION ALGORITHMS

In this section, we present the details of the reduction-based algorithm for finding rationalizable CE (Algorithm 5) and the analysis of both Algorithm 4 and Algorithm 5.

E.1 RATIONALIZABLE CCE VIA REDUCTION

We choose $\epsilon'=\frac{\min\{\epsilon,\Delta\}}{3}$ and $M=\big\lceil\frac{4\ln(2NA/\delta)}{\epsilon'^2}\big\rceil$.

Lemma E.1. With probability $1-\delta$, throughout the execution of Algorithm 4, for every $t$ and $i\in[N]$, $a'_i\in\mathcal{A}_i$, $|\hat u_i(a'_i,\Pi_{-i})$
1. What is the focus of the paper regarding game theory and learning algorithms?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its application to a specific topic?
3. Do you have any questions or concerns about the technical tools used in the analysis, such as their standardness or lack of emphasis on technical novelty?
4. Are there any minor issues or suggestions you have for improving the paper's clarity, such as unclear notation or missing definitions?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper studies the problem of designing no-regret learning algorithms that provably converge to Coarse Correlated Equilibria (CCEs) and Correlated Equilibria (CEs) that avoid playing actions that are non-rationalizable, i.e., actions that do not survive iterative elimination of dominated actions. The paper focuses on normal-form games with stochastic payoffs, under the bandit feedback model. First, it provides an algorithm that can be used to learn any rationalizable action profile. Then, it uses such an algorithm as a subroutine to design sample-efficient no-regret learning algorithms for CCEs and CEs. The paper provides two variants of such algorithms: one is based on a modification of the Hedge algorithm, while the other works by using any standard no-regret algorithm as a subroutine, at the expense of achieving "slightly" worse regret bounds. The paper also provides a lower bound showing that the sample complexity of the first variant of the algorithms is optimal with respect to the number of players and the number of actions.

Strengths And Weaknesses
STRENGTHS
- The paper studies an interesting problem that is largely overlooked in the literature. While most papers focus on learning equilibria in games with the goal of finding AN (approximate) equilibrium, I believe that focusing on how to learn equilibria with desirable properties is crucial for operationalizing equilibrium-learning algorithms in practice.
- The paper is well written and fairly easy to follow.
- As far as I am concerned, the technical results are sound.

WEAKNESSES
- The paper addresses a very specific topic that could be interesting only to a very narrow part of the ICLR community.
- The technical tools used to prove the results seem quite standard, or perhaps the authors did not sufficiently stress the technical novelty of their results.

MINOR ISSUES
- The conditioning inside the expectation at the beginning of Section 2 is not clear.
- The notation u_i(x_i, x_{-i}) has not been formally introduced.
- It is not clear how to set L in the algorithms; do you always have to set it equal to its maximum value N x A?
- The notation with \theta inside Algorithms 2 and 3 is not clear; I had to re-read them several times in order to grasp the meaning. I think it is better to introduce it in the main text, together with a short description in words.
- I would add formal definitions of \Delta-rationalizable \epsilon-CE and \epsilon-CCE.

Clarity, Quality, Novelty And Reproducibility
CLARITY: The paper is well written and fairly easy to follow.
QUALITY: As far as I am concerned, the technical results are sound.
NOVELTY: The problem faced by the paper has not been addressed in the literature yet. The technical novelty of the results is not clear; the authors should do a better job in "selling" their technical contributions.
REPRODUCIBILITY: I believe the proofs can be easily reproduced. There are no experiments.
ICLR
Title Learning Rationalizable Equilibria in Multiplayer Games
∗Equal contribution.
Abstract A natural goal in multi-agent learning is to learn rationalizable behavior, where players learn to avoid any Iteratively Dominated Action (IDA). However, standard no-regret-based equilibria-finding algorithms can require exponentially many samples to find such rationalizable strategies. In this paper, we first propose a simple yet sample-efficient algorithm for finding a rationalizable action profile in multi-player general-sum games under bandit feedback, which substantially improves over the results of Wu et al. (2021). We further develop algorithms with the first efficient guarantees for learning rationalizable Coarse Correlated Equilibria (CCE) and Correlated Equilibria (CE). Our algorithms incorporate several novel techniques to guarantee the elimination of IDAs and no (swap-)regret simultaneously, including a correlated exploration scheme and adaptive learning rates, which may be of independent interest. We complement our results with a sample complexity lower bound showing the sharpness of our guarantees.

1 INTRODUCTION

A common objective in multi-agent learning is to find various equilibria, such as Nash equilibria (NE), correlated equilibria (CE) and coarse correlated equilibria (CCE). Generally speaking, a player in equilibrium lacks incentive to deviate assuming conformity of other players to the same equilibrium. Equilibrium learning has been extensively studied in the literature of game theory and online learning, and no-regret-based learners can provably learn approximate CE and CCE with both computational and statistical efficiency (Stoltz, 2005; Cesa-Bianchi & Lugosi, 2006).

However, not all equilibria are created equal. As shown by Viossat & Zapechelnyuk (2013), a CCE can be entirely supported on dominated actions—actions that are worse off than some other strategy in all circumstances—which rational agents should apparently never play. Approximate CE suffers from a similar problem. As shown by Wu et al. (2021, Theorem 1), there are examples where an ϵ-CE always plays iteratively dominated actions—actions that would be eliminated when iteratively deleting strictly dominated actions—unless ϵ is exponentially small. It is also shown that standard no-regret algorithms are indeed prone to finding such seemingly undesirable solutions (Wu et al., 2021). The intrinsic reason behind this is that CCE and approximate CE may not be rationalizable, and existing algorithms can indeed fail to find rationalizable solutions.

Different from equilibria notions, rationalizability (Bernheim, 1984; Pearce, 1984) looks at the game from the perspective of a single player without knowledge of the actual strategies of other players, and only assumes common knowledge of their rationality. A rationalizable strategy will avoid strictly dominated actions, and, assuming other players have also eliminated their dominated actions, will iteratively avoid strictly dominated actions in the subgame. Rationalizability is a central solution concept in game theory (Osborne & Rubinstein, 1994) and has found applications in auctions (Battigalli & Siniscalchi, 2003) and mechanism design (Bergemann et al., 2011). If an (approximate) equilibrium only employs rationalizable actions, it would prevent irrational behavior such as playing dominated actions. Such equilibria are arguably more reasonable than unrationalizable ones, and constitute a stronger solution concept.
This motivates us to consider the following open question:

Can we efficiently learn equilibria that are also rationalizable?

Despite its fundamental role in multi-agent reasoning, rationalizability was rarely studied from a learning perspective until recently, with Wu et al. (2021) giving the first algorithm for learning rationalizable strategies from bandit feedback. However, the problem of learning rationalizable CE and CCE remains a challenging open problem. Due to the existence of unrationalizable equilibria, running standard CE or CCE learners will not guarantee rationalizable solutions. On the other hand, one cannot hope to first identify all rationalizable actions and then find an equilibrium on the subgame, since even determining whether an action is rationalizable requires exponentially many samples (see Proposition 2). Therefore, achieving rationalizability and approximate equilibria simultaneously is nontrivial and presents new algorithmic challenges.

In this work, we address the challenges above and give a positive answer to our main question. Our contributions can be summarized as follows:

• As a first step, we provide a simple yet sample-efficient algorithm for identifying a ∆-rationalizable[1] action profile under bandit feedback, using only $\tilde O\big(\frac{LNA}{\Delta^2}\big)$[2] samples in normal-form games with N players, A actions per player and a minimum elimination length of L. This greatly improves the result of Wu et al. (2021) and is tight up to logarithmic factors when L = O(1).
• Using the above algorithm as a subroutine, we develop exponential-weights-based algorithms that provably find a ∆-rationalizable ϵ-CCE using $\tilde O\big(\frac{LNA}{\Delta^2}+\frac{NA}{\epsilon^2}\big)$ samples, and a ∆-rationalizable ϵ-CE using $\tilde O\big(\frac{LNA}{\Delta^2}+\frac{NA^2}{\min\{\epsilon^2,\Delta^2\}}\big)$ samples. To the best of our knowledge, these are the first guarantees for learning rationalizable approximate CCE and CE.
• We also provide reduction schemes that find ∆-rationalizable ϵ-CCE/CE using black-box algorithms for ϵ-CCE/CE. Despite having slightly worse rates, these algorithms can directly leverage progress in equilibria finding, which may be of independent interest.

1.1 RELATED WORK

Rationalizability and iterative dominance elimination. Rationalizability (Bernheim, 1984; Pearce, 1984) is a notion that captures rational reasoning in games and relaxes Nash Equilibrium. Rationalizability is closely related to the iterative elimination of dominated actions, which has been a focus of game theory research since the 1950s (Luce & Raiffa, 1957). It can be shown that an action is rationalizable if and only if it survives iterative elimination of strictly dominated actions[3] (Pearce, 1984). There is also experimental evidence supporting iterative elimination of dominated strategies as a model of human reasoning (Camerer, 2011).

Equilibria learning in games. There is a rich literature on applying online learning algorithms to learning equilibria in games. It is well known that if all agents have no regret, the resulting empirical average would be an ϵ-CCE (Young, 2004), while if all agents have no swap-regret, the resulting empirical average would be an ϵ-CE (Hart & Mas-Colell, 2000; Cesa-Bianchi & Lugosi, 2006). Later work continuing this line of research includes faster convergence rates (Syrgkanis et al., 2015; Chen & Peng, 2020; Daskalakis et al., 2021), last-iterate convergence guarantees (Daskalakis & Panageas, 2018; Wei et al., 2020), and extensions to extensive-form games (Celli et al., 2020; Bai et al., 2022b;a; Song et al., 2022) and Markov games (Song et al., 2021; Jin et al., 2021).
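The classical no-regret-to-CCE connection cited above is easy to see in simulation. Below is a small self-contained demo (our own toy example; the game, step size, and horizon are arbitrary choices, not from the paper): two Hedge learners in self-play, whose average joint play has a shrinking CCE gap.

```python
import numpy as np

rng = np.random.default_rng(0)
payoffs = [rng.uniform(0, 1, (3, 3)) for _ in range(2)]  # u_i(a0, a1) in [0, 1]

T, A = 20000, 3
cum = [np.zeros(A), np.zeros(A)]
joint = np.zeros((A, A))                    # empirical average of the joint play

for t in range(1, T + 1):
    eta = np.sqrt(np.log(A) / t)
    strat = []
    for c in cum:
        w = np.exp(eta * (c - c.max()))     # exponential weights, stabilized
        strat.append(w / w.sum())
    joint += np.outer(strat[0], strat[1]) / T
    cum[0] += payoffs[0] @ strat[1]         # full-information Hedge updates
    cum[1] += payoffs[1].T @ strat[0]

marg0, marg1 = joint.sum(axis=1), joint.sum(axis=0)
gap0 = (payoffs[0] @ marg1).max() - (joint * payoffs[0]).sum()
gap1 = (payoffs[1].T @ marg0).max() - (joint * payoffs[1]).sum()
print(f"CCE gaps: {gap0:.4f}, {gap1:.4f}")  # shrink like O(sqrt(log A / T))
```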
Computational and learning aspects of rationalizability. Despite its conceptual importance, rationalizability and iterative dominance elimination are not well studied from a computational or learning perspective. For iterative strict dominance elimination in two-player games, Knuth et al. (1988) provided a cubic-time algorithm and proved that the problem is P-complete. The weak dominance version of the problem was proven to be NP-complete by Conitzer & Sandholm (2005). Hofbauer & Weibull (1996) showed that in a class of learning dynamics which includes replicator dynamics — the continuous-time variant of Follow-The-Regularized-Leader (FTRL) — all iteratively strictly dominated actions vanish over time, while Mertikopoulos & Moustakas (2010) proved similar results for stochastic replicator dynamics; however, neither work provides finite-time guarantees. Cohen et al. (2017) proved that Hedge eliminates dominated actions in finite time, but did not extend their results to the more challenging case of iteratively dominated actions. The most closely related work in the literature is that of Wu et al. (2021) on learning rationalizable actions, who proposed the Exp3-DH algorithm to find a strategy mostly supported on rationalizable actions at a polynomial rate. Our Algorithm 1 accomplishes the same task with a faster rate, while our Algorithms 2 & 3 deal with the more challenging problems of finding ϵ-CE/CCE that are also rationalizable. Although Exp3-DH is based on a no-regret algorithm, it does not enjoy regret or weighted-regret guarantees and thus does not provably find rationalizable equilibria.

[1] An action is ∆-rationalizable if it survives iterative elimination of ∆-dominated actions; cf. Definition 1.
[2] Throughout this paper, we use $\tilde O$ to suppress logarithmic factors in $N$, $A$, $L$, $\frac{1}{\Delta}$, $\frac{1}{\delta}$, and $\frac{1}{\epsilon}$.
[3] For this equivalence to hold, we need to allow dominance by mixed strategies, and correlated beliefs when there are more than two players. These conditions are met in the setting of this work.

2 PRELIMINARY

An $N$-player normal-form game involves $N$ players whose action spaces are denoted by $\mathcal{A}=\mathcal{A}_1\times\cdots\times\mathcal{A}_N$, and is defined by utility functions $u_1,\cdots,u_N:\mathcal{A}\to[0,1]$. Let $A=\max_{i\in[N]}|\mathcal{A}_i|$ denote the maximum number of actions per player, $x_i$ denote a mixed strategy of the $i$-th player (i.e., a distribution over $\mathcal{A}_i$), and $x_{-i}$ denote a (correlated) mixed strategy of the other players (i.e., a distribution over $\prod_{j\ne i}\mathcal{A}_j$). We further denote $u_i(x_i,x_{-i}):=\mathbb{E}_{a_i\sim x_i,\,a_{-i}\sim x_{-i}}\,u_i(a_i,a_{-i})$. We use $\Delta(S)$ to denote the set of probability distributions over the set $S$.

Learning from bandit feedback. We consider the bandit feedback setting where in each round, each player $i\in[N]$ chooses an action $a_i\in\mathcal{A}_i$ and then observes a random feedback $U_i\in[0,1]$ such that $\mathbb{E}[U_i\,|\,a_1,a_2,\cdots,a_N]=u_i(a_1,a_2,\cdots,a_N)$.

2.1 RATIONALIZABILITY

An action $a\in\mathcal{A}_i$ is said to be rationalizable if it could be the best response to some (possibly correlated) belief about the other players' strategies, assuming that they are also rational. In other words, the set of rationalizable actions is obtained by iteratively removing actions that could never be a best response.
For finite normal-form games, this is in fact equivalent to the iterative elimination of strictly dominated actions[4] (Osborne & Rubinstein, 1994, Lemma 60.1).

Definition 1 (∆-Rationalizability).[5] Define
$$E_1:=\bigcup_{i=1}^N\big\{a\in\mathcal{A}_i:\exists x\in\Delta(\mathcal{A}_i),\ \forall a_{-i},\ u_i(a,a_{-i})\le u_i(x,a_{-i})-\Delta\big\},$$
which is the set of ∆-dominated actions for all players. Further define
$$E_l:=\bigcup_{i=1}^N\big\{a\in\mathcal{A}_i:\exists x\in\Delta(\mathcal{A}_i),\ \forall a_{-i}\ \text{s.t.}\ a_{-i}\cap E_{l-1}=\emptyset,\ u_i(a,a_{-i})\le u_i(x,a_{-i})-\Delta\big\},$$
which is the set of actions that would be eliminated by the $l$-th round. Define $L=\inf\{l:E_{l+1}=E_l\}$ as the minimum elimination length, and $E_L$ as the set of ∆-iteratively dominated actions (∆-IDAs). Actions in $\bigcup_{i=1}^N\mathcal{A}_i\setminus E_L$ are said to be ∆-rationalizable.

Notice that $E_1\subseteq\cdots\subseteq E_L=E_{L+1}$. Here ∆ plays a similar role to the reward gap in best-arm identification for stochastic multi-armed bandits. We will henceforth use ∆-rationalizability and survival of $L$ rounds of iterative dominance elimination (IDE) interchangeably.[6] Since one cannot eliminate all the actions of a player, $\big|\bigcup_{i=1}^N\mathcal{A}_i\setminus E_L\big|\ge N$, which further implies $L\le N(A-1)<NA$.

2.2 EQUILIBRIA IN GAMES

We consider three common learning objectives, namely Nash Equilibrium (NE), Correlated Equilibrium (CE) and Coarse Correlated Equilibrium (CCE).

[4] See, e.g., the Diamond-In-the-Rough (DIR) games in Wu et al. (2021, Definition 2) for a concrete example of iterative dominance elimination.
[5] Here we slightly abuse the notation and use ∆ to refer to both the gap and the probability simplex.
[6] Alternatively, one can define ∆-rationalizability by the iterative elimination of actions that are never a ∆-best response, which is mathematically equivalent to Definition 1 (see Appendix A.1).

Definition 2 (Nash Equilibrium). A strategy profile $(x_1,\cdots,x_N)$ is an ϵ-Nash equilibrium if $u_i(x_i,x_{-i})\ge u_i(a,x_{-i})-\epsilon$ for all $a\in\mathcal{A}_i$ and all $i\in[N]$.

Definition 3 (Correlated Equilibrium). A correlated strategy $\Pi\in\Delta(\mathcal{A})$ is an ϵ-correlated equilibrium if $\forall i\in[N]$, $\forall\phi:\mathcal{A}_i\to\mathcal{A}_i$,
$$\sum_{a_i\in\mathcal{A}_i,\,a_{-i}\in\mathcal{A}_{-i}}\Pi(a_i,a_{-i})\,u_i(a_i,a_{-i})\ \ge\ \sum_{a_i\in\mathcal{A}_i,\,a_{-i}\in\mathcal{A}_{-i}}\Pi(a_i,a_{-i})\,u_i(\phi(a_i),a_{-i})-\epsilon.$$

Definition 4 (Coarse Correlated Equilibrium). A correlated strategy $\Pi\in\Delta(\mathcal{A})$ is an ϵ-CCE if $\forall i\in[N]$, $\forall a'\in\mathcal{A}_i$,
$$\sum_{a_i\in\mathcal{A}_i,\,a_{-i}\in\mathcal{A}_{-i}}\Pi(a_i,a_{-i})\,u_i(a_i,a_{-i})\ \ge\ \sum_{a_i\in\mathcal{A}_i,\,a_{-i}\in\mathcal{A}_{-i}}\Pi(a_i,a_{-i})\,u_i(a',a_{-i})-\epsilon.$$

When ϵ = 0, the above definitions give exact Nash equilibrium, correlated equilibrium, and coarse correlated equilibrium, respectively. It is well known that ϵ-NE are ϵ-CE, and ϵ-CE are ϵ-CCE. Furthermore, we call an ϵ-CCE/CE that only plays ∆-rationalizable actions a.s. a ∆-rationalizable ϵ-CCE/CE.

2.3 CONNECTION BETWEEN EQUILIBRIA AND RATIONALIZABILITY

It is known that all actions in the support of an exact CE are rationalizable (Osborne & Rubinstein, 1994, Lemma 56.2). However, one can easily construct an exact CCE that is supported on dominated (hence unrationalizable) actions (see, e.g., Viossat & Zapechelnyuk (2013, Fig. 3)). One might be tempted to suggest that running a CE solver immediately finds a CE (and hence CCE) that is also rationalizable. However, the connection between CE and rationalizability becomes quite different when it comes to approximate equilibria, which are inevitable in the presence of noise. As shown by Wu et al. (2021, Theorem 1), an ϵ-CE can be entirely supported on iteratively dominated actions unless $\epsilon=O(2^{-A})$. In other words, rationalizability is not guaranteed by running an approximate CE solver except at extremely high accuracy. Therefore, finding ϵ-CE and CCE that are simultaneously rationalizable remains a challenging open problem.
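To make Definition 4 concrete, the following small checker (our own illustration; the tensor layout is an assumption, not the paper's notation) computes the smallest ϵ for which a given correlated strategy Π is an ϵ-CCE.

```python
import numpy as np

def cce_gap(Pi, utils):
    """Smallest eps for which the joint distribution Pi is an eps-CCE.
    Pi has shape (A_1, ..., A_N); utils[i] has the same shape and holds u_i."""
    gap = 0.0
    for i, U in enumerate(utils):
        value = float((Pi * U).sum())        # player i's expected utility under Pi
        marg = Pi.sum(axis=i)                # marginal over the other players' profiles
        for a in range(Pi.shape[i]):
            dev = float((marg * np.take(U, a, axis=i)).sum())  # deviate to fixed a
            gap = max(gap, dev - value)
    return gap

rng = np.random.default_rng(0)
utils = [rng.uniform(0, 1, (2, 3)) for _ in range(2)]
Pi = rng.dirichlet(np.ones(6)).reshape(2, 3)
print("CCE gap:", cce_gap(Pi, utils))
```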
Since NE is a subset of CE, all actions in the support of an (exact) NE are also rationalizable. Unlike for approximate CE, for $\epsilon<\mathrm{poly}(\Delta,1/N,1/A)$ one can show that any ϵ-Nash equilibrium is still mostly supported on rationalizable actions.

Proposition 1. If $x^*=(x^*_1,\cdots,x^*_N)$ is an ϵ-Nash with $\epsilon<\frac{\Delta^2}{24N^2A}$, then for all $i$, $\Pr_{a\sim x^*_i}[a\in E_L]\le\frac{2L\epsilon}{\Delta}$.

Therefore, for two-player zero-sum games, it is possible to run an approximate NE solver and automatically find a rationalizable ϵ-NE. However, this method induces a rather slow rate[7], and we will provide a much more efficient algorithm for finding rationalizable ϵ-NE in Section 4.

3 LEARNING RATIONALIZABLE ACTION PROFILES

In order to learn a rationalizable CE/CCE, one might suggest identifying the set of all rationalizable actions, and then learning a CE or CCE on this subgame. Unfortunately, as shown by Proposition 2, even the simpler problem of deciding whether one single action is rationalizable is statistically hard.

Proposition 2. For ∆ < 0.1, any algorithm that correctly decides whether an action is ∆-rationalizable with 0.9 probability needs $\Omega(A^{N-1}\Delta^{-2})$ samples.

This negative result motivates us to consider an easier task: can we at least find one rationalizable action profile sample-efficiently? Formally, we say an action profile $(a_1,\ldots,a_N)$ is rationalizable if for all $i\in[N]$, $a_i$ is a rationalizable action. This is arguably one of the most fundamental tasks regarding rationalizability. For mixed-strategy dominance-solvable games (Alon et al., 2021), the unique rationalizable action profile is both the unique NE and the unique CE of the game. Therefore this easier task per se is still of practical importance.

In this section we answer this question in the affirmative. We provide a sample-efficient algorithm which finds a rationalizable action profile using only $\tilde O\big(\frac{LNA}{\Delta^2}\big)$ samples. This algorithm will also serve as an important subroutine for the algorithms finding rationalizable CCE/CE in the later sections.

[7] For two-player zero-sum games, the marginals of any CCE form an NE, so NE can be found efficiently. This is not true for general games, where finding NE is computationally hard and takes $\Omega(2^N)$ samples.

The intuition behind the algorithm is simple: if an action profile $a_{-i}$ can survive $l$ rounds of IDE, then its best response $a_i$ (i.e., $\operatorname{argmax}_{a\in\mathcal{A}_i}u_i(a,a_{-i})$) can survive at least $l+1$ rounds of IDE, since the action $a_i$ can only be eliminated after some actions in $a_{-i}$ are eliminated. Concretely, we start from an arbitrary action profile $(a^{(0)}_1,\ldots,a^{(0)}_N)$. In each round $l\in[L]$, we compute the (empirical) best response to $a^{(l-1)}_{-i}$ for each $i\in[N]$, and use those best responses to construct a new action profile $(a^{(l)}_1,\ldots,a^{(l)}_N)$. By constructing iterative best responses, we end up with an action profile that can survive $L$ rounds of IDE, which means surviving any number of rounds of IDE according to the definition of $L$. The full algorithm is presented in Algorithm 1.

Algorithm 1 Iterative Best Response
1: Initialization: choose $a^{(0)}_i\in\mathcal{A}_i$ arbitrarily for all $i\in[N]$
2: for $l=1,\cdots,L$ do
3:   for $i\in[N]$ do
4:     For all $a\in\mathcal{A}_i$, play $(a,a^{(l-1)}_{-i})$ $M$ times and compute player $i$'s average payoff $\hat u_i(a,a^{(l-1)}_{-i})$
5:     Set $a^{(l)}_i\leftarrow\operatorname{argmax}_{a\in\mathcal{A}_i}\hat u_i(a,a^{(l-1)}_{-i})$ // compute the empirical best response
6: return $(a^{(L)}_1,\cdots,a^{(L)}_N)$
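A minimal runnable sketch of Algorithm 1, assuming access to a `sample_payoff(i, profile)` bandit-feedback oracle; the oracle and the toy game below are our own stand-ins. (Theorem 3 below sets $M=\lceil 16\ln(LNA/\delta)/\Delta^2\rceil$.)

```python
import numpy as np

def iterative_best_response(sample_payoff, action_sets, L, M):
    """Algorithm 1: iterate empirical best responses for L rounds."""
    profile = [acts[0] for acts in action_sets]            # arbitrary a^(0)
    for _ in range(L):
        new_profile = list(profile)
        for i, acts in enumerate(action_sets):
            u_hat = []
            for a in acts:                                 # estimate u_i(a, a_{-i}^(l-1))
                trial = list(profile)
                trial[i] = a
                u_hat.append(np.mean([sample_payoff(i, trial) for _ in range(M)]))
            new_profile[i] = acts[int(np.argmax(u_hat))]   # empirical best response
        profile = new_profile
    return profile

# Toy 2-player game; the noisy oracle stands in for bandit feedback.
rng = np.random.default_rng(0)
U = [rng.uniform(0, 1, (4, 4)) for _ in range(2)]
oracle = lambda i, prof: U[i][prof[0], prof[1]] + rng.normal(0, 0.1)
print(iterative_best_response(oracle, [list(range(4)), list(range(4))], L=3, M=200))
```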
For Algorithm 1 we have the following theoretical guarantee.

Theorem 3. With $M=\big\lceil\frac{16\ln(LNA/\delta)}{\Delta^2}\big\rceil$, with probability $1-\delta$, Algorithm 1 returns an action profile that is ∆-rationalizable, using a total of $\tilde O\big(\frac{LNA}{\Delta^2}\big)$ samples.

Wu et al. (2021) provide the first polynomial sample complexity results for finding rationalizable action profiles. They prove that the Exp3-DH algorithm is able to find a distribution with a $1-\zeta$ fraction supported on ∆-rationalizable actions using $\tilde O\big(\frac{L^{1.5}N^3A^{1.5}}{\zeta^3\Delta^3}\big)$ samples under bandit feedback.[8] Compared to their result, our sample complexity bound $\tilde O\big(\frac{LNA}{\Delta^2}\big)$ has more favorable dependence on all problem parameters, and our algorithm outputs a distribution that is fully supported on rationalizable actions (and thus has no dependence on ζ).

We further complement Theorem 3 with a sample complexity lower bound showing that the linear dependencies on N and A are optimal. This lower bound suggests that the $\tilde O\big(\frac{LNA}{\Delta^2}\big)$ upper bound is tight up to logarithmic factors when L = O(1), and we conjecture that this is true for general L.

Theorem 4. Even for games with L ≤ 2, any algorithm that returns a ∆-rationalizable action profile with 0.9 probability needs $\Omega\big(\frac{NA}{\Delta^2}\big)$ samples.

Conjecture 5. The minimax optimal sample complexity for finding a ∆-rationalizable action profile is $\Theta\big(\frac{LNA}{\Delta^2}\big)$ for games with minimum elimination length L.

[8] Wu et al. (2021)'s result allows trade-offs between variables via different choices of algorithmic parameters. However, a $\zeta^{-1}\Delta^{-3}$ factor is unavoidable regardless of the choice of parameters.

4 LEARNING RATIONALIZABLE COARSE CORRELATED EQUILIBRIA (CCE)

In this section we introduce our algorithm for efficiently learning rationalizable CCEs. The high-level idea is to run no-regret Hedge-style algorithms for every player, while constraining the strategy inside the rationalizable region. Our algorithm is motivated by the fact that the probability of playing a dominated action decays exponentially over time in the Hedge algorithm for adversarial bandits under full-information feedback (Cohen et al., 2017). The full algorithm description is provided in Algorithm 2, and here we explain several key components of our algorithm design.

Correlated Exploration Scheme. In the bandit feedback setting, standard exponential-weights algorithms such as EXP3.IX require importance sampling and biased estimators to derive a high-probability regret bound (Neu, 2015). However, such bias could cause a dominating strategy to lose its advantage. In our algorithm we adopt a correlated exploration scheme, which essentially simulates full-information feedback using NA bandit samples. Specifically, at every time step t,
It is important to note that such correlated scheme does not require any communication between the players—the players can schedule the whole process before the game starts. Rationalizable Initialization and Variance Reduction. We use Algorithm 1, which learns a rationalizable action profile, to give the strategy for the first round. By carefully preserving the disadvantage of any iteratively dominated action, we keep the iterates inside the rationalizable region throughout the whole learning process. To ensure this for every iterate with high probability, a minibatch is used to reduce the variance of the estimator. Clipping. In the final step, we clip all actions with small probabilities, so that iteratively dominated actions do not appear in the output. The threshold is small enough to not affect the ϵ-CCE guarantee. 4.1 THEORETICAL GUARANTEE In Algorithm 2, we choose parameters in the following manner: ηt = max {√ lnA t , 4 ln(1/p) ∆t } ,Mt = ⌈ 64 ln(ANT/δ) ∆2t ⌉ , and p = min{ϵ,∆}8AN . (1) Note that our learning rate can be bigger than the standard learning rate in FTRL algorithms when t is small. The purpose is to guarantee the rationalizability of the iterates from the beginning of the learning process. As will be shown in the proof, this larger learning rate will not hurt the final rate. We now state the theoretical guarantee for Algorithm 2. Theorem 6. With parameters chosen as in Eq.(1) , after T = Õ ( 1 ϵ2 + 1 ϵ∆ ) rounds, with probability 1− 3δ, the output strategy of Algorithm 2 is a ∆-rationalizable ϵ-CCE.The total sample complexity is Õ ( LNA ∆2 + NA ϵ2 ) . Remark 7. Due to our lower bound (Theorem 4), an Õ(NA∆2 ) term is unavoidable since learning a rationalizable action profile is an easier task than learning rationalizable CCE. Based on our Conjecture 5, the additional L dependency is also likely to be inevitable. On the other hand, learning an ϵ-CCE alone only requires Õ( Aϵ2 ) samples, where as in our bound we have a larger Õ( NA ϵ2 ) term. The extra N factor is a consequence of our correlated exploration scheme in which only one player explores at a time. Removing this N factor might require more sophisticated exploration methods and utility estimators, which we leave as future work. Remark 8. Evoking Algorithm 1 requires knowledge of L, which may not be available in practice. In that case, an estimate L′ may be used in its stead. If L′ ≥ L (for instance when L′ = NA), we can recover the current rationalizability guarantee, albeit with a larger sample complexity scaling with L′. If L′ < L, we can still guarantee that the output policy avoids actions in EL′ , which are, informally speaking, actions that would be eliminated with L′ levels of reasoning. 4.1.1 OVERVIEW OF THE ANALYSIS We give an overview of our analysis of Algorithm 2 below. The full proof is deferred to Appendix C. Step 1: Ensure rationalizability. We will first show that rationalizability is preserved at each iterate, i.e., actions in EL will be played with low probability across all iterates. Formally, Lemma 9. With probability at least 1− 2δ, for all t ∈ [T ] and all i ∈ [N ], ai ∈ Ai ∩ EL, we have θ (t) i (ai) ≤ p. Here p is defined in (1). Lemma 9 guarantees that, after the clipping in Line 7 of Algorithm 2, the output correlated strategy be ∆-rationalizable. We proceed to explain the main idea for proving Lemma 9. 
4.1.1 OVERVIEW OF THE ANALYSIS

We give an overview of our analysis of Algorithm 2 below. The full proof is deferred to Appendix C.

Step 1: Ensure rationalizability. We first show that rationalizability is preserved at each iterate, i.e., actions in $E_L$ are played with low probability across all iterates. Formally,

Lemma 9. With probability at least $1-2\delta$, for all $t\in[T]$ and all $i\in[N]$, $a_i\in\mathcal{A}_i\cap E_L$, we have $\theta^{(t)}_i(a_i)\le p$. Here $p$ is defined in (1).

Lemma 9 guarantees that, after the clipping in Line 7 of Algorithm 2, the output correlated strategy is ∆-rationalizable. We proceed to explain the main idea for proving Lemma 9. A key observation is that the set of rationalizable actions, $\bigcup_{i=1}^N\mathcal{A}_i\setminus E_L$, is closed under best response—for the $i$-th player, as long as the other players continue to play actions in $\bigcup_{j\ne i}\mathcal{A}_j\setminus E_L$, actions in $\mathcal{A}_i\cap E_L$ will suffer an excess loss each round in an exponential-weights-style algorithm. Concretely, for any $a_{-i}\in\big(\prod_{j\ne i}\mathcal{A}_j\big)\setminus E_L$ and any iteratively dominated action $a_i\in\mathcal{A}_i\cap E_L$, there always exists $x_i\in\Delta(\mathcal{A}_i)$ such that $u_i(x_i,a_{-i})\ge u_i(a_i,a_{-i})+\Delta$. With our choice of $p$ in Eq. (1), if the other players choose their actions from $\bigcup_{j\ne i}\mathcal{A}_j\setminus E_L$ with probability $1-pAN$, we can still guarantee an excess loss of $\Omega(\Delta)$. It follows that
$$\sum_{\tau=1}^t u^{(\tau)}_i(x_i)-\sum_{\tau=1}^t u^{(\tau)}_i(a_i)\ \ge\ \Omega(t\Delta)-\text{Sampling Noise}.$$
However, this excess loss can be obscured by the noise from bandit feedback when $t$ is small. Note that it is crucial that the statement of Lemma 9 holds for all $t$, due to the inductive nature of the proof. As a solution, we use a minibatch of size $M_t=\tilde O\big(\big\lceil\frac{1}{\Delta^2 t}\big\rceil\big)$ in the $t$-th round to reduce the variance of the payoff estimator $u^{(t)}_i$. The noise term can now be upper bounded via Azuma-Hoeffding as
$$\text{Sampling Noise}\ \le\ \tilde O\Big(\sqrt{\sum_{\tau=1}^t\frac{1}{M_\tau}}\Big)\ \le\ O(t\Delta).$$
Combining this with our choice of the learning rate $\eta_t$ gives
$$\eta_t\Big(\sum_{\tau=1}^t u^{(\tau)}_i(x_i)-\sum_{\tau=1}^t u^{(\tau)}_i(a_i)\Big)\ \gg\ 1.\tag{2}$$
By the update rule of the Hedge algorithm, this implies that $\theta^{(t+1)}_i(a_i)\le p$, which enables us to complete the proof of Lemma 9 via induction on $t$.

Step 2: Combine with no-regret guarantees. Next, we prove that the output strategy is an ϵ-CCE. For a player $i\in[N]$, the regret is defined as $\mathrm{Regret}^i_T=\max_{\theta\in\Delta(\mathcal{A}_i)}\sum_{t=1}^T\langle u^{(t)}_i,\theta-\theta^{(t)}_i\rangle$. We obtain the following regret bound by the standard analysis of FTRL with changing learning rates.

Lemma 10. For all $i\in[N]$, $\mathrm{Regret}^i_T\le\tilde O\big(\sqrt T+\frac{1}{\Delta}\big)$.

Here the additive $1/\Delta$ term is the result of our larger $\tilde O(\Delta^{-1}t^{-1})$ learning rate for small $t$. It follows from Lemma 10 that $T=\tilde O\big(\frac{1}{\epsilon^2}+\frac{1}{\Delta\epsilon}\big)$ suffices to guarantee that the correlated strategy $\frac{1}{T}\big(\sum_{t=1}^T\otimes_{i=1}^N\theta^{(t)}_i\big)$ is an $(\epsilon/2)$-CCE. Since $pNA=O(\epsilon)$, the clipping step only minorly affects the CCE guarantee, and the clipped strategy $\frac{1}{T}\big(\sum_{t=1}^T\otimes_{i=1}^N\bar\theta^{(t)}_i\big)$ is an ϵ-CCE.

4.2 APPLICATION TO LEARNING RATIONALIZABLE NASH EQUILIBRIUM

Algorithm 2 can also be applied to two-player zero-sum games to learn a rationalizable ϵ-NE efficiently. Note that in two-player zero-sum games, the marginal distribution of an ϵ-CCE is guaranteed to be a 2ϵ-Nash (see, e.g., Proposition 9 in Bai et al. (2020)). Hence a direct application of Algorithm 2 to a zero-sum game gives the following sample complexity bound.

Corollary 11. In a two-player zero-sum game, the sample complexity for finding a ∆-rationalizable ϵ-Nash with Algorithm 2 is $\tilde O\big(\frac{LA}{\Delta^2}+\frac{A}{\epsilon^2}\big)$.

This result improves over a direct application of Proposition 1, which gives $\tilde O\big(\frac{A^3}{\Delta^4}+\frac{A}{\epsilon^2}\big)$ sample complexity and produces an ϵ-Nash that could still take unrationalizable actions with positive probability.
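Corollary 11 rests on the standard fact that the marginals of an ϵ-CCE in a zero-sum game form an O(ϵ)-Nash. A tiny sketch (our own illustration) of extracting the marginals and measuring their exploitability:

```python
import numpy as np

def exploitability(U, x, y):
    """Sum of both players' best-response gains in a zero-sum game (payoff U to player 1)."""
    return float((U @ y).max() - (x @ U).min())

rng = np.random.default_rng(0)
U = rng.uniform(-1, 1, (3, 3))
Pi = rng.dirichlet(np.ones(9)).reshape(3, 3)   # some joint (correlated) distribution
x, y = Pi.sum(axis=1), Pi.sum(axis=0)          # marginals over each player's actions
print("exploitability of the marginals:", exploitability(U, x, y))
```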
5 LEARNING RATIONALIZABLE CORRELATED EQUILIBRIUM

In order to extend our results on ϵ-CCE to ϵ-CE, a natural approach would be to augment Algorithm 2 with the celebrated Blum-Mansour reduction (Blum & Mansour, 2007) from swap regret to external regret. In this reduction, one maintains $A$ instances of a no-regret algorithm $\{\mathrm{Alg}_1,\cdots,\mathrm{Alg}_A\}$. In iteration $t$, the player stacks the recommendations of the $A$ algorithms as a matrix, denoted by $\hat\theta^{(t)}\in\mathbb{R}^{A\times A}$, and computes its eigenvector $\theta^{(t)}$ as the randomized strategy in round $t$. After observing the actual payoff vector $u^{(t)}$, it passes the weighted payoff vector $\theta^{(t)}(a)u^{(t)}$ to algorithm $\mathrm{Alg}_a$ for each $a$. In this section, we focus on a fixed player $i$ and omit the subscript $i$ when it is clear from the context.

Applying this reduction to Algorithm 2 directly, however, would fail to preserve rationalizability, since the weighted payoff vector $\theta^{(t)}(a)u^{(t)}$ admits a smaller utility gap $\theta^{(t)}(a)\Delta$. Specifically, consider an action $b$ dominated by a mixed strategy $x$. In the payoff estimate of instance $a$,
$$\sum_{\tau=1}^t\theta^{(\tau)}(a)\big(u^{(\tau)}(x)-u^{(\tau)}(b)\big)\ \gtrsim\ \Delta\sum_{\tau=1}^t\theta^{(\tau)}(a)-\sqrt{\sum_{\tau=1}^t\frac{1}{M^{(\tau)}}}\ \not\ge\ 0,\tag{3}$$
which means that we cannot guarantee the elimination of IDAs in every round as in Eq. (2).

In Algorithm 3, we address this by making $\sum_{\tau=1}^t\theta^{(\tau)}(a)$ play the role of $t$, tracking the progress of each no-regret instance separately. In time step $t$, we compute the average payoff vector $u^{(t)}$ based on $M^{(t)}$ samples; then, as in the Blum-Mansour reduction, we update the $A$ instances of Hedge with weighted payoffs $\theta^{(t)}(a)u^{(t)}$ and use the eigenvector of $\hat\theta$ as the strategy for the next round. The key detail here is our choice of parameters, which adapts to the past strategies $\{\theta^{(\tau)}\}_{\tau=1}^t$:
$$M^{(t)}_i:=\Big\lceil\max_a\frac{64\,\theta^{(t)}_i(a)}{\Delta^2\sum_{\tau=1}^t\theta^{(\tau)}_i(a)}\Big\rceil,\qquad \eta^a_{t,i}:=\max\Big\{\frac{4\ln(1/p)}{\Delta\sum_{\tau=1}^t\theta^{(\tau)}_i(a)},\ \frac{\sqrt{A\ln A}}{\sqrt t}\Big\},\qquad p=\frac{\min\{\epsilon,\Delta\}}{8AN}.\tag{4}$$
Compared to Eq. (1), we are essentially replacing $t$ with the adaptive quantity $\sum_{\tau=1}^t\theta^{(\tau)}(a)$. We can now improve (3) to
$$\sum_{\tau=1}^t\theta^{(\tau)}(a)\big(u^{(\tau)}(x)-u^{(\tau)}(b)\big)\ \gtrsim\ \Delta\sum_{\tau=1}^t\theta^{(\tau)}(a)-\sqrt{\sum_{\tau=1}^t\frac{\theta^{(\tau)}(a)^2}{M^{(\tau)}}}\ \gtrsim\ \Delta\sum_{\tau=1}^t\theta^{(\tau)}(a).\tag{5}$$
This, together with our choice of $\eta^a_t$, allows us to ensure the rationalizability of every iterate. The full algorithm is presented in Algorithm 3.

Algorithm 3 Adaptive Hedge for Rationalizable ϵ-CE
1: $(a^\star_1,\cdots,a^\star_N)\leftarrow$ Algorithm 1
2: For all $i\in[N]$, initialize $\theta^{(1)}_i\leftarrow(1-|\mathcal{A}_i|p)\,\mathbb{1}[\cdot=a^\star_i]+p\,\mathbf{1}$
3: for $t=1,2,\ldots,T$ do
4:   for $i=1,2,\ldots,N$ do
5:     For all $a\in\mathcal{A}_i$, play $(a,\theta^{(t)}_{-i})$ $M^{(t)}_i$ times and compute player $i$'s average payoff $u^{(t)}_i(a)$
6:     For all $b\in\mathcal{A}_i$, set $\hat\theta^{(t+1)}_i(\cdot|b)\propto\exp\big(\eta^b_{t,i}\sum_{\tau=1}^t u^{(\tau)}_i(\cdot)\,\theta^{(\tau)}_i(b)\big)$
7:     Find $\theta^{(t+1)}_i\in\Delta(\mathcal{A}_i)$ such that $\theta^{(t+1)}_i(a)=\sum_{b\in\mathcal{A}_i}\hat\theta^{(t+1)}_i(a|b)\,\theta^{(t+1)}_i(b)$
8: For all $t\in[T]$ and $i\in[N]$, eliminate all actions in $\theta^{(t)}_i$ with probability smaller than $p$, then renormalize the vector to the simplex as $\bar\theta^{(t)}_i$
9: output: $\big(\sum_{t=1}^T\otimes_{i=1}^N\bar\theta^{(t)}_i\big)/T$

We proceed to our theoretical guarantee for Algorithm 3. The analysis framework is largely similar to that of Algorithm 2: our choice of $M^{(t)}_i$ is sufficient to ensure ∆-rationalizability via the Azuma-Hoeffding inequality, while a swap-regret analysis of the algorithm proves that the average (clipped) strategy is indeed an ϵ-CE. The full proof is deferred to Appendix D.

Theorem 12. With the parameters in Eq. (4), after $T=\tilde O\big(\frac{A}{\epsilon^2}+\frac{A}{\Delta^2}\big)$ rounds, with probability $1-3\delta$, the output strategy of Algorithm 3 is a ∆-rationalizable ϵ-CE. The total sample complexity is $\tilde O\big(\frac{LNA}{\Delta^2}+\frac{NA^2}{\min\{\Delta^2,\epsilon^2\}}\big)$.

Compared to Theorem 6, our second term has an additional A factor, which is quite reasonable considering that algorithms for learning ϵ-CE take $\tilde O(A^2\epsilon^{-2})$ samples, also A times larger than the ϵ-CCE rate.
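Line 7 of Algorithm 3 asks for the stationary distribution of the column-stochastic matrix $\hat\theta^{(t+1)}(\cdot|\cdot)$, i.e., its eigenvector for eigenvalue 1. A minimal sketch (our own) via power iteration:

```python
import numpy as np

def stationary(theta_hat, iters=1000):
    """Find theta with theta[a] = sum_b theta_hat[a, b] * theta[b].

    theta_hat[:, b] is the distribution recommended by expert b, so theta_hat
    is column-stochastic and theta is its eigenvalue-1 eigenvector."""
    A = theta_hat.shape[0]
    theta = np.ones(A) / A
    for _ in range(iters):
        theta = theta_hat @ theta
        theta /= theta.sum()   # guard against numerical drift
    return theta

rng = np.random.default_rng(0)
M = rng.uniform(0.1, 1.0, (4, 4))
M /= M.sum(axis=0)             # make each column a distribution
theta = stationary(M)
print(np.allclose(M @ theta, theta))  # True: theta is the fixed point
```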
6 REDUCTION-BASED ALGORITHMS

While Algorithms 2 and 3 make use of one specific no-regret algorithm, namely Hedge (exponential weights), in this section we show that arbitrary algorithms for finding CCE/CE can be augmented to find rationalizable CCE/CE. The sample complexity obtained via this reduction is comparable to that of Algorithms 2 and 3 when L = Θ(NA), but slightly worse when L ≪ NA. Moreover, this black-box approach enables us to derive algorithms for rationalizable equilibria with more desirable qualities, such as last-iterate convergence, when using equilibria-finding algorithms with these properties.

Suppose that we are given a black-box algorithm $\mathcal{O}$ that finds an ϵ-CCE in arbitrary games. We can then use this algorithm in the following "support expansion" manner. We start with a subgame of only rationalizable actions, which can be identified efficiently with Algorithm 1, and call $\mathcal{O}$ to find an ϵ-CCE $\Pi$ for the subgame. Next, we check for each $i\in[N]$ whether the best response to $\Pi_{-i}$ is contained in the current subgame action set $\mathcal{A}^{(t)}_i$. If not, this means that the subgame's ϵ-CCE may not be an ϵ-CCE for the full game; in this case, the best response to $\Pi_{-i}$ is a rationalizable action that we can safely include in the action set. On the other hand, if the best response falls in $\mathcal{A}^{(t)}_i$ for all $i$, we can conclude that $\Pi$ is also an ϵ-CCE for the original game. The details are given by Algorithm 4.

Algorithm 4 Rationalizable ϵ-CCE via Black-box Reduction
1: $(a^\star_1,\cdots,a^\star_N)\leftarrow$ Algorithm 1
2: For all $i\in[N]$, initialize $\mathcal{A}^{(1)}_i\leftarrow\{a^\star_i\}$
3: for $t=1,2,\ldots$ do
4:   Find an ϵ′-CCE $\Pi$ with the black-box algorithm $\mathcal{O}$ in the subgame $\prod_{i\in[N]}\mathcal{A}^{(t)}_i$
5:   For all $i\in[N]$ and $a'_i\in\mathcal{A}_i$, evaluate $u_i(a'_i,\Pi_{-i})$ $M$ times and compute the average $\hat u_i(a'_i,\Pi_{-i})$
6:   for $i\in[N]$ do
7:     Let $a'_i\leftarrow\operatorname{argmax}_{a\in\mathcal{A}_i}\hat u_i(a,\Pi_{-i})$ // compute the empirical best response
8:     $\mathcal{A}^{(t+1)}_i\leftarrow\mathcal{A}^{(t)}_i\cup\{a'_i\}$
9:   if $\mathcal{A}^{(t+1)}_i=\mathcal{A}^{(t)}_i$ for all $i\in[N]$ then
10:    return $\Pi$

Our main theoretical guarantee is the following.

Theorem 13. Algorithm 4 outputs a ∆-rationalizable ϵ-CCE with high probability, using at most NA calls to the black-box CCE algorithm and $\tilde O\big(\frac{N^2A^2}{\min\{\epsilon^2,\Delta^2\}}\big)$ additional samples.

Using similar algorithmic techniques, we can develop a reduction scheme for rationalizable ϵ-CE. The detailed description of this algorithm is deferred to Appendix E; here we only state its main theoretical guarantee.

Theorem 14. There exists an algorithm that outputs a ∆-rationalizable ϵ-CE with high probability, using at most NA calls to a black-box CE algorithm and $\tilde O\big(\frac{N^2A^3}{\min\{\epsilon^2,\Delta^2\}}\big)$ additional samples.

7 CONCLUSION

In this paper, we consider two tasks: (1) learning rationalizable action profiles; (2) learning rationalizable equilibria. For task 1, we propose a conceptually simple algorithm whose sample complexity is significantly better than prior work (Wu et al., 2021). For task 2, we develop the first provably efficient algorithms for learning ϵ-CE and ϵ-CCE that are also rationalizable. Our algorithms are computationally efficient, enjoy sample complexity that scales polynomially with the number of players, and avoid iteratively dominated actions completely. Our results rely on several new techniques which might be of independent interest to the community. There remains a gap between our sample complexity upper bounds and the available lower bounds for both tasks; closing it is an important future research problem.

ACKNOWLEDGEMENTS

This work is supported by Office of Naval Research N00014-22-1-2253.
Dingwen Kong is partially supported by the elite undergraduate training program of the School of Mathematical Sciences at Peking University.

A FURTHER DETAILS ON RATIONALIZABILITY

A.1 EQUIVALENCE OF NEVER-BEST-RESPONSE AND STRICT DOMINANCE

It is known that for finite normal-form games, the rationalizable actions are given by iterated elimination of never-best-response actions, which is in fact equivalent to the iterative elimination of strictly dominated actions (Osborne & Rubinstein, 1994, Lemma 60.1). Here, for completeness, we include a proof that the iterative elimination of actions that are never a ∆-best response gives the same definition as Definition 1. Notice that it suffices to show that for every subgame, the set of never-∆-best-response actions and the set of ∆-dominated actions coincide.

Proposition A.1. Suppose that an action $a\in\mathcal{A}_i$ is never a ∆-best response, i.e., $\forall\Pi_{-i}\in\Delta\big(\prod_{j\ne i}\mathcal{A}_j\big)$, $\exists u\in\Delta(\mathcal{A}_i)$ such that $u_i(a,\Pi_{-i})\le u_i(u,\Pi_{-i})-\Delta$. Then $a$ is also ∆-dominated, i.e., $\exists u\in\Delta(\mathcal{A}_i)$ such that $\forall\Pi_{-i}\in\Delta\big(\prod_{j\ne i}\mathcal{A}_j\big)$, $u_i(a,\Pi_{-i})\le u_i(u,\Pi_{-i})-\Delta$.

Proof. That $a$ is never a ∆-best response is equivalent to
$$\min_{\Pi_{-i}}\max_{u}\{u_i(a,\Pi_{-i})-u_i(u,\Pi_{-i})\}\le-\Delta.$$
That $a$ is ∆-dominated is equivalent to
$$\max_{u}\min_{\Pi_{-i}}\{u_i(a,\Pi_{-i})-u_i(u,\Pi_{-i})\}\le-\Delta.$$
The equivalence immediately follows from von Neumann's minimax theorem.

A.2 PROOF OF PROPOSITION 1

Proof. We prove this inductively with the following hypothesis: for all $l\ge1$ and all $i\in[N]$,
$$\sum_{a\in\mathcal{A}_i}x^*_i(a)\cdot\mathbb{1}[a\in E_l]\le\frac{2l\epsilon}{\Delta}.$$
Base case: By the definition of ϵ-NE, $\forall i\in[N]$, $\forall x'\in\Delta(\mathcal{A}_i)$, $u_i(x^*_i,x^*_{-i})\ge u_i(x',x^*_{-i})-\epsilon$. Note that if $\tilde a\in E_1\cap\mathcal{A}_i$, there exists $x^{(\tilde a)}\in\Delta(\mathcal{A}_i)$ such that $\forall a_{-i}$, $u_i(\tilde a,a_{-i})\le u_i(x^{(\tilde a)},a_{-i})-\Delta$. Therefore, if we choose
$$x':=x^*_i-\sum_{a\in\mathcal{A}_i}\mathbb{1}[a\in E_1]\,x^*_i(a)\,e_a+\sum_{a\in\mathcal{A}_i}\mathbb{1}[a\in E_1]\,x^*_i(a)\cdot x^{(a)},$$
that is, if we play the dominating strategy instead of the dominated action in $x^*_i$, then
$$u_i(x',x^*_{-i})\ge u_i(x^*_i,x^*_{-i})+\sum_{a\in\mathcal{A}_i}x^*_i(a)\cdot\mathbb{1}[a\in E_1]\,\Delta.$$
It follows that $\sum_{a\in\mathcal{A}_i}x^*_i(a)\cdot\mathbb{1}[a\in E_1]\le\frac{\epsilon}{\Delta}$.

Induction step: By the induction hypothesis, $\forall i\in[N]$, $\sum_{a\in\mathcal{A}_i}x^*_i(a)\cdot\mathbb{1}[a\in E_l]\le\frac{2l\epsilon}{\Delta}$. Now consider
$$\tilde x_i:=\frac{x^*_i-\sum_{a\in\mathcal{A}_i}\mathbb{1}[a\in E_l]\,x^*_i(a)\,e_a}{1-\sum_{a\in\mathcal{A}_i}\mathbb{1}[a\in E_l]\,x^*_i(a)}\qquad(\forall i\in[N]),$$
which is supported on actions not in $E_l$. The induction hypothesis implies $\|\tilde x_i-x^*_i\|_1\le 6l\epsilon/\Delta$. Therefore $\forall i\in[N]$, $\forall a\in\mathcal{A}_i$, $\big|u_i(a,\tilde x_{-i})-u_i(a,x^*_{-i})\big|\le\frac{6Nl\epsilon}{\Delta}$. Now if $\tilde a\in(E_{l+1}\setminus E_l)\cap\mathcal{A}_i$, since $\tilde x_{-i}$ is not supported on $E_l$, there exists $x\in\Delta(\mathcal{A}_i)$ such that $u_i(\tilde a,\tilde x_{-i})\le u_i(x,\tilde x_{-i})-\Delta$. It follows that
$$u_i(\tilde a,x^*_{-i})\le u_i(x,x^*_{-i})-\Delta+\frac{12Nl\epsilon}{\Delta}\le u_i(x,x^*_{-i})-\frac{\Delta}{2}.$$
Using the same arguments as in the base case,
$$\sum_{a\in\mathcal{A}_i}x^*_i(a)\cdot\mathbb{1}[a\in E_{l+1}\setminus E_l]\le\frac{\epsilon}{\Delta-\frac{12Nl\epsilon}{\Delta}}\le\frac{2\epsilon}{\Delta}.$$
It follows that $\forall i\in[N]$, $\sum_{a\in\mathcal{A}_i}x^*_i(a)\cdot\mathbb{1}[a\in E_{l+1}]\le\frac{2(l+1)\epsilon}{\Delta}$. The statement is thus proved via induction on $l$.

B FIND ONE RATIONALIZABLE ACTION PROFILE

B.1 PROOF OF PROPOSITION 2

Proof. Consider the following $N$-player game, denoted $G_0$, with action sets $[A]$:
$$u_i(\cdot)=0\ \ (1\le i\le N-1),\qquad u_N(a_N)=\Delta\cdot\mathbb{1}[a_N>1].$$
Specifically, a payoff with mean $u$ is realized by a skewed Rademacher random variable taking $+1$ with probability $\frac{1+u}{2}$ and $-1$ with probability $\frac{1-u}{2}$. In game $G_0$, clearly action $1$ of player $N$ is ∆-dominated. However, consider the following game, denoted $G_{a^*}$ (where $a^*\in[A]^{N-1}$):
$$u_i(\cdot)=0\ \ (1\le i\le N-1),\qquad u_N(a_N)=\Delta\ \ (a_N>1),\qquad u_N(1,a_{-N})=2\Delta\cdot\mathbb{1}[a_{-N}=a^*].$$
It can be seen that in game $G_{a^*}$, action $1$ of player $N$ is neither dominated nor iteratively strictly dominated. Therefore, suppose that an algorithm $\mathcal{O}$ is able to determine whether an action is rationalizable (i.e., not iteratively strictly dominated) with accuracy 0.9.
Therefore, ∀ϕ : Ai → Ai, ξϕt := 〈 ϕ ◦ θ(t)i − θ (t) i , ui ( ·, θ(t)−i ) − u(t)i 〉 is a bounded martingale difference sequence. By AzumaHoeffding inequality, for a fixed ϕ : Ai → Ai, with probability 1− δ′,∣∣∣∣∣ T∑ t=1 ξϕt ∣∣∣∣∣ ≤ 2 √ 2T ln ( 2 δ′ ) . By setting δ′ = δ/(NAA), we get with probability 1− δ/N , ∀ϕ : Ai → Ai,∣∣∣∣∣ T∑ t=1 ξϕt ∣∣∣∣∣ ≤ 2 √ 2AT ln ( 2AN δ ) . Therefore we complete the proof by a union bound over i ∈ [N ]. Proof of Theorem 12. We condition on event Ω3 defined Lemma D.1, event Ω4 defined in Lemma D.4, and the success of Algorithm 1. Correlated Equilibrium. By Lemma D.3 and Lemma D.4 we know that for all i ∈ [N ], SwapRegreti,⋆T ≤ O (√ A ln(A)T + A ln(NAT/∆ϵ)2 ∆ + √ AT ln ( AN δ )) . Therefore choosing T = Θ ( A ln ( AN δ ) ϵ2 + A ln3 ( NA ∆ϵδ ) ∆ϵ ) will guarantee that SwapRegreti,⋆T is at most ϵT/2 for all i ∈ [N ]. In this case the average strategy ( ∑T t=1⊗Ni=1θ (t) i )/T would be an ϵ/2-CE. Finally, in the clipping step, ∥θ̄(t)i −θ (t) i ∥1 ≤ 2pA ≤ ϵ4N for all i ∈ [N ], t ∈ [T ]. Thus for all t ∈ [T ], we have ∥ ⊗ni=1 θ̄ (t) i −⊗ni=1θ (t) i ∥1 ≤ ϵ4 , which further implies∥∥∥∥∥( T∑ t=1 ⊗ni=1θ̄ (t) i )/T − ( T∑ t=1 ⊗ni=1θ (t) i )/T ∥∥∥∥∥ 1 ≤ ϵ 4 . Therefore the output strategy Π = ( ∑T t=1⊗Ni=1θ̄ (t) i )/T is an ϵ-CE. Rationalizability. By Lemma D.2, if a ∈ EL ∩ Ai, θ(t)i (a) ≤ p for all t ∈ [T ]. It follows that θ̄ (t) i (a) = 0, i.e., the action would not be the support in the output strategy Π = ( ∑ t⊗iθ̄ (t) i )/T . Sample complexity. The total number of queries is∑ i∈[N ] T∑ t=1 AM (t) i ≤ NAT + ∑ i∈[N ] ∑ b∈Ai T∑ t=1 16θ (t) i (b) ∆2 · ∑t τ=1 θ (τ) i (b) ≤ NAT + 16NA 2 ∆2 · ln(T/p) ≤ Õ ( NA2 ϵ2 + NA2 ∆2 ) , where we used the fact that T∑ t=1 θ (t) i (a)∑τ i=1 θ (τ) i (a) ≤ 1 + ln ( T p ) . Finally consider the cost of finding one IDE-surviving action profile (Õ ( LNA ∆2 ) ) and we get the claimed rate. E DETAILS FOR REDUCTION ALGORITHMS In this section, we present the details for the reduction based algorithm for finding rationalizable CE (Algorithm 5) and analysis of both Algorithm 4 and 5. E.1 RATIONALIZABLE CCE VIA REDUCTION We will choose ϵ′ = min{ϵ,∆}3 , M = ⌈ 4 ln(2NA/δ) ϵ′2 ⌉ . Lemma E.1. With probability 1−δ, throughout the execution of Algorithm 4, for every t and i ∈ [N ], a′i ∈ Ai, |ûi(a′i,Π−i)
1. What is the focus and contribution of the paper regarding multi-agent games?
2. What are the strengths of the proposed algorithms, particularly in terms of computational efficiency and sample complexity?
3. What are the weaknesses of the paper, especially regarding the lack of matching lower bounds?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper provides an algorithm for learning rationalizable action profiles in multi-agent games with bandit feedback, on top of which it builds algorithms for learning approximate rationalizable CE and CCE. Both algorithms are computationally efficient and are proven to improve significantly over prior work in terms of sample complexity.
Strengths And Weaknesses
Strengths: The authors propose a set of algorithms that solve the problem of learning rationalizable equilibria, which is an important and interesting task. The sample complexity of the proposed method for finding rationalizable action profiles improves significantly over the existing Exp3-DH algorithm, and a corresponding lower bound is provided, which shows the near-optimality of the algorithm when L = O(1). The paper also offers a framework for finding rationalizable equilibria by augmenting arbitrary existing equilibria-finding algorithms. Even though there is a slight sacrifice in sample complexity, such a framework can be useful when combined with equilibria-finding algorithms with different properties.
Weaknesses: The matching lower bounds for the learning algorithms for CE and CCE are still missing.
Clarity, Quality, Novelty And Reproducibility
Clarity: I think the paper is well-written overall. The introduction of the related concepts is reader-friendly and the arguments are easy to follow.
ICLR
Title
Learning Rationalizable Equilibria in Multiplayer Games
Abstract
A natural goal in multi-agent learning is to learn rationalizable behavior, where players learn to avoid any Iteratively Dominated Action (IDA). However, standard no-regret based equilibria-finding algorithms could take exponentially many samples to find such rationalizable strategies. In this paper, we first propose a simple yet sample-efficient algorithm for finding a rationalizable action profile in multi-player general-sum games under bandit feedback, which substantially improves over the results of Wu et al. (2021). We further develop algorithms with the first efficient guarantees for learning rationalizable Coarse Correlated Equilibria (CCE) and Correlated Equilibria (CE). Our algorithms incorporate several novel techniques to guarantee the elimination of IDA and no (swap-)regret simultaneously, including a correlated exploration scheme and adaptive learning rates, which may be of independent interest. We complement our results with a sample complexity lower bound showing the sharpness of our guarantees.

1 INTRODUCTION
A common objective in multi-agent learning is to find various equilibria, such as Nash equilibria (NE), correlated equilibria (CE) and coarse correlated equilibria (CCE). Generally speaking, a player in equilibrium lacks incentive to deviate assuming conformity of other players to the same equilibrium. Equilibrium learning has been extensively studied in the literature of game theory and online learning, and no-regret based learners can provably learn approximate CE and CCE with both computational and statistical efficiency (Stoltz, 2005; Cesa-Bianchi & Lugosi, 2006).
However, not all equilibria are created equal. As shown by Viossat & Zapechelnyuk (2013), a CCE can be entirely supported on dominated actions—actions that are worse off than some other strategy in all circumstances—which rational agents should apparently never play. Approximate CE also suffers from a similar problem. As shown by Wu et al. (2021, Theorem 1), there are examples where an ϵ-CE always plays iteratively dominated actions—actions that would be eliminated when iteratively deleting strictly dominated actions—unless ϵ is exponentially small. It is also shown that standard no-regret algorithms are indeed prone to finding such seemingly undesirable solutions (Wu et al., 2021). The intrinsic reason behind this is that CCE and approximate CE may not be rationalizable, and existing algorithms can indeed fail to find rationalizable solutions.
Different from equilibria notions, rationalizability (Bernheim, 1984; Pearce, 1984) looks at the game from the perspective of a single player without knowledge of the actual strategies of other players, and only assumes common knowledge of their rationality. A rationalizable strategy will avoid strictly dominated actions and, assuming other players have also eliminated their dominated actions, iteratively avoid strictly dominated actions in the subgame. Rationalizability is a central solution concept in game theory (Osborne & Rubinstein, 1994) and has found applications in auctions (Battigalli & Siniscalchi, 2003) and mechanism design (Bergemann et al., 2011). If an (approximate) equilibrium only employs rationalizable actions, it would prevent irrational behavior such as playing dominated actions. Such equilibria are arguably more reasonable than unrationalizable ones, and constitute a stronger solution concept.
This motivates us to consider the following open question: Can we efficiently learn equilibria that are also rationalizable?
Despite its fundamental role in multi-agent reasoning, rationalizability has rarely been studied from a learning perspective until recently, with Wu et al. (2021) giving the first algorithm for learning rationalizable strategies from bandit feedback. However, learning rationalizable CE and CCE remains a challenging open problem. Due to the existence of unrationalizable equilibria, running standard CE or CCE learners will not guarantee rationalizable solutions. On the other hand, one cannot hope to first identify all rationalizable actions and then find an equilibrium on the subgame, since even determining whether a single action is rationalizable requires exponentially many samples (see Proposition 2). Therefore, achieving rationalizability and approximate equilibria simultaneously is nontrivial and presents new algorithmic challenges.
In this work, we address the challenges above and give a positive answer to our main question. Our contributions can be summarized as follows:
• As a first step, we provide a simple yet sample-efficient algorithm for identifying a ∆-rationalizable¹ action profile under bandit feedback, using only Õ(LNA/∆²)² samples in normal-form games with N players, A actions per player and a minimum elimination length of L. This greatly improves the result of Wu et al. (2021) and is tight up to logarithmic factors when L = O(1).
• Using the above algorithm as a subroutine, we develop exponential-weights-based algorithms that provably find a ∆-rationalizable ϵ-CCE using Õ(LNA/∆² + NA/ϵ²) samples, and a ∆-rationalizable ϵ-CE using Õ(LNA/∆² + NA²/min{ϵ², ∆²}) samples. To the best of our knowledge, these are the first guarantees for learning rationalizable approximate CCE and CE.
• We also provide reduction schemes that find a ∆-rationalizable ϵ-CCE/CE using black-box algorithms for ϵ-CCE/CE. Despite having slightly worse rates, these algorithms can directly leverage progress in equilibria finding, which may be of independent interest.
1.1 RELATED WORK
Rationalizability and iterative dominance elimination. Rationalizability (Bernheim, 1984; Pearce, 1984) is a notion that captures rational reasoning in games and relaxes Nash Equilibrium. Rationalizability is closely related to the iterative elimination of dominated actions, which has been a focus of game theory research since the 1950s (Luce & Raiffa, 1957). It can be shown that an action is rationalizable if and only if it survives iterative elimination of strictly dominated actions³ (Pearce, 1984). There is also experimental evidence supporting iterative elimination of dominated strategies as a model of human reasoning (Camerer, 2011).
Equilibria learning in games. There is a rich literature on applying online learning algorithms to learning equilibria in games. It is well known that if all agents have no regret, the resulting empirical average would be an ϵ-CCE (Young, 2004), while if all agents have no swap-regret, the resulting empirical average would be an ϵ-CE (Hart & Mas-Colell, 2000; Cesa-Bianchi & Lugosi, 2006).
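This classical no-regret-to-CCE connection is easy to see in simulation. Below is a minimal self-play sketch, not from the paper: two players run Hedge on full-information expected payoffs for a randomly drawn toy game, and the time-averaged joint play is then checked against the CCE condition for player 0. The game, constants, and all names are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
A, T = 3, 5000
u = [rng.random((A, A)) for _ in range(2)]   # toy payoffs u[i][a0, a1] in [0, 1]

weights = [np.zeros(A), np.zeros(A)]         # cumulative expected payoffs
avg_joint = np.zeros((A, A))                 # time-averaged joint strategy

for t in range(1, T + 1):
    eta = np.sqrt(np.log(A) / t)             # standard Hedge/FTRL learning rate
    theta = []
    for w in weights:
        z = np.exp(eta * w - np.max(eta * w))
        theta.append(z / z.sum())
    weights[0] += u[0] @ theta[1]            # E[u_0(a0, a1)] for each a0
    weights[1] += theta[0] @ u[1]            # E[u_1(a0, a1)] for each a1
    avg_joint += np.outer(theta[0], theta[1]) / T

# CCE check for player 0: no fixed deviation beats the recommended play by much.
value = float((avg_joint * u[0]).sum())
best_dev = float((u[0] @ avg_joint.sum(axis=0)).max())
print(f"player 0 CCE gap: {best_dev - value:.4f}")   # shrinks as T grows
```

As the paper points out next, the catch is that nothing in this dynamic prevents the average play from putting weight on (iteratively) dominated actions.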
Later work continuing this line of research includes faster convergence rates (Syrgkanis et al., 2015; Chen & Peng, 2020; Daskalakis et al., 2021), last-iterate convergence guarantees (Daskalakis & Panageas, 2018; Wei et al., 2020), and extensions to extensive-form games (Celli et al., 2020; Bai et al., 2022b;a; Song et al., 2022) and Markov games (Song et al., 2021; Jin et al., 2021).
Computational and learning aspects of rationalizability. Despite its conceptual importance, rationalizability and iterative dominance elimination are not well studied from a computational or learning perspective. For iterative strict dominance elimination in two-player games, Knuth et al. (1988) provided a cubic-time algorithm and proved that the problem is P-complete. The weak dominance version of the problem was proven to be NP-complete by Conitzer & Sandholm (2005). Hofbauer & Weibull (1996) showed that in a class of learning dynamics which includes replicator dynamics, the continuous-time variant of Follow-The-Regularized-Leader (FTRL), all iteratively strictly dominated actions vanish over time, while Mertikopoulos & Moustakas (2010) proved similar results for stochastic replicator dynamics; however, neither work provides finite-time guarantees. Cohen et al. (2017) proved that Hedge eliminates dominated actions in finite time, but did not extend their results to the more challenging case of iteratively dominated actions. The most closely related work in the literature is that on learning rationalizable actions by Wu et al. (2021), who proposed the Exp3-DH algorithm to find a strategy mostly supported on rationalizable actions at a polynomial rate. Our Algorithm 1 accomplishes the same task with a faster rate, while our Algorithms 2 & 3 deal with the more challenging problems of finding ϵ-CE/CCE that are also rationalizable. Although Exp3-DH is based on a no-regret algorithm, it does not enjoy regret or weighted-regret guarantees and thus does not provably find rationalizable equilibria.
¹An action is ∆-rationalizable if it survives iterative elimination of ∆-dominated actions; c.f. Definition 1.
²Throughout this paper, we use Õ to suppress logarithmic factors in N, A, L, 1/∆, 1/δ, and 1/ϵ.
³For this equivalence to hold, we need to allow dominance by mixed strategies, and correlated beliefs when there are more than two players. These conditions are met in the setting of this work.
2 PRELIMINARY
An N-player normal-form game involves N players whose joint action space is denoted by A = A_1 × · · · × A_N, and is defined by utility functions u_1, · · · , u_N : A → [0, 1]. Let A = max_{i∈[N]} |A_i| denote the maximum number of actions per player, let x_i denote a mixed strategy of the i-th player (i.e., a distribution over A_i), and let x_{−i} denote a (correlated) mixed strategy of the other players (i.e., a distribution over ∏_{j≠i} A_j). We further denote u_i(x_i, x_{−i}) := E_{a_i∼x_i, a_{−i}∼x_{−i}} u_i(a_i, a_{−i}). We use ∆(S) to denote the set of probability distributions over the set S.
Learning from bandit feedback. We consider the bandit feedback setting where in each round, each player i ∈ [N] chooses an action a_i ∈ A_i, and then observes a random feedback U_i ∈ [0, 1] such that E[U_i | a_1, a_2, · · · , a_N] = u_i(a_1, a_2, · · · , a_N).
2.1 RATIONALIZABILITY
An action a ∈ A_i is said to be rationalizable if it could be a best response to some (possibly correlated) belief about the other players' strategies, assuming that they are also rational. In other words, the set of rationalizable actions is obtained by iteratively removing actions that could never be a best response.
For finite normal-form games, this is in fact equivalent to the iterative elimination of strictly dominated actions⁴ (Osborne & Rubinstein, 1994, Lemma 60.1).
Definition 1 (∆-Rationalizability).⁵ Define
E_1 := ∪_{i=1}^N { a ∈ A_i : ∃x ∈ ∆(A_i), ∀a_{−i}, u_i(a, a_{−i}) ≤ u_i(x, a_{−i}) − ∆ },
which is the set of ∆-dominated actions for all players. Further define
E_l := ∪_{i=1}^N { a ∈ A_i : ∃x ∈ ∆(A_i), ∀a_{−i} s.t. a_{−i} ∩ E_{l−1} = ∅, u_i(a, a_{−i}) ≤ u_i(x, a_{−i}) − ∆ },
which is the set of actions that would be eliminated by the l-th round. Define L = inf{l : E_{l+1} = E_l} as the minimum elimination length, and E_L as the set of ∆-iteratively dominated actions (∆-IDAs). Actions in ∪_{i=1}^N A_i \ E_L are said to be ∆-rationalizable. Notice that E_1 ⊆ · · · ⊆ E_L = E_{L+1}.
Here ∆ plays a similar role as the reward gap for best-arm identification in stochastic multi-armed bandits. We will henceforth use ∆-rationalizability and survival of L rounds of iterative dominance elimination (IDE) interchangeably⁶. Since one cannot eliminate all the actions of a player, the surviving set satisfies |∪_{i=1}^N A_i \ E_L| ≥ N, which further implies L ≤ N(A − 1) < NA.
2.2 EQUILIBRIA IN GAMES
We consider three common learning objectives, namely Nash Equilibrium (NE), Correlated Equilibrium (CE) and Coarse Correlated Equilibrium (CCE).
Definition 2 (Nash Equilibrium). A strategy profile (x_1, · · · , x_N) is an ϵ-Nash equilibrium if u_i(x_i, x_{−i}) ≥ u_i(a, x_{−i}) − ϵ, ∀a ∈ A_i, ∀i ∈ [N].
Definition 3 (Correlated Equilibrium). A correlated strategy Π ∈ ∆(A) is an ϵ-correlated equilibrium if ∀i ∈ [N], ∀ϕ : A_i → A_i,
Σ_{a_i∈A_i, a_{−i}∈A_{−i}} Π(a_i, a_{−i}) u_i(a_i, a_{−i}) ≥ Σ_{a_i∈A_i, a_{−i}∈A_{−i}} Π(a_i, a_{−i}) u_i(ϕ(a_i), a_{−i}) − ϵ.
Definition 4 (Coarse Correlated Equilibrium). A correlated strategy Π ∈ ∆(A) is an ϵ-CCE if ∀i ∈ [N], ∀a′ ∈ A_i,
Σ_{a_i∈A_i, a_{−i}∈A_{−i}} Π(a_i, a_{−i}) u_i(a_i, a_{−i}) ≥ Σ_{a_i∈A_i, a_{−i}∈A_{−i}} Π(a_i, a_{−i}) u_i(a′, a_{−i}) − ϵ.
When ϵ = 0, the above definitions give exact Nash equilibrium, correlated equilibrium, and coarse correlated equilibrium, respectively. It is well known that ϵ-NE are ϵ-CE, and ϵ-CE are ϵ-CCE. Furthermore, we call an ϵ-CCE/CE that only plays ∆-rationalizable actions almost surely a ∆-rationalizable ϵ-CCE/CE.
⁴See, e.g., the Diamond-In-the-Rough (DIR) games in Wu et al. (2021, Definition 2) for a concrete example of iterative dominance elimination.
⁵Here we slightly abuse the notation and use ∆ to refer to both the gap and the probability simplex.
⁶Alternatively, one can also define ∆-rationalizability by the iterative elimination of actions that are never a ∆-best response, which is mathematically equivalent to Definition 1 (see Appendix A.1).
2.3 CONNECTION BETWEEN EQUILIBRIA AND RATIONALIZABILITY
It is known that all actions in the support of an exact CE are rationalizable (Osborne & Rubinstein, 1994, Lemma 56.2). However, one can easily construct an exact CCE that is supported on dominated (hence, unrationalizable) actions (see e.g. Viossat & Zapechelnyuk (2013, Fig. 3)). One might be tempted to suggest that running a CE solver immediately finds a CE (and hence CCE) that is also rationalizable. However, the connection between CE and rationalizability becomes quite different when it comes to approximate equilibria, which are inevitable in the presence of noise. As shown by Wu et al. (2021, Theorem 1), an ϵ-CE can be entirely supported on iteratively dominated actions, unless ϵ = O(2^{−A}). In other words, rationalizability is not guaranteed by running an approximate CE solver unless it is run with extremely high accuracy. Therefore, finding ϵ-CE and CCE that are simultaneously rationalizable remains a challenging open problem.
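Before turning to the learning question, it may help to see Definition 1 operationally. With full knowledge of the utilities, an action is ∆-dominated exactly when the linear program max_{x∈∆(A_i)} min_{a_{−i}} [u_i(x, a_{−i}) − u_i(a, a_{−i})] has value at least ∆, and the elimination operator is iterated to a fixed point. The following is a small sketch of this computation for the two-player case; the function names and the scipy-based LP formulation are our own choices, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

def is_delta_dominated(payoff, a, opp_actions, delta):
    """LP check: does some mixed strategy x beat action `a` by at least delta
    against every surviving opponent action? Here payoff[b, o] = u_i(b, o)."""
    n = payoff.shape[0]
    c = np.zeros(n + 1)
    c[-1] = -1.0                                  # maximize the margin t
    A_ub, b_ub = [], []
    for o in opp_actions:                         # t - sum_b x_b u_i(b,o) <= -u_i(a,o)
        A_ub.append(np.append(-payoff[:, o], 1.0))
        b_ub.append(-payoff[a, o])
    A_eq = [np.append(np.ones(n), 0.0)]           # x is a probability vector
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * n + [(None, None)])
    return res.status == 0 and -res.fun >= delta

def iterated_elimination(u0, u1, delta):
    """Iterate the elimination operator of Definition 1 until E_{l+1} = E_l."""
    surv0 = set(range(u0.shape[0]))
    surv1 = set(range(u0.shape[1]))
    changed = True
    while changed:
        changed = False
        for a in list(surv0):
            if is_delta_dominated(u0, a, sorted(surv1), delta):
                surv0.remove(a); changed = True
        for a in list(surv1):
            # transpose so payoff[b, o] = u_1 when player 1 plays b against o
            if is_delta_dominated(u1.T, a, sorted(surv0), delta):
                surv1.remove(a); changed = True
    return surv0, surv1                           # the ∆-rationalizable actions
```

With more than two players, a_{−i} would range over joint surviving profiles of all opponents, and with unknown utilities every such profile must be estimated from samples, which is exactly the source of the statistical hardness formalized in Proposition 2 below.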
Since NE is a subset of CE, all actions in the support of an (exact) NE would also be rationalizable. Unlike approximate CE, for ϵ < poly(∆, 1/N, 1/A), one can show that any ϵ-Nash equilibrium is still mostly supported on rationalizable actions.
Proposition 1. If x* = (x*_1, · · · , x*_N) is an ϵ-Nash with ϵ < ∆²/(24N²A), then ∀i, Pr_{a∼x*_i}[a ∈ E_L] ≤ 2Lϵ/∆.
Therefore, for two-player zero-sum games, it is possible to run an approximate NE solver and automatically find a rationalizable ϵ-NE. However, this method induces a rather slow rate⁷, and we will provide a much more efficient algorithm for finding rationalizable ϵ-NE in Section 4.
3 LEARNING RATIONALIZABLE ACTION PROFILES
In order to learn a rationalizable CE/CCE, one might suggest identifying the set of all rationalizable actions, and then learning a CE or CCE on this subgame. Unfortunately, as shown by Proposition 2, even the simpler problem of deciding whether one single action is rationalizable is statistically hard.
Proposition 2. For ∆ < 0.1, any algorithm that correctly decides whether an action is ∆-rationalizable with 0.9 probability needs Ω(A^{N−1}∆^{−2}) samples.
This negative result motivates us to consider an easier task: can we at least find one rationalizable action profile sample-efficiently? Formally, we say an action profile (a_1, . . . , a_N) is rationalizable if for all i ∈ [N], a_i is a rationalizable action. This is arguably one of the most fundamental tasks regarding rationalizability. For mixed-strategy dominance-solvable games (Alon et al., 2021), the unique rationalizable action profile will be the unique NE and also the unique CE of the game. Therefore this easier task per se is still of practical importance. In this section we answer this question in the affirmative. We provide a sample-efficient algorithm which finds a rationalizable action profile using only Õ(LNA/∆²) samples. This algorithm will also serve as an important subroutine for the algorithms finding rationalizable CCE/CE in later sections.
⁷For two-player zero-sum games, the marginals of any CCE form an NE, so NE can be found efficiently. This is not true for general games, where finding NE is computationally hard and takes Ω(2^N) samples.
Algorithm 1 Iterative Best Response
1: Initialization: choose a_i^(0) ∈ A_i arbitrarily for all i ∈ [N]
2: for l = 1, · · · , L do
3:   for i ∈ [N] do
4:     For all a ∈ A_i, play (a, a_{−i}^(l−1)) for M times, compute player i's average payoff û_i(a, a_{−i}^(l−1))
5:     Set a_i^(l) ← argmax_{a∈A_i} û_i(a, a_{−i}^(l−1)) // Computing the empirical best response
6: return (a_1^(L), · · · , a_N^(L))
The intuition behind this algorithm is simple: if an action profile a_{−i} can survive l rounds of IDE, then its best response a_i (i.e., argmax_{a∈A_i} u_i(a, a_{−i})) can survive at least l + 1 rounds of IDE, since the action a_i can only be eliminated after some actions in a_{−i} are eliminated. Concretely, we start from an arbitrary action profile (a_1^(0), . . . , a_N^(0)). In each round l ∈ [L], we compute the (empirical) best response to a_{−i}^(l−1) for each i ∈ [N], and use those best responses to construct a new action profile (a_1^(l), . . . , a_N^(l)). By constructing iterative best responses, we end up with an action profile that survives L rounds of IDE, which means surviving any number of rounds of IDE by the definition of L. The full algorithm is presented in Algorithm 1.
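For readers who prefer code, here is a minimal sketch of Algorithm 1 under an interface of our own devising: `sample_payoff(i, profile)` is a hypothetical bandit oracle returning a noisy payoff for player i, and M should be set as in Theorem 3 below.

```python
import numpy as np

def iterative_best_response(sample_payoff, action_counts, L, M):
    """Sketch of Algorithm 1: L rounds of empirical best responses.
    `sample_payoff(i, profile)` is an assumed noisy bandit oracle for u_i;
    `action_counts[i]` is |A_i|; M is the per-action minibatch size."""
    N = len(action_counts)
    profile = [0] * N                            # arbitrary initial profile a^(0)
    for _ in range(L):
        new_profile = list(profile)
        for i in range(N):
            means = np.empty(action_counts[i])
            for a in range(action_counts[i]):
                trial = list(profile)
                trial[i] = a                     # play (a, a_{-i}^{(l-1)})
                means[a] = np.mean([sample_payoff(i, trial) for _ in range(M)])
            new_profile[i] = int(np.argmax(means))  # empirical best response
        profile = new_profile                    # all players switch together
    return profile
```

Note that every player best-responds to the previous round's profile, so the L rounds use exactly L·N·A·M bandit queries, matching the Õ(LNA/∆²) count in Theorem 3 when M = ⌈16 ln(LNA/δ)/∆²⌉.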
For Algorithm 1, we have the following theoretical guarantee.
Theorem 3. With M = ⌈16 ln(LNA/δ)/∆²⌉, with probability 1 − δ, Algorithm 1 returns an action profile that is ∆-rationalizable, using a total of Õ(LNA/∆²) samples.
Wu et al. (2021) provide the first polynomial sample complexity results for finding rationalizable action profiles. They prove that the Exp3-DH algorithm is able to find a distribution with a 1 − ζ fraction supported on ∆-rationalizable actions using Õ(L^{1.5}N³A^{1.5}/(ζ³∆³)) samples under bandit feedback⁸. Compared to their result, our sample complexity bound Õ(LNA/∆²) has more favorable dependence on all problem parameters, and our algorithm outputs a distribution that is fully supported on rationalizable actions (and thus has no dependence on ζ). We further complement Theorem 3 with a sample complexity lower bound showing that the linear dependence on N and A is optimal. This lower bound suggests that the Õ(LNA/∆²) upper bound is tight up to logarithmic factors when L = O(1), and we conjecture that this is true for general L.
⁸Wu et al. (2021)'s result allows trade-offs between variables via different choices of algorithmic parameters. However, a ζ^{−1}∆^{−3} factor is unavoidable regardless of the choice of parameters.
Theorem 4. Even for games with L ≤ 2, any algorithm that returns a ∆-rationalizable action profile with 0.9 probability needs Ω(NA/∆²) samples.
Conjecture 5. The minimax optimal sample complexity for finding a ∆-rationalizable action profile is Θ(LNA/∆²) for games with minimum elimination length L.
4 LEARNING RATIONALIZABLE COARSE CORRELATED EQUILIBRIA (CCE)
In this section we introduce our algorithm for efficiently learning rationalizable CCEs. The high-level idea is to run no-regret Hedge-style algorithms for every player, while constraining the strategy inside the rationalizable region. Our algorithm is motivated by the fact that the probability of playing a dominated action decays exponentially over time in the Hedge algorithm for adversarial bandits under full-information feedback (Cohen et al., 2017). The full algorithm description is provided in Algorithm 2, and here we explain several key components of our algorithm design.
Algorithm 2 Hedge for Rationalizable ϵ-CCE
1: (a*_1, · · · , a*_N) ← Algorithm 1
2: For all i ∈ [N], initialize θ_i^(1)(·) ← 1[· = a*_i]
3: for t = 1, · · · , T do
4:   for i = 1, · · · , N do
5:     For all a ∈ A_i, play (a, θ_{−i}^(t)) for M_t times, compute player i's average payoff u_i^(t)(a)
6:     Set θ_i^(t+1)(·) ∝ exp(η_t Σ_{τ=1}^t u_i^(τ)(·))
7: For all t ∈ [T] and i ∈ [N], eliminate all actions in θ_i^(t) with probability smaller than p, then renormalize the vector to the simplex as θ̄_i^(t)
8: output: (Σ_{t=1}^T ⊗_{i=1}^N θ̄_i^(t))/T
Correlated Exploration Scheme. In the bandit feedback setting, standard exponential weights algorithms such as EXP3.IX require importance sampling and biased estimators to derive a high-probability regret bound (Neu, 2015). However, such bias could cause a dominating strategy to lose its advantage. In our algorithm we adopt a correlated exploration scheme, which essentially simulates full-information feedback with bandit feedback using NA samples. Specifically, at every time step t, the players take turns enumerating their action sets, while the other players fix their strategies according to Hedge. For i ∈ [N] and t ≥ 2, we denote by θ_i^(t) the strategy computed using Hedge for player i in round t. The joint strategy (a, θ_{−i}^(t)) is played to estimate player i's payoff u_i^(t)(a).
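A schematic sketch of one exploration-and-update round (Lines 4-6 of Algorithm 2) may help. Here the game oracle `utility`, the logarithmic constant, and all names are our own placeholders; the learning rate and minibatch size are patterned on the parameter choices in Eq. (1) below, and the rationalizable initialization (Line 2) and final clipping (Line 7) are omitted.

```python
import numpy as np

def play_once(i, a, thetas, utility, rng):
    """One bandit query: player i pins action a, the others sample from their
    current Hedge strategies; returns a Bernoulli realization of u_i."""
    prof = [a if j == i else int(rng.choice(len(th), p=th))
            for j, th in enumerate(thetas)]
    return float(rng.random() < utility(i, prof))

def cce_round(t, thetas, cum_payoff, utility, delta, p, rng):
    """Round t of Algorithm 2: correlated exploration, then the Hedge update.
    `thetas` and `cum_payoff` are lists of numpy arrays, one per player."""
    log_term = np.log(1e4)                       # placeholder for ln(ANT/delta)
    M_t = int(np.ceil(64 * log_term / (delta ** 2 * t)))
    for i in range(len(thetas)):                 # players take turns exploring
        for a in range(len(thetas[i])):
            est = np.mean([play_once(i, a, thetas, utility, rng)
                           for _ in range(M_t)])
            cum_payoff[i][a] += est              # running sum of u_i^(tau)(a)
    for i in range(len(thetas)):                 # theta^(t+1) prop exp(eta_t * sum)
        eta = max(np.sqrt(np.log(len(thetas[i])) / t),
                  4.0 * np.log(1.0 / p) / (delta * t))
        w = eta * cum_payoff[i]
        thetas[i] = np.exp(w - w.max())
        thetas[i] /= thetas[i].sum()
    return thetas
```

Note that the turn order can be fixed ahead of time, which is why the scheme needs no communication between players during the game, as discussed next.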
It is important to note that such correlated scheme does not require any communication between the players—the players can schedule the whole process before the game starts. Rationalizable Initialization and Variance Reduction. We use Algorithm 1, which learns a rationalizable action profile, to give the strategy for the first round. By carefully preserving the disadvantage of any iteratively dominated action, we keep the iterates inside the rationalizable region throughout the whole learning process. To ensure this for every iterate with high probability, a minibatch is used to reduce the variance of the estimator. Clipping. In the final step, we clip all actions with small probabilities, so that iteratively dominated actions do not appear in the output. The threshold is small enough to not affect the ϵ-CCE guarantee. 4.1 THEORETICAL GUARANTEE In Algorithm 2, we choose parameters in the following manner: ηt = max {√ lnA t , 4 ln(1/p) ∆t } ,Mt = ⌈ 64 ln(ANT/δ) ∆2t ⌉ , and p = min{ϵ,∆}8AN . (1) Note that our learning rate can be bigger than the standard learning rate in FTRL algorithms when t is small. The purpose is to guarantee the rationalizability of the iterates from the beginning of the learning process. As will be shown in the proof, this larger learning rate will not hurt the final rate. We now state the theoretical guarantee for Algorithm 2. Theorem 6. With parameters chosen as in Eq.(1) , after T = Õ ( 1 ϵ2 + 1 ϵ∆ ) rounds, with probability 1− 3δ, the output strategy of Algorithm 2 is a ∆-rationalizable ϵ-CCE.The total sample complexity is Õ ( LNA ∆2 + NA ϵ2 ) . Remark 7. Due to our lower bound (Theorem 4), an Õ(NA∆2 ) term is unavoidable since learning a rationalizable action profile is an easier task than learning rationalizable CCE. Based on our Conjecture 5, the additional L dependency is also likely to be inevitable. On the other hand, learning an ϵ-CCE alone only requires Õ( Aϵ2 ) samples, where as in our bound we have a larger Õ( NA ϵ2 ) term. The extra N factor is a consequence of our correlated exploration scheme in which only one player explores at a time. Removing this N factor might require more sophisticated exploration methods and utility estimators, which we leave as future work. Remark 8. Evoking Algorithm 1 requires knowledge of L, which may not be available in practice. In that case, an estimate L′ may be used in its stead. If L′ ≥ L (for instance when L′ = NA), we can recover the current rationalizability guarantee, albeit with a larger sample complexity scaling with L′. If L′ < L, we can still guarantee that the output policy avoids actions in EL′ , which are, informally speaking, actions that would be eliminated with L′ levels of reasoning. 4.1.1 OVERVIEW OF THE ANALYSIS We give an overview of our analysis of Algorithm 2 below. The full proof is deferred to Appendix C. Step 1: Ensure rationalizability. We will first show that rationalizability is preserved at each iterate, i.e., actions in EL will be played with low probability across all iterates. Formally, Lemma 9. With probability at least 1− 2δ, for all t ∈ [T ] and all i ∈ [N ], ai ∈ Ai ∩ EL, we have θ (t) i (ai) ≤ p. Here p is defined in (1). Lemma 9 guarantees that, after the clipping in Line 7 of Algorithm 2, the output correlated strategy be ∆-rationalizable. We proceed to explain the main idea for proving Lemma 9. 
A key observation is that the set of rationalizable actions, ∪ni=1Ai \EL, is closed under best response—for the i-th player, as long as the other players continue to play actions in∪j ̸=iAj\EL, actions inAi∩EL will suffer from excess losses each round in an exponential weights style algorithm. Concretely, for any a−i ∈ ( ∏ j ̸=iAj) \ EL and any iteratively dominated action ai ∈ Ai ∩ EL, there always exists xi ∈ ∆(Ai) such that ui(xi, a−i) ≥ ui(ai, a−i) + ∆. With our choice of p in Eq. (1), if other players choose their actions from ∪j ̸=iAj\EL with probability 1− pAN , we can still guarantee an excess loss of Ω(∆). It follows that∑t τ=1 u (τ) i (xi)− ∑t τ=1 u (τ) i (ai) ≥ Ω(t∆)− Sampling Noise. However, this excess loss can be obscured by the noise from bandit feedback when t is small. Note that it is crucial that the statement of Lemma 9 holds for all t due to the inductive nature of the proof. As a solution, we use a minibatch of size Mt = Õ ( ⌈ 1∆2t ⌉ ) in the t-th round to reduce the variance of the payoff estimator u(t)i . The noise term can now be upper-bounded with Azuma-Hoeffding by Sampling Noise ≤ Õ (√∑t τ=1 1 Mt ) ≤ O(t∆), Combining this with our choice of the learning rate ηt gives ηt (∑t τ=1 u (τ) i (xi)− ∑t τ=1 u (τ) i (ai) ) ≫ 1. (2) By the update rule of the Hedge algorithm, this implies that θ(t+1)i (ai) ≤ p, which enables us to complete the proof of Lemma 9 via induction on t. Step 2: Combine with no-regret guarantees. Next, we prove that the output strategy is an ϵ-CCE. For a player i ∈ [N ], the regret is defined as RegretiT = maxθ∈∆(Ai) ∑T t=1⟨u (t) i , θ − θ (t) i ⟩. We can obtain the following regret bound by standard analysis of FTRL with changing learning rates. Lemma 10. For all i ∈ [N ], RegretiT ≤ Õ (√ T + 1∆ ) . Here the additive 1/∆ term is the result of our larger Õ(∆−1t−1) learning rate for small t. It follows from Lemma 10 that T = Õ ( 1 ϵ2 + 1 ∆ϵ ) suffices to guarantee that the correlated strategy 1 T (∑T t=1⊗ni=1θ (t) i ) is an (ϵ/2)-CCE. Since pNA = O(ϵ), the clipping step only minorly affects the CCE guarantee and the clipped strategy 1T (∑T t=1⊗ni=1θ̄ (t) i ) is an ϵ-CCE. 4.2 APPLICATION TO LEARNING RATIONALIZABLE NASH EQUILIBRIUM Algorithm 2 can also be applied to two-player zero-sum games to learn a rationalizable ϵ-NE efficiently. Note that in two-player zero-sum games, the marginal distribution of an ϵ-CCE is guaranteed to be a 2ϵ-Nash (see, e.g., Proposition 9 in Bai et al. (2020)). Hence direct application of Algorithm 2 to a zero-sum game gives the following sample complexity bound. Corollary 11. In a two-player zero-sum game, the sample complexity for finding a ∆-rationalizable ϵ-Nash with Algorithm 2 is Õ ( LA ∆2 + A ϵ2 ) . This result improves over a direct application of Proposition 1, which gives Õ ( A3 ∆4 + A ϵ2 ) sample complexity and produces an ϵ-Nash that could still take unrationalizable actions with positive probability. Algorithm 3 Adaptive Hedge for Rationalizable ϵ-CE 1: (a⋆1, · · · , a⋆N )← Algorithm 1 2: For all i ∈ [N ], initialize θ(1)i ← (1− |Ai|p)1[· = a⋆i ] + p1 3: for t = 1, 2, . . . , T do 4: for i = 1, 2, . . . 
, N do 5: For all a ∈ Ai, play (a, θ(t)−i) for M (t) i times, compute player i’s average payoff u (t) i (a) 6: For all b ∈ Ai, set θ̂(t+1)i (·|b) ∝ exp ( ηbt,i ∑t τ=1 u (τ) i (·)θ (τ) i (b) ) 7: Find θ(t+1)i ∈ ∆(Ai) such that θ (t+1) i (a) = ∑ b∈Ai θ̂ (t+1) i (a|b)θ (t+1) i (b) 8: For all t ∈ [T ] and i ∈ [N ], eliminate all actions in θ(t)i with probability smaller than p, then renormalize the vector to simplex as θ̄(t)i 9: output: (∑T t=1⊗ni=1θ̄ (t) i ) /T 5 LEARNING RATIONALIZABLE CORRELATED EQUILIBRIUM In order to extend our results on ϵ-CCE to ϵ-CE, a natural approach would be augmenting Algorithm 2 with the celebrated Blum-Mansour reduction (Blum & Mansour, 2007) from swap-regret to external regret. In this reduction, one maintains A instances of a no-regret algorithm {Alg1, · · · ,AlgA}. In iteration t, the player would stack the recommendations of the A algorithms as a matrix, denoted by θ̂(t) ∈ RA×A, and compute its eigenvector θ(t) as the randomized strategy in round t. After observing the actual payoff vector u(t), it will pass the weighted payoff vector θ(t)(a)u(t) to algorithm Alga for each a. In this section, we focus on a fixed player i, and omit the subscript i when it’s clear from the context. Applying this reduction to Algorithm 2 directly, however, would fail to preserve rationalizability since the weighted loss vector θ(t)(a)u(t) admit a smaller utility gap θ(t)(a)∆. Specifically, consider an action b dominated by a mixed strategy x. In the payoff estimate of instance a,∑t τ=1 θ (τ)(a) ( u(τ)(b)− u(τ)(x) ) ≳ ∆ ∑t τ=1 θ (τ)(a)− √∑t τ=1 1 M(τ) ≱ 0, (3) which means that we cannot guarantee the elimination of IDAs every round as in Eq (2). In Algorithm 3, we address this by making ∑t τ=1 θ (τ)(a) play the role as t, tracking the progress of each no-regret instance separately. In time step t, we will compute the average payoff vector u(t) based on M (t) samples; then as in the Blum-Mansour reduction, we will update the A instances of Hedge with weighted payoffs θ(t)(a)u(t) and will use the eigenvector of θ̂ as the strategy for the next round. The key detail here is our choice of parameters, which adapts to the past strategies {θ(τ)}tτ=1: M (t) i := ⌈ maxa 64θ (t) i (a) ∆2· ∑t τ=1 θ (τ) i (a) ⌉ , ηat,i := max { 2 ln(1/p) ∆ ∑t τ=1 θ (τ) i (a) , √ A lnA t } , p = min{ϵ,∆}8AN . (4) Compared to Eq (1), we are essentially replacing t with an adaptive ∑t τ=1 θ (τ)(a). We can now improve (3) to∑t τ=1 θ (τ)(a) ( u(τ)(b)− u(τ)(x) ) ≳ ∆ ∑t τ=1 θ (τ)(a)− √∑t τ=1 θ(τ)(a)2 M(τ) ≳ ∆ ∑t τ=1 θ (τ)(a). (5) This together with our choice of ηat allows us to ensure the rationalizability of every iterate. The full algorithm is presented in Algorithm 3. We proceed to our theoretical guarantee for Algorithm 3. The analysis framework is largely similar to that of Algorithm 2. Our choice of M (t)i is sufficient to ensure ∆-rationalizability via AzumaHoeffding inequality, while swap-regret analysis of the algorithm proves that the average (clipped) strategy is indeed an ϵ-CE. The full proof is deferred to Appendix D. Theorem 12. With parameters in Eq. (4), after T = Õ ( A ϵ2 + A ∆2 ) rounds, with probability 1− 3δ, the output strategy of Algorithm 3 is a ∆-rationalizable ϵ-CE . The total sample complexity is Õ ( LNA ∆2 + NA2 min{∆2,ϵ2} ) . Algorithm 4 Rationalizable ϵ-CCE via Black-box Reduction 1: (a⋆1, · · · , a⋆N )← Algorithm 1 2: For all i ∈ [N ], initialize A(1)i ← {a⋆i } 3: for t = 1, 2, . . . 
do 4: Find an ϵ′-CCE Π with black-box algorithm O in the sub-game Πi∈[N ]A (t) i 5: ∀i ∈ [N ], a′i ∈ Ai, evaluate ui(a′i,Π−i) for M times and compute average ûi(a′i,Π−i) 6: for i ∈ [N ] do 7: Let a′i ← argmaxa∈Ai ûi(a,Π−i) // Computing the empirical best response 8: A(t+1)i ← A (t) i ∪ {a′i} 9: if A(t)i = A (t+1) i for all i ∈ [N ] then 10: return Π Compared to Theorem 6, our second term has an additional A factor, which is quite reasonable considering that algorithms for learning ϵ-CE take Õ(A2ϵ−2) samples, also A-times larger than the ϵ-CCE rate. 6 REDUCTION-BASED ALGORITHMS While Algorithm 2 and 3 make use of one specific no-regret algorithm, namely Hedge (Exponential Weights), in this section, we show that arbitrary algorithms for finding CCE/CE can be augmented to find rationalizable CCE/CE. The sample complexity obtained via this reduction is comparable with those of Algorithm 2 and 3 when L = Θ(NA), but slightly worse when L≪ NA. Moreover, this black-box approach would enable us to derive algorithms for rationalizable equilibria with more desirable qualities, such as last-iterate convergence, when using equilibria-finding algorithms with these properties. Suppose that we are given a black-box algorithm O that finds ϵ-CCE in arbitrary games. We can then use this algorithm in the following “support expansion” manner. We start with a subgame of only rationalizable actions, which can be identified efficiently with Algorithm 1, and call O to find an ϵ-CCE Π for the subgame. Next, we check for each i ∈ [N ] if the best response to Π−i is contained in Ai. If not, this means that the subgame’s ϵ-CCE may not be an ϵ-CCE for the full game; in this case, the best response to Π−i would be a rationalizable action that we can safely include into the action set. On the other hand, if the best response falls in Ai for all i, we can conclude that Π is also an ϵ-CCE for the original game. The details are given by Algorithm 4, and our main theoretical guarantee is the following. Theorem 13. Algorithm 4 outputs a ∆-rationalizable ϵ-CCE with high probability, using at most NA calls to the black-box CCE algorithm and Õ ( N2A2 min{ϵ2,∆2} ) additional samples. Using similar algorithmic techniques, we can develop a reduction scheme for rationalizable ϵ-CE. The detailed description for this algorithm is deferred to Appendix E. Here we only state its main theoretical guarantee. Theorem 14. There exists an algorithm that outputs a ∆-rationalizable ϵ-CE with high probability, using at most NA calls to a black-box CE algorithm and Õ ( N2A3 min{ϵ2,∆2} ) additional samples. 7 CONCLUSION In this paper, we consider two tasks: (1) learning rationalizable action profiles; (2) learning rationalizable equilibria. For task 1, we propose a conceptually simple algorithm whose sample complexity is significantly better than prior work (Wu et al., 2021). For task 2, we develop the first provably efficient algorithms for learning ϵ-CE and ϵ-CCE that are also rationalizable. Our algorithms are computationally efficient, enjoy sample complexity that scales polynomially with the number of players and are able to avoid iteratively dominated actions completely. Our results rely on several new techniques which might be of independent interests to the community. There remains a gap between our sample complexity upper bounds and the available lower bounds for both tasks, closing which is an important future research problem. ACKNOWLEDGEMENTS This work is supported by Office of Naval Research N00014-22-1-2253. 
Dingwen Kong is partially supported by the elite undergraduate training program of School of Mathematical Sciences in Peking University. A FURTHER DETAILS ON RATIONALIZABILITY A.1 EQUIVALENCE OF NEVER-BEST-RESPONSE AND STRICT DOMINANCE It is known that for finite normal form games, rationalizable actions are given by iterated elimination of never-best-response actions, which is in fact equivalent to the iterative elimination of strictly dominated actions (Osborne & Rubinstein, 1994, Lemma 60.1). Here, for completeness, we include a proof that the iterative elimination of of actions that are never ∆-best-response gives the same definition as Definition 1. Notice that it suffices to show that for every subgame, the set of never ∆-best response actions and the set of ∆-dominated actions are the same. Proposition A.1. Suppose that an action a ∈ Ai is never a ∆-best response, i.e. ∀Π−i ∈ ∆( ∏ j ̸=iAi), ∃u ∈ ∆(Ai) such that ui (a,Π−i) ≤ ui (u,Π−1)−∆. Then a is also ∆-dominated, i.e. ∃u ∈ ∆(Ai), ∀Π−i ∈ ∆( ∏ j ̸=iAi) ui (a,Π−i) ≤ ui (u,Π−1)−∆. Proof. That a is never a ∆-best response is equivalent to min Π−1 max u {ui (a,Π−i)− ui (u,Π−1)} ≤ −∆. That a is ∆-dominated is equivalent to max u min Π−1 {ui (a,Π−i)− ui (u,Π−1)} ≤ −∆. Equivalence immediately follows from von Neumman’s minimax theorem. A.2 PROOF OF PROPOSITION 1 Proof. We prove this inductively with the following hypothesis: ∀l ≥ 1,∀i ∈ [N ], ∑ a∈Ai x∗i (a) · 1[a ∈ El] ≤ 2lϵ ∆ . Base case: By the definition of ϵ-NE, ∀i ∈ [N ], ∀x′ ∈ ∆(Ai), ui(x ∗ i , x ∗ −i) ≥ ui(x′, x∗−i)− ϵ. Note that if ã ∈ E1 ∩ Ai, ∃x ∈ ∆(Ai) such that ∀a−i, ui(ã, a−i) ≤ ui(x, a−i)−∆. Therefore if we choose x′ := x∗i − ∑ a∈Ai 1[a ∈ E1]x∗i (a)ea + ∑ a∈Ai 1[a ∈ E1]x∗i (a) · x(a), that is if we play the dominating strategy instead of the dominated action in x∗i , then ui(x ′, x∗−i) ≥ ui(x∗i , x∗−i) + ∑ a∈Ai x∗i (a) · 1[a ∈ E1]∆. It follows that ∑ a∈Ai x∗i (a) · 1[a ∈ E1] ≤ ϵ ∆ . Induction step: By the induction hypothesis, ∀i ∈ [N ],∑ a∈Ai x∗i (a) · 1[a ∈ El] ≤ 2lϵ ∆ . Now consider x̃i := x∗i − ∑ a∈Ai 1[a ∈ El] · x ∗ i (a)ea 1− ∑ a∈Ai 1[a ∈ El] · x ∗ i (a) , (∀i ∈ [N ]) which is supported on actions on in El. The induction hypothesis implies ∥x̃i − x∗i ∥1 ≤ 6lϵ/∆. Therefore ∀i ∈ [N ], ∀a ∈ Ai, ∣∣ui(a, x̃−i)− ui(a, x∗−i)∣∣ ≤ 6Nlϵ∆ . Now if ã ∈ (El+1 \ El) ∩ Ai, since x̃−i is not supported on El, ∃x ∈ ∆(Ai) such that ui(ã, x̃−i) ≤ ui(x, x̃−i)−∆. It follows that ui(ã, x ∗ −i) ≤ ui(x, x∗−i)−∆+ 12Nlϵ ∆ ≤ ui(x, x∗−i)− ∆ 2 . Using the same arguments as in the base case,∑ a∈Ai x∗i (a) · 1[a ∈ El+1 \ El] ≤ ϵ ∆− 12Nlϵ∆ ≤ 2ϵ ∆ . It follows that ∀i ∈ [N ], ∑ a∈Ai x∗i (a) · 1[a ∈ El+1] ≤ 2(l + 1)ϵ ∆ . The statement is thus proved via induction on l. B FIND ONE RATIONALIZABLE ACTION PROFILE B.1 PROOF OF PROPOSITION 2 Proof. Consider the following N -player game denoted by G0 with action set [A]: ui (·) = 0 (1 ≤ i ≤ N − 1) uN (aN ) = ∆ · 1[aN > 1]. Specifically, a payoff with mean u is realized by a skewed Rademacher random variable with 1+u2 probability on +1 and 1−u2 on −1. In game G0, clearly for player N , the action 1 is ∆-dominated. However, consider the following game, denoted by Ga∗ (where a∗ ∈ [A]N−1) ui (·) = 0, (1 ≤ i ≤ N − 1) uN (aN ) = ∆, (aN > 1) uN (1, a−N ) = 2∆ · 1[a−N = a∗]. It can be seen that in game Ga∗ , for player N , the action 1 is not dominated or iteratively strictly dominated. Therefore, suppose that an algorithm O is able to determine whether an action is rationalizable (i.e. 
not iteratively strictly dominated) with 0.9 accuracy, then its output needs to be False with at least 0.9 probability in game G0, but True with at least 0.9 probability in game Ga∗ . By Pinsker’s inequality, KL(O(G0)||O(Ga∗)) ≥ 2 · 0.82 > 1, where we used O(G) to denote the trajectory generated by running algorithm O on game G. Meanwhile, notice that G0 and Ga∗ is different only when the first N − 1 players play a∗. Denote the number of times where the first N − 1 players play a∗ by n(a∗). Using the chain rule of KL-divergence, KL(O(G0)||O(Ga∗)) ≤ EG0 [n(a∗)] ·KL ( Ber ( 1 2 )∥∥∥∥Ber(1 + 2∆2 )) (a) ≤ EG0 [n(a∗)] · 1 1−2∆ 2 · (2∆)2 (b) ≤ 10∆2EG0 [n(a∗)] . Here (a) follows from reverse Pinsker’s inequality (see e.g. Binette (2019)), while (b) uses the fact that ∆ < 0.1. This means that for any a∗ ∈ [A]N−1, EG0 [n(a∗)] ≥ 1 10∆2 . It follows that the expected number of samples when running O on G0 is at least EG0 ∑ a∗∈[A]N−1 n(a∗) ≥ AN−1 10∆2 . B.2 PROOF OF THEOREM 3 Proof. We first present the concentration bound. For l ∈ [L], i ∈ [N ], and a ∈ Ai, by Hoeffding’s inequality we have that with probability at least 1− δLNA ,∣∣∣ui(a, a(l−1)−i )− ûi(a, a(l−1)−i )∣∣∣ ≤ √ 4 ln(ANL/δ) M ≤ ∆ 4 . Therefore by a union bound we have that with probability at least 1− δ, for all l ∈ [L], i ∈ [N ], and a ∈ Ai, ∣∣∣ui(a, a(l−1)−i )− ûi(a, a(l−1)−i )∣∣∣ ≤ ∆4 . We condition on this event for the rest of the proof. We use induction on l to prove that for all l ∈ [L] ∪ {0}, (a(l)1 , · · · , a (l) N ) can survive at least l rounds of IDE. The base case for l = 0 directly holds. Now we assume that the case for 1, 2, . . . , l− 1 holds and consider the case of l. For any i ∈ [N ], we show that a(l)i can survive at least l rounds of IDE. Recall that a (l) i is the empirical best response, i.e. a (l) i = argmax a∈Ai ûi(a, a (l−1) −i ). For any mixed strategy xi ∈ ∆(Ai), we have that ui(a (l) i , a (l−1) −i )− ui(xi, a (l−1) −i ) ≥ûi(a(l)i , a (l−1) −i )− ûi(xi, a (l−1) −i )− ∣∣∣ui(a(l)i , a(l−1)−i )− ûi(a(l)i , a(l−1)−i )∣∣∣− ∣∣∣ui(xi, a(l−1)−i )− ûi(xi, a(l−1)−i )∣∣∣ ≥0− ∆ 4 − ∆ 4 = −∆ 2 . Since actions in a(l−1)−i can survive at least l− 1 rounds of ∆-IDE, a (l) i cannot be ∆-dominated by xi in rounds 1, · · · , l. Since xi can be arbitrarily chosen, a(l)i can survive at least l rounds of ∆-IDE. We can now ensure that the output (a(L)1 , · · · , a (L) N ) survives L rounds of ∆-IDE, which is equivalent to ∆-rationalizability (see Definition 1). The total number of samples used is LNA ·M = Õ ( LNA ∆2 ) . B.3 PROOF OF THEOREM 4 Proof. Without loss of generality, assume that ∆ < 0.1. Consider the following instance where A1 = · · · = AN = [A]: ui(ai) = ∆ · 1[ai = 1], (i ̸= j) uj(aj , a−j) = { ∆ · 1[aj = 1] (a−j ̸= {1}N−1) ∆ · 1[aj = 1] + 2∆ · 1[aj = a] (a−j = {1}N−1) . Denote this instance by Gj,a. Additionally, define the following instance G0: ui(ai) = ∆ · 1[ai = 1]. (∀i ∈ [N ]) As before, a payoff with expectation u is realized as a random variable with distribution 2Ber( 1+u2 )−1. It can be seen that the only difference between G0 and Gj,a lies in uj(a, {1}N−1). By the KLdivergence chain rule, for any algorithm O, KL (O(G0)∥O(Gj,a)) ≤ 10∆2 · EG0 [ n(aj = a, a−j = {1}N−1) ] , where n(aj = a, a−j = {1}N−1) denotes the number of times the action profile (a, 1N−1) is played. Note that in G0, the only action profile surviving two rounds of ∆-IDE is (1, · · · , 1), while in Gj,a, the only rationalizable action profile is (1, · · · , 1︸ ︷︷ ︸ j−1 , a, 1, · · · , 1). 
To guarantee 0.9 accuracy, by Pinsker's inequality, $\mathrm{KL}\big(O(G_0)\,\|\,O(G_{j,a})\big) \ge \frac{1}{2}\|O(G_0)-O(G_{j,a})\|_1^2$. It follows that for all $j \in [N]$ and $a > 1$,
$$\mathbb{E}_{G_0}\big[n(a_j = a,\ a_{-j} = \{1\}^{N-1})\big] \ge \frac{1}{10\Delta^2}.$$
Thus the total expected sample complexity is at least
$$\sum_{a>1,\,j\in[N]} \mathbb{E}_{G_0}\big[n(a_j = a,\ a_{-j} = \{1\}^{N-1})\big] \ge \frac{N(A-1)}{10\Delta^2}.$$

C OMITTED PROOFS IN SECTION 4

We start our analysis by bounding the sampling noise. For player $i \in [N]$, action $a_i \in \mathcal{A}_i$, and $\tau \in [T]$, we denote the sampling noise as $\xi_i^{(\tau)}(a_i) := u_i^{(\tau)}(a_i) - u_i(a_i, \theta_{-i}^{(\tau)})$. We have the following lemma.

Lemma C.1. Let $\Omega_1$ denote the event that for all $t \in [T]$, $i \in [N]$, and $a_i \in \mathcal{A}_i$,
$$\Bigg|\sum_{\tau=1}^{t} \xi_i^{(\tau)}(a_i)\Bigg| \le 2\sqrt{\ln(ANT/\delta)\sum_{\tau=1}^{t}\frac{1}{M_\tau}}.$$
Then $\Pr[\Omega_1] \ge 1-\delta$.

Proof. Note that $\sum_{\tau=1}^{t}\xi_i^{(\tau)}(a_i)$ can be written as the sum of $\sum_{\tau=1}^{t} M_\tau$ mean-zero bounded terms. By the Azuma-Hoeffding inequality, with probability at least $1-\frac{\delta}{ANT}$, for a fixed $i \in [N]$, $t \in [T]$, and $a_i \in \mathcal{A}_i$,
$$\Bigg|\sum_{\tau=1}^{t}\xi_i^{(\tau)}(a_i)\Bigg| \le 2\sqrt{\ln(ANT/\delta)\sum_{\tau=1}^{t} M_\tau\cdot\Big(\frac{1}{M_\tau}\Big)^2}. \tag{6}$$
A union bound over $i \in [N]$, $t \in [T]$, $a_i \in \mathcal{A}_i$ proves the statement.

Lemma C.2. With probability at least $1-2\delta$, for all $t \in [T]$, all $i \in [N]$, and all $a_i \in \mathcal{A}_i \cap E_L$, we have $\theta_i^{(t)}(a_i) \le p$.

Proof. We condition on the event $\Omega_1$ defined in Lemma C.1 and the success of Algorithm 1. We prove the claim by induction on $t$. The base case $t=1$ holds directly by initialization. Now we assume the claim holds for $1, 2, \dots, t$ and consider the case of $t+1$. Consider a fixed player $i \in [N]$ and an iteratively dominated action $a_i \in \mathcal{A}_i \cap E_L$. By definition there exists a mixed strategy $x_i$ such that for all $a_{-i}$ with $a_{-i} \cap E_L = \emptyset$, $u_i(x_i, a_{-i}) \ge u_i(a_i, a_{-i}) + \Delta$. Therefore for $\tau \in [t]$, by the induction hypothesis for $\tau$,
$$u_i(x_i, \theta_{-i}^{(\tau)}) \ge u_i(a_i, \theta_{-i}^{(\tau)}) + (1-ANp)\cdot\Delta - ANp \ge u_i(a_i, \theta_{-i}^{(\tau)}) + \Delta/2. \tag{7}$$
Consequently,
$$\sum_{\tau=1}^{t}\big(u_i^{(\tau)}(x_i) - u_i^{(\tau)}(a_i)\big) \ge \sum_{\tau=1}^{t}\big(u_i(x_i,\theta_{-i}^{(\tau)}) - u_i(a_i,\theta_{-i}^{(\tau)})\big) - 4\sqrt{\ln(ANT/\delta)\sum_{\tau=1}^{t}\frac{1}{M_\tau}} \quad\text{(by (6))}$$
$$\ge \frac{t\Delta}{2} - 4\sqrt{\ln(ANT/\delta)\sum_{\tau=1}^{t}\frac{1}{M_\tau}} \quad\text{(by (7))}\quad \ge\ \frac{t\Delta}{4}.$$
Therefore, by our choice of learning rate,
$$\theta_i^{(t+1)}(a_i) \le \exp\Bigg(-\eta_t\sum_{\tau=1}^{t}\big(u_i^{(\tau)}(x_i) - u_i^{(\tau)}(a_i)\big)\Bigg) \le \exp\Big(-\frac{4\ln(1/p)}{\Delta t}\cdot\frac{\Delta t}{4}\Big) = p,$$
so $\theta_i^{(t+1)}(a_i) \le p$ as desired.

Now we turn to the $\epsilon$-CCE guarantee. For a player $i \in [N]$, recall that the regret is defined as
$$\mathrm{Regret}_T^{i} = \max_{\theta\in\Delta(\mathcal{A}_i)}\sum_{t=1}^{T}\langle u_i^{(t)}, \theta - \theta_i^{(t)}\rangle.$$

Lemma C.3. The regret can be bounded as
$$\mathrm{Regret}_T^{i} \le O\Big(\sqrt{\ln A\cdot T} + \frac{\ln(1/p)\ln T}{\Delta}\Big).$$

Proof. Note that apart from the choice of $\theta^{(1)}$, we are exactly running FTRL with learning rates $\eta_t = \max\big\{\sqrt{\ln A/t},\ \frac{4\ln(1/p)}{\Delta t}\big\}$, which are monotonically decreasing. Therefore, following the standard analysis of FTRL (see, e.g., Orabona (2019, Corollary 7.9)), we have
$$\max_{\theta\in\Delta(\mathcal{A}_i)}\sum_{t=1}^{T}\langle u_i^{(t)}, \theta - \theta_i^{(t)}\rangle \le 2 + \frac{\ln A}{\eta_T} + \frac{1}{2}\sum_{t=1}^{T}\eta_t \le 2 + \sqrt{\ln A\cdot T} + \frac{1}{2}\sum_{t=1}^{T}\Big(\sqrt{\frac{\ln A}{t}} + \frac{4\ln(1/p)}{\Delta t}\Big) = O\Big(\sqrt{\ln A\cdot T} + \frac{\ln(1/p)\ln T}{\Delta}\Big).$$

However, this form of regret cannot directly imply an approximate CCE. We define the following expected version of the regret:
$$\mathrm{Regret}_T^{i,\star} = \max_{\theta\in\Delta(\mathcal{A}_i)}\sum_{t=1}^{T}\langle u_i(\cdot,\theta_{-i}^{(t)}), \theta - \theta_i^{(t)}\rangle.$$
The next lemma bounds the difference between these two types of regret.

Lemma C.4. The following event $\Omega_2$ holds with probability at least $1-\delta$: for all $i \in [N]$,
$$\big|\mathrm{Regret}_T^{i,\star} - \mathrm{Regret}_T^{i}\big| \le O\Big(\sqrt{T\cdot\ln(NA/\delta)}\Big).$$

Proof. We denote $\Theta_i := \{e_1, e_2, \dots, e_{|\mathcal{A}_i|}\}$. Therefore we have
$$\big|\mathrm{Regret}_T^{i,\star} - \mathrm{Regret}_T^{i}\big| = \Bigg|\max_{\theta\in\Delta(\mathcal{A}_i)}\sum_{t=1}^{T}\langle u_i(\cdot,\theta_{-i}^{(t)}), \theta-\theta_i^{(t)}\rangle - \max_{\theta\in\Delta(\mathcal{A}_i)}\sum_{t=1}^{T}\langle u_i^{(t)}, \theta-\theta_i^{(t)}\rangle\Bigg|$$
$$= \Bigg|\max_{\theta\in\Theta_i}\sum_{t=1}^{T}\langle u_i(\cdot,\theta_{-i}^{(t)}), \theta-\theta_i^{(t)}\rangle - \max_{\theta\in\Theta_i}\sum_{t=1}^{T}\langle u_i^{(t)}, \theta-\theta_i^{(t)}\rangle\Bigg| \le \max_{\theta\in\Theta_i}\Bigg|\sum_{t=1}^{T}\langle u_i(\cdot,\theta_{-i}^{(t)}) - u_i^{(t)},\ \theta-\theta_i^{(t)}\rangle\Bigg|.$$
Note that $\langle u_i(\cdot,\theta_{-i}^{(t)}) - u_i^{(t)},\ \theta - \theta_i^{(t)}\rangle$ is a bounded martingale difference sequence. By the Azuma-Hoeffding inequality, for a fixed $\theta \in \Theta_i$, with probability at least $1-\frac{\delta}{AN}$,
$$\Bigg|\sum_{t=1}^{T}\langle u_i(\cdot,\theta_{-i}^{(t)}) - u_i^{(t)},\ \theta-\theta_i^{(t)}\rangle\Bigg| \le O\Big(\sqrt{T\cdot\ln(NA/\delta)}\Big).$$
Thus we complete the proof by a union bound.

Proof of Theorem 6. We condition on event $\Omega_1$ defined in Lemma C.1, event $\Omega_2$ defined in Lemma C.4, and the success of Algorithm 1.

Coarse Correlated Equilibria. By Lemma C.3 and Lemma C.4 we know that for all $i \in [N]$,
$$\mathrm{Regret}_T^{i,\star} \le O\Big(\sqrt{\ln A\cdot T} + \frac{\ln(1/p)\ln T}{\Delta} + \sqrt{T\cdot\ln(NA/\delta)}\Big).$$
Therefore choosing $T = \Theta\big(\frac{\ln(NA/\delta)}{\epsilon^2} + \frac{\ln^2(NA/\Delta\epsilon\delta)}{\Delta\epsilon}\big)$ will guarantee that $\mathrm{Regret}_T^{i,\star}$ is at most $\epsilon T/2$ for all $i \in [N]$. In this case the average strategy $\big(\sum_{t=1}^{T}\otimes_{i=1}^{N}\theta_i^{(t)}\big)/T$ would be an $(\epsilon/2)$-CCE. Finally, in the clipping step, $\|\bar\theta_i^{(t)} - \theta_i^{(t)}\|_1 \le 2pA \le \frac{\epsilon}{4N}$ for all $i \in [N]$, $t \in [T]$. Thus for all $t \in [T]$, we have $\|\otimes_{i=1}^{N}\bar\theta_i^{(t)} - \otimes_{i=1}^{N}\theta_i^{(t)}\|_1 \le \frac{\epsilon}{4}$, which further implies
$$\Bigg\|\Big(\sum_{t=1}^{T}\otimes_{i=1}^{N}\bar\theta_i^{(t)}\Big)/T - \Big(\sum_{t=1}^{T}\otimes_{i=1}^{N}\theta_i^{(t)}\Big)/T\Bigg\|_1 \le \frac{\epsilon}{4}.$$
Therefore the output strategy $\Pi = \big(\sum_{t=1}^{T}\otimes_{i=1}^{N}\bar\theta_i^{(t)}\big)/T$ is an $\epsilon$-CCE.

Rationalizability. By Lemma C.2, if $a \in E_L \cap \mathcal{A}_i$, then $\theta_i^{(t)}(a) \le p$ for all $t \in [T]$. It follows that $\bar\theta_i^{(t)}(a) = 0$, i.e., the action would not be in the support of the output strategy $\Pi = \big(\sum_{t=1}^{T}\otimes_{i=1}^{N}\bar\theta_i^{(t)}\big)/T$.

Sample complexity. The total number of full-information queries is
$$\sum_{t=1}^{T} M_t \le T + \sum_{t=1}^{T}\frac{64\ln(ANT/\delta)}{\Delta^2 t} \le T + \tilde O\Big(\frac{1}{\Delta^2}\Big) = \tilde O\Big(\frac{1}{\Delta^2} + \frac{1}{\epsilon^2}\Big).$$
The total sample complexity for CCE learning would then be
$$NA\cdot\sum_{t=1}^{T} M_t = \tilde O\Big(\frac{NA}{\epsilon^2} + \frac{NA}{\Delta^2}\Big).$$
Finally, adding the cost of finding one IDE-surviving action profile, $\tilde O\big(\frac{LNA}{\Delta^2}\big)$, we get the claimed rate.

D OMITTED PROOFS IN SECTION 5

Similar to the CCE case, we first bound the sampling noise. For action $a_i \in \mathcal{A}_i$ and $\tau \in [T]$, we denote the sampling noise as $\xi_i^{(\tau)}(a_i) := u_i^{(\tau)}(a_i) - u_i(a_i, \theta_{-i}^{(\tau)})$. In the CE case, we are interested in the weighted sum of noise $\sum_{\tau=1}^{t}\xi_i^{(\tau)}(a_i)\,\theta_i^{(\tau)}(b_i)$, which is bounded in the following lemma.

Lemma D.1. The following event $\Omega_3$ holds with probability at least $1-\delta$: for all $t \in [T]$, $i \in [N]$, and $a_i, b_i \in \mathcal{A}_i$,
$$\Bigg|\sum_{\tau=1}^{t}\xi_i^{(\tau)}(a_i)\,\theta_i^{(\tau)}(b_i)\Bigg| \le \frac{\Delta}{4}\sum_{\tau=1}^{t}\theta_i^{(\tau)}(b_i).$$

Proof. Note that $\sum_{\tau=1}^{t}\xi_i^{(\tau)}(a_i)\,\theta_i^{(\tau)}(b_i)$ can be written as the sum of $\sum_{\tau=1}^{t} M_i^{\tau}$ mean-zero bounded terms. Precisely, there are $M_i^{\tau}$ terms bounded by $\frac{\theta_i^{(\tau)}(b_i)}{M_i^{\tau}}$. By the Azuma-Hoeffding inequality, we have that with probability at least $1-\frac{\delta}{A^2NT}$,
$$\Bigg|\sum_{\tau=1}^{t}\xi_i^{(\tau)}(a_i)\,\theta_i^{(\tau)}(b_i)\Bigg| \le 2\sqrt{\ln(ANT/\delta)\sum_{\tau=1}^{t} M_i^{\tau}\cdot\Big(\frac{\theta_i^{(\tau)}(b_i)}{M_i^{\tau}}\Big)^2} = 2\sqrt{\ln(ANT/\delta)\sum_{\tau=1}^{t}\frac{\big(\theta_i^{(\tau)}(b_i)\big)^2}{M_i^{\tau}}} \le \frac{\Delta}{4}\sqrt{\sum_{\tau=1}^{t}\theta_i^{(\tau)}(b_i)\sum_{j=1}^{\tau}\theta_i^{(j)}(b_i)} \le \frac{\Delta}{4}\sum_{\tau=1}^{t}\theta_i^{(\tau)}(b_i).$$
Therefore, by a union bound we complete the proof.

Lemma D.2. With probability at least $1-2\delta$, for all $t \in [T]$, all $i \in [N]$, and all $a_i \in \mathcal{A}_i \cap E_L$, we have $\theta_i^{(t)}(a_i) \le p$.

Proof. We condition on the event $\Omega_3$ defined in Lemma D.1 and the success of Algorithm 1. We prove the claim by induction on $t$. The base case $t=1$ holds directly by initialization. Now we assume the claim holds for $1, 2, \dots, t$ and consider the case of $t+1$.

Consider a fixed player $i \in [N]$, an iteratively dominated action $a_i \in \mathcal{A}_i \cap E_L$, and an expert $b_i$. By definition there exists a mixed strategy $x_i$ such that for all $a_{-i}$ with $a_{-i} \cap E_L = \emptyset$, $u_i(x_i, a_{-i}) \ge u_i(a_i, a_{-i}) + \Delta$. Therefore for $\tau \in [t]$, by the induction hypothesis we have
$$u_i(x_i, \theta_{-i}^{(\tau)}) \ge u_i(a_i, \theta_{-i}^{(\tau)}) + (1-ANp)\cdot\Delta - ANp \ge u_i(a_i, \theta_{-i}^{(\tau)}) + \Delta/2.$$
Thus we have
$$\sum_{\tau=1}^{t}\big(u_i^{(\tau)}(x_i) - u_i^{(\tau)}(a_i)\big)\cdot\theta_i^{(\tau)}(b_i) \ge \sum_{\tau=1}^{t}\big(u_i(x_i,\theta_{-i}^{(\tau)}) - u_i(a_i,\theta_{-i}^{(\tau)})\big)\cdot\theta_i^{(\tau)}(b_i) - \frac{\Delta}{4}\sum_{\tau=1}^{t}\theta_i^{(\tau)}(b_i) \ge \frac{\Delta}{2}\sum_{\tau=1}^{t}\theta_i^{(\tau)}(b_i) - \frac{\Delta}{4}\sum_{\tau=1}^{t}\theta_i^{(\tau)}(b_i) = \frac{\Delta}{4}\sum_{\tau=1}^{t}\theta_i^{(\tau)}(b_i).$$
By our choice of learning rate,
$$\hat\theta_i^{(t+1)}(a_i\,|\,b_i) \le \exp\Bigg(-\eta_{t,i}^{b_i}\sum_{\tau=1}^{t}\theta_i^{(\tau)}(b_i)\big(u_i^{(\tau)}(x_i) - u_i^{(\tau)}(a_i)\big)\Bigg) \le \exp\Bigg(-\frac{4\ln(1/p)}{\Delta\sum_{\tau=1}^{t}\theta_i^{(\tau)}(b_i)}\cdot\frac{\Delta}{4}\sum_{\tau=1}^{t}\theta_i^{(\tau)}(b_i)\Bigg) = p.$$
Therefore we conclude
$$\theta_i^{(t+1)}(a_i) = \sum_{b_i\in\mathcal{A}_i}\hat\theta_i^{(t+1)}(a_i\,|\,b_i)\,\theta_i^{(t+1)}(b_i) \le p.$$

Now we turn to the $\epsilon$-CE guarantee. For a player $i \in [N]$, recall that the swap-regret is defined as
$$\mathrm{SwapRegret}_T^{i} := \sup_{\phi:\mathcal{A}_i\to\mathcal{A}_i}\sum_{t=1}^{T}\sum_{b\in\mathcal{A}_i}\theta_i^{(t)}(b)\,u_i^{(t)}(\phi(b)) - \sum_{t=1}^{T}\big\langle\theta_i^{(t)}, u_i^{(t)}\big\rangle.$$

Lemma D.3. For all $i \in [N]$, the swap-regret can be bounded as
$$\mathrm{SwapRegret}_T^{i} \le O\Big(\sqrt{A\ln(A)\,T} + \frac{A\ln(NAT/\Delta\epsilon)^2}{\Delta}\Big).$$

Proof. For $i \in [N]$, recall that the regret for an expert $b \in \mathcal{A}_i$ is defined as
$$\mathrm{Regret}_T^{i,b} := \max_{a\in\mathcal{A}_i}\sum_{t=1}^{T}\theta_i^{(t)}(b)\,u_i^{(t)}(a) - \sum_{t=1}^{T}\big\langle\hat\theta_i^{(t)}(\cdot\,|\,b),\ \theta_i^{(t)}(b)\,u_i^{(t)}\big\rangle.$$
Since $\theta_i^{(t)}(a) = \sum_{b\in\mathcal{A}_i}\hat\theta_i^{(t)}(a\,|\,b)\,\theta_i^{(t)}(b)$ for all $a$ and all $t > 1$,
$$\sum_{b\in\mathcal{A}_i}\mathrm{Regret}_T^{i,b} = \sum_{b\in\mathcal{A}_i}\max_{a_b\in\mathcal{A}_i}\sum_{t=1}^{T}\theta_i^{(t)}(b)\,u_i^{(t)}(a_b) - \sum_{b\in\mathcal{A}_i}\sum_{t=1}^{T}\big\langle\hat\theta_i^{(t)}(\cdot\,|\,b)\,\theta_i^{(t)}(b),\ u_i^{(t)}\big\rangle$$
$$= \max_{\phi:\mathcal{A}_i\to\mathcal{A}_i}\sum_{b\in\mathcal{A}_i}\sum_{t=1}^{T}\theta_i^{(t)}(b)\,u_i^{(t)}(\phi(b)) - \sum_{t=1}^{T}\Big\langle\sum_{b\in\mathcal{A}_i}\hat\theta_i^{(t)}(\cdot\,|\,b)\,\theta_i^{(t)}(b),\ u_i^{(t)}\Big\rangle \ge \max_{\phi:\mathcal{A}_i\to\mathcal{A}_i}\sum_{t=1}^{T}\sum_{b\in\mathcal{A}_i}\theta_i^{(t)}(b)\,u_i^{(t)}(\phi(b)) - \sum_{t=2}^{T}\big\langle\theta_i^{(t)}, u_i^{(t)}\big\rangle - 1 \ge \mathrm{SwapRegret}_T^{i} - 1.$$
It now suffices to control the regret of each individual expert. For expert $b$, we are essentially running FTRL with learning rates
$$\eta_{t,i}^{b} := \max\Bigg\{\frac{4\ln(1/p)}{\Delta\sum_{\tau=1}^{t}\theta_i^{(\tau)}(b)},\ \frac{\sqrt{A\ln A}}{\sqrt{t}}\Bigg\},$$
which are clearly monotonically decreasing. Therefore, using the standard analysis of FTRL (see, e.g., Orabona (2019, Corollary 7.9)),
$$\mathrm{Regret}_T^{i,b} \le \frac{\ln A}{\eta_{T,i}^{b}} + \sum_{t=1}^{T}\eta_{t,i}^{b}\cdot\theta_i^{(t)}(b)^2 \le \sqrt{\frac{T\ln A}{A}} + \sum_{t=1}^{T}\theta_i^{(t)}(b)\cdot\sqrt{\frac{A\ln A}{t}} + \frac{4\ln(1/p)}{\Delta}\cdot\sum_{t=1}^{T}\frac{\theta_i^{(t)}(b)}{\sum_{\tau=1}^{t}\theta_i^{(\tau)}(b)} \le \sqrt{\frac{T\ln A}{A}} + \sum_{t=1}^{T}\theta_i^{(t)}(b)\cdot\sqrt{\frac{A\ln A}{t}} + \frac{4\ln(1/p)}{\Delta}\Big(1 + \ln\Big(\frac{T}{p}\Big)\Big).$$
Here we used the fact that $\theta_i^{(1)}(b) \ge p$ for all $b \in \mathcal{A}_i$, and
$$\sum_{t=1}^{T}\frac{\theta_i^{(t)}(b)}{\sum_{\tau=1}^{t}\theta_i^{(\tau)}(b)} \le 1 + \int_{\theta_i^{(1)}(b)}^{\sum_{t=1}^{T}\theta_i^{(t)}(b)}\frac{ds}{s} = 1 + \ln\Bigg(\frac{\sum_{t=1}^{T}\theta_i^{(t)}(b)}{\theta_i^{(1)}(b)}\Bigg) \le 1 + \ln\Big(\frac{T}{p}\Big).$$
Notice that $\sum_{b\in\mathcal{A}_i}\sum_{t=1}^{T}\theta_i^{(t)}(b)\cdot\sqrt{\frac{A\ln A}{t}} \le O\big(\sqrt{A\ln(A)\,T}\big)$. Therefore
$$\mathrm{SwapRegret}_T^{i} \le O(1) + \sum_{b\in\mathcal{A}_i}\mathrm{Regret}_T^{i,b} \le O\Big(\sqrt{A\ln(A)\,T} + \frac{A\ln(NAT/\Delta\epsilon)^2}{\Delta}\Big). \tag{8}$$

Similar to the CCE case, this form of regret cannot directly imply an approximate CE. We define the following expected version of the regret:
$$\mathrm{SwapRegret}_T^{i,\star} := \sup_{\phi:\mathcal{A}_i\to\mathcal{A}_i}\sum_{t=1}^{T}\big\langle\phi\circ\theta_i^{(t)},\ u_i(\cdot,\theta_{-i}^{(t)})\big\rangle - \sum_{t=1}^{T}\big\langle\theta_i^{(t)},\ u_i(\cdot,\theta_{-i}^{(t)})\big\rangle.$$
The next lemma bounds the difference between these two types of regret.

Lemma D.4. The following event $\Omega_4$ holds with probability at least $1-\delta$: for all $i \in [N]$,
$$\big|\mathrm{SwapRegret}_T^{i,\star} - \mathrm{SwapRegret}_T^{i}\big| \le O\Bigg(\sqrt{AT\ln\Big(\frac{AN}{\delta}\Big)}\Bigg).$$

Proof. Note that
$$\big|\mathrm{SwapRegret}_T^{i,\star} - \mathrm{SwapRegret}_T^{i}\big| = \Bigg|\sup_{\phi:\mathcal{A}_i\to\mathcal{A}_i}\sum_{t=1}^{T}\big\langle\phi\circ\theta_i^{(t)} - \theta_i^{(t)},\ u_i(\cdot,\theta_{-i}^{(t)})\big\rangle - \sup_{\phi:\mathcal{A}_i\to\mathcal{A}_i}\sum_{t=1}^{T}\big\langle\phi\circ\theta_i^{(t)} - \theta_i^{(t)},\ u_i^{(t)}\big\rangle\Bigg| \le \sup_{\phi:\mathcal{A}_i\to\mathcal{A}_i}\Bigg|\sum_{t=1}^{T}\big\langle\phi\circ\theta_i^{(t)} - \theta_i^{(t)},\ u_i(\cdot,\theta_{-i}^{(t)}) - u_i^{(t)}\big\rangle\Bigg|.$$
Notice that $\mathbb{E}[u_i^{(t)}] = u_i\big(\cdot,\theta_{-i}^{(t)}\big)$ and that $u_i^{(t)} \in [-1,1]^A$. Therefore, for every $\phi:\mathcal{A}_i\to\mathcal{A}_i$, $\xi_t^{\phi} := \big\langle\phi\circ\theta_i^{(t)} - \theta_i^{(t)},\ u_i(\cdot,\theta_{-i}^{(t)}) - u_i^{(t)}\big\rangle$ is a bounded martingale difference sequence. By the Azuma-Hoeffding inequality, for a fixed $\phi:\mathcal{A}_i\to\mathcal{A}_i$, with probability $1-\delta'$,
$$\Bigg|\sum_{t=1}^{T}\xi_t^{\phi}\Bigg| \le 2\sqrt{2T\ln\Big(\frac{2}{\delta'}\Big)}.$$
By setting $\delta' = \delta/(N A^A)$, we get that with probability $1-\delta/N$, for all $\phi:\mathcal{A}_i\to\mathcal{A}_i$,
$$\Bigg|\sum_{t=1}^{T}\xi_t^{\phi}\Bigg| \le 2\sqrt{2AT\ln\Big(\frac{2AN}{\delta}\Big)}.$$
Therefore we complete the proof by a union bound over $i \in [N]$.

Proof of Theorem 12. We condition on event $\Omega_3$ defined in Lemma D.1, event $\Omega_4$ defined in Lemma D.4, and the success of Algorithm 1.

Correlated Equilibrium. By Lemma D.3 and Lemma D.4 we know that for all $i \in [N]$,
$$\mathrm{SwapRegret}_T^{i,\star} \le O\Bigg(\sqrt{A\ln(A)\,T} + \frac{A\ln(NAT/\Delta\epsilon)^2}{\Delta} + \sqrt{AT\ln\Big(\frac{AN}{\delta}\Big)}\Bigg).$$
Therefore choosing
$$T = \Theta\Bigg(\frac{A\ln(AN/\delta)}{\epsilon^2} + \frac{A\ln^3(NA/\Delta\epsilon\delta)}{\Delta\epsilon}\Bigg)$$
will guarantee that $\mathrm{SwapRegret}_T^{i,\star}$ is at most $\epsilon T/2$ for all $i \in [N]$. In this case the average strategy $\big(\sum_{t=1}^{T}\otimes_{i=1}^{N}\theta_i^{(t)}\big)/T$ would be an $(\epsilon/2)$-CE. Finally, in the clipping step, $\|\bar\theta_i^{(t)} - \theta_i^{(t)}\|_1 \le 2pA \le \frac{\epsilon}{4N}$ for all $i \in [N]$, $t \in [T]$. Thus for all $t \in [T]$, we have $\|\otimes_{i=1}^{N}\bar\theta_i^{(t)} - \otimes_{i=1}^{N}\theta_i^{(t)}\|_1 \le \frac{\epsilon}{4}$, which further implies
$$\Bigg\|\Big(\sum_{t=1}^{T}\otimes_{i=1}^{N}\bar\theta_i^{(t)}\Big)/T - \Big(\sum_{t=1}^{T}\otimes_{i=1}^{N}\theta_i^{(t)}\Big)/T\Bigg\|_1 \le \frac{\epsilon}{4}.$$
Therefore the output strategy $\Pi = \big(\sum_{t=1}^{T}\otimes_{i=1}^{N}\bar\theta_i^{(t)}\big)/T$ is an $\epsilon$-CE.

Rationalizability. By Lemma D.2, if $a \in E_L \cap \mathcal{A}_i$, then $\theta_i^{(t)}(a) \le p$ for all $t \in [T]$. It follows that $\bar\theta_i^{(t)}(a) = 0$, i.e., the action would not be in the support of the output strategy $\Pi = \big(\sum_{t}\otimes_{i}\bar\theta_i^{(t)}\big)/T$.

Sample complexity. The total number of queries is
$$\sum_{i\in[N]}\sum_{t=1}^{T} A\,M_i^{(t)} \le NAT + \sum_{i\in[N]}\sum_{b\in\mathcal{A}_i}\sum_{t=1}^{T}\frac{16\,\theta_i^{(t)}(b)}{\Delta^2\cdot\sum_{\tau=1}^{t}\theta_i^{(\tau)}(b)} \le NAT + \frac{16NA^2}{\Delta^2}\cdot\ln(T/p) \le \tilde O\Big(\frac{NA^2}{\epsilon^2} + \frac{NA^2}{\Delta^2}\Big),$$
where we used the fact that
$$\sum_{t=1}^{T}\frac{\theta_i^{(t)}(a)}{\sum_{\tau=1}^{t}\theta_i^{(\tau)}(a)} \le 1 + \ln\Big(\frac{T}{p}\Big).$$
Finally, adding the cost of finding one IDE-surviving action profile, $\tilde O\big(\frac{LNA}{\Delta^2}\big)$, we get the claimed rate.

E DETAILS FOR REDUCTION ALGORITHMS

In this section, we present the details of the reduction-based algorithm for finding rationalizable CE (Algorithm 5) and the analysis of both Algorithms 4 and 5.

E.1 RATIONALIZABLE CCE VIA REDUCTION

We will choose $\epsilon' = \frac{\min\{\epsilon,\Delta\}}{3}$ and $M = \Big\lceil\frac{4\ln(2NA/\delta)}{\epsilon'^2}\Big\rceil$.

Lemma E.1. With probability $1-\delta$, throughout the execution of Algorithm 4, for every $t$ and $i \in [N]$, $a_i' \in \mathcal{A}_i$, $|\hat{u}_i(a_i', \Pi_{-i})$
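The exponential-weights update and the clipping step that these proofs revolve around can be written compactly. The following is a minimal sketch, assuming NumPy, under our own (hypothetical) function names; it is an illustration of the update rule implied by the proofs, not the paper's code:

```python
import numpy as np

def ftrl_strategy(cum_utils, t, A, p, gap):
    """Exponential-weights / FTRL iterate with the adaptive learning rate
    eta_t = max(sqrt(ln A / t), 4 ln(1/p) / (gap * t)) from Lemma C.3.

    cum_utils: length-A array of cumulative utility estimates sum_tau u_i^{(tau)}.
    Returns theta_i^{(t+1)}, a distribution over the A actions.
    """
    eta = max(np.sqrt(np.log(A) / t), 4.0 * np.log(1.0 / p) / (gap * t))
    logits = eta * cum_utils
    logits -= logits.max()              # subtract max for numerical stability
    theta = np.exp(logits)
    return theta / theta.sum()

def clip_strategy(theta, p):
    """Clipping step of Theorems 6 and 12: zero out actions with mass <= p
    (iteratively dominated actions stay below p by Lemmas C.2 and D.2),
    then renormalize."""
    clipped = np.where(theta > p, theta, 0.0)
    return clipped / clipped.sum()
```

With this schedule, the mass on any iteratively dominated action stays below $p$, so the clipping step removes such actions from the support while changing each $\theta_i^{(t)}$ by at most $2pA$ in $\ell_1$ norm, exactly as used in the two theorem proofs above.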
1. What is the focus of the paper regarding learnability in multiplayer normal-form games?
2. What are the strengths of the proposed algorithms, particularly in finding rationalizable action profiles and correlated equilibria?
3. What are the weaknesses of the paper, especially regarding the motivation and practical uses of the derived sample complexities?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or questions about the presentation of the theory and its connections to prior works?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper studies learnability of (coarse) correlated and Nash equilibria supported on rationalizable actions in unknown multiplayer normal-form games. After introducing the relevant concepts, the authors derive the first main result: a probabilistic bandit algorithm that identifies a rationalizable action profile using a number of samples inversely quadratic in an elimination reward gap and linear in the number of players (N), the maximum number of actions (A), and the minimum number of iterations needed to eliminate dominated actions. This result serves as a stepping stone to the next two algorithms, which find rationalizable correlated and coarse correlated equilibria with sample complexities extending the action-profile algorithm by additional (at most quadratic) terms. As a side product, the authors also obtain a sample complexity for finding rationalizable Nash equilibria in two-player zero-sum games. The last main result consists of two algorithms capable of "rationalizing" an arbitrary black-box algorithm for computing correlated or coarse correlated equilibria using N*A calls to the black box and a number of additional samples at most quadratic (in the case of coarse correlation) or cubic (in the case of standard correlation) in the game's parameters.

Strengths And Weaknesses
I find this work very well written in terms of both quality of prose and the overall exposition. All notions the authors introduce are well described, and the notation is clear and easily comprehensible. The paper also competently explains how the presented work fits into the literature on the topic. The theory seems solid; I briefly went through the proofs in the appendix and, as far as I can tell, they appear to be correct. I especially liked the black-box techniques the authors present.
What I am missing a bit in the main text is some more in-depth analysis of the motivation that would highlight the importance of this work, perhaps some illustrative examples ... something along the lines of showing what could actually happen in practice if non-rationalizable (C)CEs are adopted. What is the possible degradation in terms of utility? I assume I may find some analysis in the cited works, but I can't shake the feeling of being slightly uncertain of how important this problem truly is.
My other concern relates to the potential uses of the derived sample complexities and how to estimate them in practice. If I understand this work correctly, the authors work in the bandit setting where the underlying game is unknown. While N and A are given, and delta could be chosen, how could "L" be computed in the bandit case, given that you need it to run Algorithm 1 (line 2)? Even if the original utilities are fully observable, how difficult is it to compute "L"? If I am not wrong, this definition of strict dominance is with respect to mixed strategies, so calculating "L" corresponds to iteratively solving the linear programs from the original work by Vincent Conitzer. How difficult is that in contrast to the algorithms presented in this paper? Or is "L" not supposed to be computed in practice? If so, are there some reasonable bounds on "L" that would allow us to estimate the expected solution quality given some supply of samples?
Finally, just a few nits: (i) The introduction presents the sample complexities without explaining at all what delta is.
(ii) The delta symbol is then further used both as the "reward gap" in the definition of rationalizability and to denote distributions over some set ... Is there some connection between the two that would justify the use of the same notation that I am missing? (iii) FTRL remains undefined on page 2; I assume this is follow-the-regularized-leader.

Clarity, Quality, Novelty And Reproducibility
Clarity: the paper is very well written and easily understandable.
Quality: the results seem correct.
Novelty: sufficient.
Reproducibility: this work does not contain any experimental results.
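As a concrete illustration of the dominance computation raised in the review above: one round of the check reduces to a small linear program. The sketch below is ours (assuming NumPy/SciPy, with a hypothetical utility-matrix layout), not the paper's or the reviewer's code; it tests whether a single action is strictly dominated by some mixed strategy, and iterating such checks over players and actions until no elimination occurs yields the elimination depth L.

```python
import numpy as np
from scipy.optimize import linprog

def strictly_dominated(U, a):
    """Check whether row-action `a` is strictly dominated by a mixed strategy.

    U: (K, J) utility matrix with U[b, j] = u_i(b, a_{-i} = j), where j ranges
       over the surviving opponent action profiles.
    Solves: max eps s.t. sum_b x_b * U[b, j] >= U[a, j] + eps for all j,
            x in the probability simplex.  Dominated iff optimal eps > 0.
    """
    K, J = U.shape
    # variables z = (x_0, ..., x_{K-1}, eps); linprog minimizes, so use -eps
    c = np.zeros(K + 1)
    c[-1] = -1.0
    A_ub = np.hstack([-U.T, np.ones((J, 1))])   # eps - x^T U[:, j] <= -U[a, j]
    b_ub = -U[a, :]
    A_eq = np.hstack([np.ones((1, K)), np.zeros((1, 1))])  # sum_b x_b = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * K + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.success and -res.fun > 1e-9      # optimal eps strictly positive
```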
ICLR
Title Contrastive Consistent Representation Distillation

Abstract The combination of knowledge distillation with contrastive learning has great potential to distill structural knowledge. Most contrastive-learning-based distillation methods treat the entire training dataset as the memory bank and maintain two memory banks, one for the student and one for the teacher. Besides, the representations in the two memory banks are updated in a momentum manner, leading to representation inconsistency. In this work, we propose Contrastive Consistent Representation Distillation (CoCoRD) to provide consistent representations for efficient contrastive-learning-based distillation. Instead of momentum-updating the cached representations, CoCoRD updates the encoders in a momentum manner. Specifically, the teacher is equipped with a momentum-updated projection head to generate consistent representations. The teacher representations are cached in a fixed-size queue, which serves as the only memory bank in CoCoRD and is significantly smaller than the entire training dataset. Additionally, a slow-moving student, implemented as a momentum-based moving average of the student, is built to facilitate contrastive learning. CoCoRD, which utilizes only one memory bank and much fewer negative keys, provides highly competitive results under typical teacher-student settings. On ImageNet, CoCoRD-distilled ResNet50 outperforms the teacher ResNet101 by 0.2% top-1 accuracy. Furthermore, in PASCAL VOC and COCO detection, detectors whose backbones are initialized by CoCoRD-distilled models exhibit considerable performance improvements.

1 INTRODUCTION

The remarkable performance of convolutional neural networks (CNNs) in various computer vision tasks, such as image recognition (He et al., 2016; Huang et al., 2017) and object detection (Girshick, 2015; Ren et al., 2015; Redmon & Farhadi, 2017), has triggered interest in employing these powerful models beyond benchmark datasets. However, the cutting-edge performance of CNNs is always accompanied by substantial computational costs and storage consumption. Early work suggested that shallow feedforward networks can approximate arbitrary functions (Hornik et al., 1989), and numerous endeavors have been made to reduce computational overheads and storage burdens. Among those endeavors, knowledge distillation, a widely discussed topic, presents a potential solution by training a compact student model with knowledge provided by a cumbersome but well-trained teacher model. The majority of distillation methods induce the student to imitate the teacher representations (Zagoruyko & Komodakis, 2017; Park et al., 2019; Tian et al., 2020; Hinton et al., 2015; Chen et al., 2021b;c; Yim et al., 2017; Tung & Mori, 2019; Ahn et al., 2019). Although representations provide more learning information, the difficulty of defining appropriate metrics to align the student representations with the teacher ones limits distillation performance. Besides, failing to capture the dependencies between representation dimensions results in poor performance. To enhance performance, researchers attempt to distill structural knowledge by establishing connections between knowledge distillation and contrastive learning (Tian et al., 2020; Chen et al., 2021b). To efficiently retrieve representations of negative samples for contrastive learning, memory banks cache representations which are updated in a momentum manner, as shown in Fig. 1. However, the student itself is updated sharply by the training optimizer.
The student representations in the memory bank are therefore inconsistent: the representations updated in a given iteration differ from those that were not. As a result, the student can easily discriminate the positive from the negative samples, which keeps it from learning good features. The storage size of the memory bank is another factor of concern when applying contrastive-learning-based distillation methods. As in (Tian et al., 2020; Chen et al., 2021b), there are two memory banks and each of them contains representations of all training images, leading to massive GPU memory usage on large-scale datasets. Motivated by the discussion above, we propose Contrastive Consistent Representation Distillation (CoCoRD) as a novel way of distilling consistent representations with one fixed-size memory bank. Specifically, CoCoRD is composed of four major components, as shown in Fig. 2: (1) a fixed-size queue which is referred to as the teacher dictionary, (2) a teacher, (3) a student, and (4) a slow-moving student. From the perspective of considering contrastive learning as a dictionary look-up task, the teacher dictionary is regarded as the memory bank, where all the cached representations serve as negative keys. The encoded representations of the current batch from the teacher are enqueued; once the queue is full, the oldest representations are dequeued. By introducing a queue, the size of the memory bank is decoupled from dataset size and batch size, allowing it to be considerably smaller than the dataset and larger than the commonly-used batch size. The student is followed by a projection head, which maps the student features to a representation space. The teacher projection head is initialized the same as the student one and is a momentum moving average of the student projection head if the teacher and the student have the same feature dimension; otherwise, the teacher projection head is randomly initialized and not updated. Since the contrast through the teacher dictionary draws distinctions on an instance level, cached teacher representations that share the same class label as the student ones are mistakenly treated as negative keys, resulting in noise in the dictionary. To alleviate the impact of this noise, a slow-moving student, implemented as a momentum moving average of the student, is proposed to pull together anchor representations and class-positive ones. As shown in Fig. 2, with a momentum-updated projection head, the slow-moving student projects a data-augmented version of the anchor image into the representation space, which serves as the instance-negative but class-positive key. The main contributions are listed as follows:
• We utilize only one lightweight memory bank (the teacher dictionary), where all the cached representations are treated as negative keys. We experimentally demonstrate that a miniature teacher dictionary with much fewer negative keys can be sufficient for contrastive learning in knowledge distillation.
• We equip the well-trained teacher with a momentum-updated projection head to provide consistent representations for the teacher dictionary. Besides, a slow-moving student provides class-positive representations to alleviate the impact of the potential noise in the teacher dictionary.
• We verify the effectiveness of CoCoRD by achieving state-of-the-art performance in 11 out of 13 student-teacher combinations in terms of model compression. On ImageNet, the CoCoRD-distilled ResNet50 can outperform the teacher ResNet101 by 0.2% top-1 accuracy. Moreover, we initialize the backbones in object detection with CoCoRD-distilled weights and observe considerable performance improvements over counterparts initialized by the vanilla students.
2 RELATED WORK

2.1 KNOWLEDGE DISTILLATION
Hinton et al. (2015) first propose distilling the softened logits from the teacher to the student. After this representative work, various knowledge distillation methods (Wang & Li, 2021; Song et al., 2021; Passban et al., 2020; Chen et al., 2021a;c) aim to distill more informative knowledge via intermediate features. Among them, Passban et al. (2020) fuse all teacher information to avoid the loss of significant knowledge. Chen et al. (2021a) propose semantic calibration based on the attention mechanism for adaptively assigning cross-layer knowledge. Chen et al. (2021c) introduce a novel framework via knowledge review in which the knowledge of multiple layers in the teacher can be distilled for supervising one layer of the student. However, the methods mentioned above have difficulty in defining appropriate metrics to measure the distance between the student representations and the counterparts from the teacher. A few recent works exploit the dependencies between representation dimensions based on contrastive learning (Tian et al., 2020; Chen et al., 2021b) to boost distillation performance. In particular, Tian et al. (2020) formulate capturing structural knowledge as contrastive learning and maximize a lower bound of the mutual information between the teacher and the student. Chen et al. (2021b) leverage primal and dual forms of the Wasserstein distance, where the dual form yields a contrastive learning objective. In summary, the core of knowledge distillation lies in the definition of knowledge and the way the knowledge is distilled.

2.2 CONTRASTIVE LEARNING
The main goal of contrastive learning is to learn a representation space where anchor representations stay close to the representations of the positive samples and distant from those of the negative samples. Contrastive learning is a powerful approach in self-supervised learning. To learn powerful feature representations in an unsupervised fashion, Wu et al. (2018) consider each instance as a distinct class of its own and use noise contrastive estimation (NCE) to tackle the computational challenges. Contrastive learning is first combined with knowledge distillation by CRD (Tian et al., 2020), which aims at exploring structural knowledge. In addition to CRD, WCoRD (Chen et al., 2021b) combines LCKT and GCKT based on the Wasserstein dependency measure in contrastive learning (Ozair et al., 2019). However, the memory banks in CRD and WCoRD contain representations of all the training images, which brings about storage challenges on large-scale datasets. Besides, momentum updates to the representations can also lead to inconsistent representations that negatively affect distillation performance. From the perspective of considering contrastive learning as a dictionary look-up task, we implement the memory bank as a first-in-first-out queue where all cached representations serve as negative keys.

3 METHOD

The key idea of combining knowledge distillation with contrastive learning is straightforward.
With knowledge distillation, a proficient teacher can provide consistent representations that are beneficial for contrastive learning. With contrastive learning, the student can obtain powerful features whose representations are close to the positive teacher representations and distant from the negative ones in a representation space. Contrastive learning can be generally formulated as a dictionary look-up task: given a query $q$ and a dictionary $K$ with $N$ keys, $K = \{k_1, \cdots, k_N\}$, contrastive learning matches the query $q$ to the positive key $k^+$ and pushes $q$ away from the negative keys cached in $K$.

3.1 CONTRAST AS LOOKING UP IN THE TEACHER DICTIONARY

In CoCoRD, the negative keys are encoded by the teacher and cached in a fixed-size queue which is referred to as the teacher dictionary. Given an input image $x$, two views of $x$ under random data augmentations form a positive pair (a query and a positive sample), which is encoded in each iteration. We define the input to the student $S$ as the query $x_s$ and the input to the teacher $T$ as the positive sample $x_t$. The outputs at the penultimate layer (before the last fully-connected layer) are projected to a representation space by a projection head. For simplicity of notation, the student nested functions up to the penultimate layer are denoted as $g_s(\cdot)$ and the student projection head is denoted as $f_s^p(\cdot)$. Therefore, the query representations $q_s$ and the positive keys $k_t^+$ are given by:
$$q_s = f_s^p(g_s(x_s)), \qquad k_t^+ = f_t^p(g_t(x_t)), \tag{1}$$
where $g_t(\cdot)$ denotes the teacher nested functions up to the penultimate layer and $f_t^p(\cdot)$ is the teacher projection head. $f_s^p$ and $f_t^p$ are two-layer perceptrons. Besides, the cached $i$-th negative key in the queue is denoted as $k_{t_i}^-$, which is produced the same way as $k_t^+$ but from the preceding batches. The fixed-size teacher dictionary $K = \{k_{t_1}^-, \cdots, k_{t_N}^-\}$ contains $N$ negative keys. The representations of the current batch are added to the queue, while the oldest representations are removed from the queue.

The contrastive loss. The value of the contrastive loss should be small when $q_s$ is close to $k_t^+$ and distant from $k_{t_i}^-$ in the representation space. To meet this condition, we adopt the widely-used and effective contrastive loss function InfoNCE (Van den Oord et al., 2018):
$$\mathcal{L}_{ctr} = -\log\frac{\exp(q_s\cdot k_t^+/\tau)}{\exp(q_s\cdot k_t^+/\tau) + \sum_{i=1}^{N}\exp(q_s\cdot k_{t_i}^-/\tau)}, \tag{2}$$
where $\tau$ is a hyper-parameter that controls the concentration level and $N$ is the size of the teacher dictionary. $\mathcal{L}_{ctr}$ can be intuitively interpreted as the log loss of a softmax-based $(N+1)$-way classification task: we attempt to classify $q_s$ as $k_t^+$ within the scope of $\{k_t^+\}\cup\{k_{t_1}^-, k_{t_2}^-, \cdots, k_{t_N}^-\}$ (a minimal code sketch of this look-up loss is given below).

The consistency in the teacher dictionary. The introduction of the fixed-size teacher dictionary decouples the size of the memory bank from batch size and dataset size. The teacher dictionary can be larger than the commonly-used batch and smaller than the dataset. Therefore, we bypass huge batches, whose purpose is to provide in-batch negative samples, and we avoid sampling inconsistent negative keys from the memory bank. The core of learning good features by contrastive learning lies in rich and challenging negative representations. In CRD (Tian et al., 2020) and WCoRD (Chen et al., 2021b), the negative keys are momentum-updated. The momentum update to the negative keys brings about two main issues: (1) a negative key is only updated when its sample was last processed, and (2) the update interval for each negative key can be highly different.
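For concreteness, the following is a minimal sketch, assuming PyTorch, of the queue-based look-up loss in Eq. 2 and the FIFO dictionary update. It is an illustration rather than the authors' released implementation, and all names in it (info_nce_loss, dequeue_and_enqueue, temperature) are ours:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(q_s, k_pos, queue, temperature=0.1):
    """InfoNCE look-up loss of Eq. 2 against a fixed-size teacher dictionary.

    q_s:   (B, D) l2-normalized student queries.
    k_pos: (B, D) l2-normalized positive teacher keys.
    queue: (N, D) l2-normalized negative teacher keys.
    """
    l_pos = torch.einsum("bd,bd->b", q_s, k_pos).unsqueeze(1)   # (B, 1)
    l_neg = torch.einsum("bd,nd->bn", q_s, queue)               # (B, N)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature     # (B, 1+N)
    labels = torch.zeros(q_s.size(0), dtype=torch.long, device=q_s.device)
    return F.cross_entropy(logits, labels)  # positive key sits at index 0

@torch.no_grad()
def dequeue_and_enqueue(queue, new_keys):
    """FIFO update of the teacher dictionary: drop the oldest batch, append the newest."""
    return torch.cat([queue[new_keys.size(0):], new_keys], dim=0)
```

Viewing the positive as class 0 of an (N+1)-way softmax makes the classification reading of Eq. 2 explicit.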
The two issues above cause inconsistent negative keys. To provide consistent negative keys, we instead update the teacher projection head in a momentum manner. Since $g_t$ is frozen in the distillation framework, momentum-updating the teacher projection head results in consistent negative keys. Specifically, denoting the parameters of $f_t^p$ as $\omega_t$ and those of $f_s^p$ as $\omega_s$, we update $\omega_t$ as:
$$\omega_t \leftarrow m_c\,\omega_t + (1-m_c)\,\omega_s, \tag{3}$$
where $m_c \in [0,1]$ is a momentum coefficient which adjusts the update smoothness. Since $\omega_s$ is optimized by the training optimizer, the momentum update of $\omega_t$ makes the teacher projection head $f_t^p$ progress more smoothly than the student projection head $f_s^p$. Therefore, the difference between the teacher projection heads at different iterations can be made small, and as a result, the negative keys encoded at different iterations can be consistent. Besides, the teacher dictionary itself is gradually updated: the representations of the current batch are enqueued, while the representations of the oldest batch are dequeued. This gradual replacement is beneficial for maintaining the consistency of the queue, since the oldest representations are the least consistent with the current ones.

3.2 REPRESENTATIONS OF ONE CLASS FLOCK TOGETHER

As shown in Eq. 2, classifying $q_s$ as $k_t^+$ within the scope of $\{k_t^+, k_{t_1}^-, k_{t_2}^-, \cdots, k_{t_N}^-\}$ is a discrimination on an instance level. However, a key $k_{t_i}^-$ which shares the same class label as $q_s$ should be close to $q_s$ in the representation space, and simply rejecting such $k_{t_i}^-$ is not beneficial for the student learning good features. To bring $q_s$ closer to its instance-negative but class-positive keys, we introduce a slow-moving student whose nested functions up to the penultimate layer are denoted as $g_s'(\cdot)$. Specifically, the slow-moving student is implemented as a momentum moving average of the student. The slow-moving student is also accompanied by a projection head $f_s'$, which is likewise updated in a momentum manner. Denoting the parameters of $g_s$ as $\theta_s$, the parameters of $g_s'$ as $\theta_s'$, and those of $f_s'$ as $w_s'$, we update $\theta_s'$ and $w_s'$ by:
$$\theta_s' \leftarrow m_r\,\theta_s' + (1-m_r)\,\theta_s, \qquad w_s' \leftarrow m_r\,w_s' + (1-m_r)\,w_s, \tag{4}$$
where $m_r \in [0,1]$ is another momentum coefficient and $w_s$ denotes the parameters of the student projection head $f_s^p$. Therefore, the instance-negative but class-positive key $q_{s^-}^+$ can be obtained by:
$$q_{s^-}^+ = f_s'(g_s'(x_s')), \tag{5}$$
where $x_s'$ is another view of $x$ under the random data augmentations. Instead of directly narrowing down the distance between $q_s$ and $q_{s^-}^+$, we use $q_s$ to predict $q_{s^-}^+$, which softens the constraint. Formally, a predictor $h_s$, implemented as a two-layer perceptron, is proposed to produce the prediction $p_s \triangleq h_s(q_s)$. The loss is simply defined as the mean squared error between the $\ell_2$-normalized $p_s$ and $q_{s^-}^+$:
$$\mathcal{L}_{pred} = \Bigg\|\frac{p_s}{\|p_s\|_2} - \frac{q_{s^-}^+}{\|q_{s^-}^+\|_2}\Bigg\|_2^2 = 2 - 2\Bigg\langle\frac{p_s}{\|p_s\|_2},\ \frac{q_{s^-}^+}{\|q_{s^-}^+\|_2}\Bigg\rangle. \tag{6}$$
Furthermore, we symmetrize the loss by feeding $x_s'$ to the student and $x_s$ to the slow-moving student to compute $\tilde{\mathcal{L}}_{pred}$. Formally, denoting the representations output from $x_s'$ by the student as $\tilde q_s$ and the corresponding instance-negative but class-positive key as $\tilde q_{s^-}^+$, we compute $\tilde{\mathcal{L}}_{pred}$ by:
$$\tilde{\mathcal{L}}_{pred} = \Bigg\|\frac{\tilde p_s}{\|\tilde p_s\|_2} - \frac{\tilde q_{s^-}^+}{\|\tilde q_{s^-}^+\|_2}\Bigg\|_2^2 = 2 - 2\Bigg\langle\frac{\tilde p_s}{\|\tilde p_s\|_2},\ \frac{\tilde q_{s^-}^+}{\|\tilde q_{s^-}^+\|_2}\Bigg\rangle. \tag{7}$$
Here $\tilde p_s \triangleq h_s(\tilde q_s)$, $\tilde q_{s^-}^+ \triangleq f_s'(g_s'(x_s))$ and $\tilde q_s \triangleq f_s^p(g_s(x_s'))$. Note that $q_{s^-}^+$ and $\tilde q_{s^-}^+$ are detached from the current computational graph during the distillation process.

3.3 TRAINING THE STUDENT

With the slow-moving student and the teacher, Eq. 2, Eq. 6 and Eq.
7 aim at assisting the student to effectively learn powerful features through contrastive learning. The student still needs to learn features from the training data; for image classification, the task-specific loss is the cross-entropy loss. Overall, the total loss $\mathcal{L}_{total}$ can be formulated as:
$$\mathcal{L}_{total} = \lambda_{ctr}\mathcal{L}_{ctr} + \lambda_{pred}(\mathcal{L}_{pred} + \tilde{\mathcal{L}}_{pred}) + \lambda_{cls}\mathcal{L}_{cls}, \tag{8}$$
where $\lambda_{ctr}$, $\lambda_{pred}$ and $\lambda_{cls}$ are three balancing factors. $\mathcal{L}_{cls} \triangleq H(y, y_s)$, where $H(\cdot)$ refers to the standard cross-entropy, $y$ denotes the one-hot label and $y_s$ is the student output.

4 EXPERIMENTS

We validate the effectiveness of CoCoRD in improving student performance. The student-teacher combinations are divided into two main categories: (1) students that share the same architecture style with their teachers, and (2) students whose architectures differ from those of their teachers.

Datasets. To investigate the performance improvements of students, we employ two benchmarks: (1) CIFAR100 (Krizhevsky et al., 2009) and (2) ImageNet-1K (Russakovsky et al., 2015). CIFAR100 has 100 classes, with 500 training images and 100 validation images per class. ImageNet-1K, a large-scale dataset, contains 1000 classes and provides 1.28 million training images and 50K validation images. To test the transferability of features that students learn through CoCoRD, we utilize two more datasets: (1) STL-10 (Coates et al., 2011) and (2) TinyImageNet (Chrabaszcz et al., 2017). We only use the 5K labeled training images and 8K validation images from the 10 classes in STL-10. TinyImageNet consists of 200 classes, each with 500 training images and 50 validation images.

4.1 EXPERIMENTS ON CIFAR100

We experiment on CIFAR100 with 13 student-teacher combinations in total, 7 of which are combinations with the same architecture style, while the remaining 6 are combinations with different architectures. (On CIFAR100, $\lambda_{ctr}=1$, $\lambda_{cls}=1$, $\lambda_{pred}=4$; more training details and data augmentations are provided in the supplementary materials.) Table 1 focuses on student-teacher combinations with the same architecture style, while Table 2 provides experimental results for combinations with different architectures. As can be observed in both tables, KD (Hinton et al., 2015), a simple yet effective method, provides a strong baseline. CoCoRD can consistently outperform KD and achieve highly competitive performance compared with other state-of-the-art methods. Note that $m_c$ in Eq. 3 is set to 1 for the WRN-40-2/WRN-40-1 combination. Although the teacher projection head attached to WRN-40-2 is only randomly initialized and not updated during the distillation process, CoCoRD still achieves the state-of-the-art result. This implies that the features provided by the well-trained teacher at the penultimate layer are already distinguishing; they are then projected into the representation space by the frozen teacher projection head $f_t^p$. Based on the discussion above, the teacher projection heads in Table 2 are randomly initialized, since a difference in architecture style is very likely to bring about a difference in input shape. Note that it is because of the projection heads that CoCoRD can achieve distillation under the cross-architecture setting: the projection heads can project penultimate-layer features of different shapes into one representation space, where the contrastive loss of Eq. 2 is easily defined. As shown in Table 2, CoCoRD is highly effective for combinations of different architectures.
Even if the teacher projection head is not updated, CoCoRD consistently achieves the best performance among methods that are not combined with another method. In particular, for the resnet-32x4/ShuffleNetV2 pair, CoCoRD presents 77.28% Top-1 accuracy, which is 1.5% higher than the second best, GCKT (75.78%). On the other hand, methods based on intermediate features perform poorly with different-architecture combinations. This observation suggests that CoCoRD can largely relax the requirement for significant similarity between students and teachers. We conjecture that knowledge distillation based on features at the penultimate layer can avoid conflicts between the different inductive biases that different models exploit. This indicates that the proposed CoCoRD is more generally applicable to student-teacher combinations with different architectures.

Limitations. In Table 1, CoCoRD+KD does not bring further performance improvements over CoCoRD. The same phenomenon can be observed in Table 2: MobileNetV2 (Sandler et al., 2018) does not obtain more performance improvements with CoCoRD+KD. These phenomena indicate that further investigation is needed into combining CoCoRD with other knowledge distillation methods, and that extremely lightweight student models remain challenging for knowledge distillation.

Linear probing. Following CRD (Tian et al., 2020), we employ linear probing to evaluate the transferability of the student features. We freeze the student and train a linear classifier on its global average pooling features to perform 10-way classification on STL10 and 200-way classification on TinyImageNet. As shown in Table 3, CoCoRD exhibits strong transferability and outperforms the second best (CRD+KD) by a large margin on the two datasets (2.04% improvement on STL10 and 2.32% on TinyImageNet). The proposed CoCoRD, which has a negligible performance drop on CIFAR100 compared with the teacher (see Table 1), shows better transferability than the teacher (5.32% improvement on STL10 and 6.01% on TinyImageNet). The linear probing experiment indicates that CoCoRD-distilled models have better generalization ability.

4.2 EXPERIMENTS ON IMAGENET

To investigate the scalability of CoCoRD to large-scale datasets, we employ ResNet-18 and ResNet-34 as the student-teacher combination on ImageNet-1K. For a fair comparison, we follow the standard PyTorch ImageNet training practice except that we train for 100 epochs, like CRD and WCoRD. We also use the PyTorch-released ResNet-34/18 as our teacher/student. On ImageNet, we set $\lambda_{ctr}=1$, $\lambda_{cls}=1$, $\lambda_{pred}=4$ and only calculate $\mathcal{L}_{pred}$. The Top-1 and Top-5 error rates of different distillation methods are provided in Table 4 (the lower, the better). The results in Table 4 show that the proposed CoCoRD achieves the best performance on the large-scale ImageNet. The relative improvement of CoCoRD over WCoRD (Chen et al., 2021b) on the Top-1 error is 14.45%, and the relative improvement of CoCoRD over CRD (Tian et al., 2020) on the Top-1 error is 40.43%. Both improvements validate the scalability of the proposed CoCoRD to large-scale datasets.

4.3 ABLATION STUDY

4.3.1 STUDY OF ENCODER COMBINATIONS
By default, we use the teacher to generate representations for contrastive learning, and the slow-moving student is employed to produce representations of another view of the student input. To investigate how representation quality affects distillation performance, we utilize different models to provide those representations.
For clarity, the model that generates the dictionary-cached representations is referred to as the contrastive encoder, and the model that produces the instance-negative but class-positive representations is referred to as the cognate encoder. Results are reported in Table 5. Comparing options A (the default option) and B, we find that leveraging the pre-trained teacher to provide quality representations for contrastive learning is more beneficial for distillation. Besides, removing the cognate encoder and setting $\lambda_{pred}$ to zero (option E) leads to poor performance, suggesting that the cognate encoder can alleviate the adverse impact of the potential noise. If we remove the contrastive encoder and still use the dictionary with the cognate encoder (option F), the distillation process fails. The results in Table 5 support the effectiveness of each encoder in CoCoRD.

4.3.2 STUDY OF MOMENTUM
As shown in Eqs. 3 and 4, $m_c$ controls the progressing speed of the teacher projection head $f_t^p$, while $m_r$ manages the speed of the slow-moving student and its projection head. To investigate the impact of momentum, we employ resnet110 as the teacher to train resnet32 with different $m_c$ and $m_r$. The results are reported in Table 6. When $m_c=m_r=0$ and when $m_c=m_r=1$, CoCoRD can still improve the student performance; the effectiveness in both cases implies CoCoRD is robust. Besides, with $m_r$ fixed, a large value of $m_c$ (e.g., 0.99 or 0.999) works much better than $m_c=0$, suggesting that consistent representations in the teacher dictionary are beneficial for distillation.

4.3.3 STUDY OF HYPER-PARAMETERS
The temperature τ. The value of $\tau$ in Eq. 2 varies from 0.07 to 0.11. As shown in Figure 3(a), CoCoRD is sensitive to $\tau$: both extremely high and extremely low temperatures lead to sub-optimal performance. As suggested in CRD (Tian et al., 2020), we set $\tau$ to 0.1 for experiments on CIFAR100, while $\tau$ is set to 0.07 on ImageNet. We suggest tuning the value of $\tau$ based on the classification difficulty.

The size of the teacher dictionary. The number of negative keys is determined by the teacher dictionary size $N$. To investigate its effects, we validate various values of $N$. As shown in Figure 3(b), an extremely small teacher dictionary provides insufficient negative keys, leading to sub-optimal performance, while an extremely large teacher dictionary can introduce noise, which adversely affects distillation performance. Based on our experiments, $N=2048$ suffices on CIFAR100, while $N=65536$ is used on ImageNet. Note that the teacher dictionary in CoCoRD is significantly smaller than the memory banks in CRD (Tian et al., 2020) and WCoRD (Chen et al., 2021b), which is more economical for large-scale datasets.

The balancing factors. We conduct experiments on CIFAR100 to investigate the effects of the three balancing factors $\lambda_{ctr}$, $\lambda_{cls}$ and $\lambda_{pred}$, using resnet32/resnet110 as the student-teacher combination. For these experiments, we set $\tau=0.1$, $N=2048$, $m_c=0.999$ and $m_r=0.9$. "✗" denotes that the balancing factor is set to 0 and "✓" means it is set to the corresponding value given in the second row. Details on the simple grid search for each balancing factor can be found in the supplementary material. As we can see from Table 7, all components in CoCoRD are essential for achieving high distillation performance.
When $\lambda_{ctr}$ is set to 0, there is a serious performance drop, which indicates that contrasting student representations with the negative keys in the teacher dictionary is necessary for improving student performance. Moreover, by comparing the result with $\lambda_{pred}=0$ against the result with $\lambda_{pred}=4$, we can see that the slow-moving student reduces the negative effects of the noise in the teacher dictionary.

4.4 TRANSFER LEARNING

We further validate the feature quality of CoCoRD-distilled models by transferring the model weights to object detection tasks, namely PASCAL VOC (Everingham et al., 2010) and COCO detection (Lin et al., 2014). We fine-tune the pre-trained models in an end-to-end manner on the target datasets. The detector for PASCAL VOC is Faster R-CNN (Ren et al., 2015) with an R50-C4 backbone. For COCO object detection, the model is Mask R-CNN (He et al., 2017) with the R50-C4 backbone. Note that the CoCoRD-distilled ResNet50 can outperform the teacher ResNet101 by 0.2% top-1 accuracy on classification. As shown in Table 8, the CoCoRD-initialized detectors exhibit better performance than the student-initialized and CRD-initialized counterparts. The valid reuse of model weights further demonstrates the transferability of CoCoRD-distilled features.

5 CONCLUSION

In this paper, we propose a contrastive-learning-based knowledge distillation method named Contrastive Consistent Representation Distillation. From the perspective of regarding contrastive learning as a dictionary look-up task, we build a fixed-size dictionary to cache consistent teacher representations. Besides, to alleviate the adverse impact of the potential noise in the teacher dictionary, we employ a slow-moving student, implemented as a momentum-based moving average of the student, to provide instance-negative but class-positive targets. CoCoRD does not employ the entire dataset as the memory bank, which is economical for large-scale datasets. Extensive experiments demonstrate that CoCoRD, which utilizes fewer negative keys, can boost the performance of students on diverse image classification datasets. Additionally, the models distilled by CoCoRD on ImageNet classification can efficiently improve object detection performance on PASCAL VOC and COCO.

A APPENDIX

A.1 QUANTITATIVE RESULTS ON THE ACHIEVED SPEED-UP, MEMORY REDUCTION AND OTHERS
In the following three tables, we provide quantitative results on the achieved speed-up, memory cost reduction, and other quantitative information about the teacher/student (T/S) combinations used on CIFAR100 (in Tables 1 and 2) and those used on ImageNet (Tables 4 and 8). The results are measured with an Intel Core i7-8700 CPU on the Ubuntu 20.04 operating system, and memory cost is measured by the PyTorch Profiler in a forward pass. Additionally, we compare the size of the teacher dictionary in the proposed CoCoRD with the size of the memory banks in CRD. Note that the keys in the CRD memory banks are only 128-d, while the keys in the CoCoRD teacher dictionary are 2048-d. Even with the higher dimensionality of the stored keys, CoCoRD is still more storage-efficient.

A.2 THEORETICAL STUDY
Given two deep neural networks, a teacher $f^T$ and a student $f^S$, let $x$ be the network input. We denote the representations at the penultimate layer as $f^T(x)$ and $f^S(x)$, respectively. We would like to bring $f^S(x_i)$ and $f^T(x_i)$ close while pushing apart $f^S(x_i)$ and $f^T(x_j)$ ($x_i$ and $x_j$ represent different training samples).
For clear notation, we define variables $S$ and $T$ for the student representations and the teacher representations of the data, respectively: $x \sim p(x)$; $S = f^S(x)$; $T = f^T(x)$. Let us define a distribution $q$ with a latent variable $C$ that decides whether the tuple $(f^S(x_i), f^T(x_j))$ is drawn from the joint distribution $p(T, S)$ (when $C=1$) or from the product of marginal distributions $p(T)p(S)$ (when $C=0$):
$$q(T, S\,|\,C=1) = p(T, S), \qquad q(T, S\,|\,C=0) = p(T)\,p(S).$$
Suppose we are given 1 congruent pair drawn from the joint distribution (i.e., the same input provided to $T$ and $S$) for every $N$ incongruent pairs drawn from the product of marginals (independent random inputs provided to $T$ and $S$). Then the priors on the latent $C$ are:
$$q(C=1) = \frac{1}{N+1}, \qquad q(C=0) = \frac{N}{N+1}.$$
By Bayes' rule and simple manipulations, the posterior for $C=1$ is given by:
$$q(C=1\,|\,T, S) = \frac{q(T, S\,|\,C=1)\,q(C=1)}{q(T, S\,|\,C=0)\,q(C=0) + q(T, S\,|\,C=1)\,q(C=1)} = \frac{p(T, S)}{p(T, S) + N\,p(T)\,p(S)}.$$
We can observe a connection with mutual information:
$$\log q(C=1\,|\,T, S) = -\log\Bigg(1 + N\,\frac{p(T)\,p(S)}{p(T, S)}\Bigg) \le -\log(N) + \log\frac{p(T, S)}{p(T)\,p(S)}.$$
Taking the expectation on both sides w.r.t. $p(T, S)$ and rearranging gives us:
$$I(T; S) \ge \log(N) + \mathbb{E}_{q(T, S|C=1)}\log q(C=1\,|\,T, S),$$
where $I(T; S)$ is the mutual information between the distributions of the teacher and student representations. Though we do not know the true distribution $q(C=1\,|\,T, S)$, a neural network can be used to estimate whether a pair comes from the joint distribution or the marginals. By maximizing the KL divergence between the joint distribution $p(T, S)$ and the product of marginal distributions $p(T)p(S)$, we can maximize the mutual information between the student representations and the teacher representations.
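The bound above can be estimated numerically. The sketch below is ours, not the authors' code; it assumes PyTorch and, as an additional assumption, uses the temperature-scaled dot products of Eq. 2 as the critic standing in for $q(C=1|T,S)$:

```python
import torch
import torch.nn.functional as F

def mi_lower_bound(q_s, k_pos, queue, temperature=0.1):
    """Monte-Carlo estimate of I(T;S) >= log(N) + E[log q(C=1|T,S)],
    where the (N+1)-way softmax over similarity scores plays the role
    of the posterior q(C=1|T,S)."""
    N = queue.size(0)
    l_pos = torch.einsum("bd,bd->b", q_s, k_pos).unsqueeze(1)   # congruent pairs
    l_neg = torch.einsum("bd,nd->bn", q_s, queue)               # incongruent pairs
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    log_posterior = F.log_softmax(logits, dim=1)[:, 0]          # log q(C=1 | T, S)
    return (torch.log(torch.tensor(float(N))) + log_posterior.mean()).item()
```

Under this view, minimizing the InfoNCE loss of Eq. 2 tightens this lower bound on the teacher-student mutual information.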
1. What is the focus and contribution of the paper on combining knowledge distillation with contrastive learning?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its performance and limitations?
3. Do you have any concerns about the method's effectiveness or novelty compared to other works in the field?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a new way to combine knowledge distillation with contrastive learning, which updates the encoders in a momentum manner to obtain efficient and consistent encodings. Experiments on both classification and detection have been conducted.

Strengths And Weaknesses
Strengths:
Sufficient experiments have been conducted on both classification and detection.
Good paper writing.
Weaknesses:
The performance improvements are not very significant. As claimed by the authors in the limitation section, a combination of their method with the previous KD does not lead to good performance.
Lack of an in-depth study on how the inconsistency of representations in the memory bank leads to a negative influence. It would be better if quantitative results were provided to study the relation between inconsistency and the performance of knowledge distillation.
Lack of theoretical or experimental study on why their method works.
Some factors in their framework do not have enough ablation studies. For example, what if the branch of the slow-moving student is not used?
SOTA knowledge distillation methods usually distill the knowledge in not only the final layer but also the intermediate layers. However, the proposed method seems to require a projection head, which is usually only utilized at the final layer in previous contrastive learning methods. So I wonder whether this method can be used for knowledge distillation on the intermediate layers.
In my opinion, the novelty of this paper is not that large, since the moving-average trick has been widely used in contrastive learning, and KD + contrastive learning is also not novel.

Clarity, Quality, Novelty And Reproducibility
Moderate Quality, Moderate Novelty, Bad Reproducibility.
ICLR
Title Contrastive Consistent Representation Distillation Abstract The combination of knowledge distillation with contrastive learning has great potential to distill structural knowledge. Most of the contrastive-learning-based distillation methods treat the entire training dataset as the memory bank and maintain two memory banks, one for the student and one for the teacher. Besides, the representations in the two memory banks are updated in a momentum manner, leading to representation inconsistency. In this work, we propose Contrastive Consistent Representation Distillation (CoCoRD) to provide consistent representations for efficient contrastive-learning-based distillation. Instead of momentum-updating the cached representations, CoCoRD updates the encoders in a momentum manner. Specifically, the teacher is equipped with a momentum-updated projection head to generate consistent representations. The teacher representations are cached in a fixed-size queue which serves as the only memory bank in CoCoRD and is significantly smaller than the entire training dataset. Additionally, a slow-moving student, implemented as a momentum-based moving average of the student, is built to facilitate contrastive learning. CoCoRD, which utilizes only one memory bank and much fewer negative keys, provides highly competitive results under typical teacher-student settings. On ImageNet, CoCoRD-distilled ResNet50 outperforms the teacher ResNet101 by 0.2% top-1 accuracy. Furthermore, in PASCAL VOC and COCO detection, the detectors whose backbones are initialized by CoCoRDdistilled models exhibit considerable performance improvements. 1 INTRODUCTION The remarkable performance of convolutional neural networks (CNNs) in various computer vision tasks, such as image recognition (He et al., 2016; Huang et al., 2017) and object detection (Girshick, 2015; Ren et al., 2015; Redmon & Farhadi, 2017), has triggered interest in employing these powerful models beyond benchmark datasets. However, the cutting-edge performance of CNNs is always accompanied by substantial computational costs and storage consumption. Early study has suggested that shallow feedforward networks can approximate arbitrary functions (Hornik et al., 1989). Numerous endeavors have been made to reduce computational overheads and storage burdens. Among those endeavors, Knowledge Distillation, a widely discussed topic, presents a potential solution by training a compact student model with knowledge provided by a cumbersome but well-trained teacher model. The majority of distillation methods induce the student to imitate the teacher representations (Zagoruyko & Komodakis, 2017; Park et al., 2019; Tian et al., 2020; Hinton et al., 2015; Chen et al., 2021b;c; Yim et al., 2017; Tung & Mori, 2019; Ahn et al., 2019). Although representations provide more learning information, the difficulty of defining appropriate metrics to align the student representations to the teacher ones challenges the distillation performance. Besides, failing to capture the dependencies between representation dimensions results in lame performance. To enhance performance, researchers attempt to distill structural knowledge by establishing connections between knowledge distillation and contrastive learning (Tian et al., 2020; Chen et al., 2021b). To efficiently retrieve representations of negative samples for contrastive learning, memory banks cache representations which are updated in a momentum manner, as shown in Fig. 1. However, the student is optimized sharply by the training optimizer. 
The student representations in the memory bank are inconsistent because the updated representations differ from those not updated in that iteration. Therefore, the student can easily contrast the positive and negative samples, keeping the student from learning good features. The storage size of memory bank is another factor of concern when applying contrastive-learning-based distillation methods. As in (Tian et al., 2020; Chen et al., 2021b), there are two memory banks and each of them contains representations of all training images, leading to massive GPU memory usage on large-scale datasets. Motivated by the discussion above, we propose Contrastive Consistent Representation Distillation (CoCoRD) as a novel way of distilling consistent representations with one fixed-size memory bank. Specifically, CoCoRD is composed of four major components, as shown in Fig. 2: (1) a fixed-size queue which is referred to as the teacher dictionary, (2) a teacher, (3) a student, and (4) a slow-moving student. From a perspective of considering contrastive learning as a dictionary look-up task, the teacher dictionary is regarded as the memory bank, where all the representations serve as the negative keys. The encoded representations of the current batch from the teacher are enqueued. Once the queue is full of representations, the oldest ones are dequeued. By introducing a queue, the size of the memory bank is decoupled from dataset size and batch size, allowing it to be considerably smaller than dataset size and larger than the commonly-used batch size. The student is followed by a projection head, which maps the student features to a representation space. The teacher projection head is initialized the same as the student one and is a momentum moving average of the student projection head if the teacher and the student have the same feature dimension; otherwise, the teacher projection head is randomly initialized and not updated. Since the contrast through the teacher dictionary is to draw distinctions on an instance level, the cached teacher representations which share the same class label as the student ones are mistakenly treated as negative keys, resulting in noise in the dictionary. To alleviate the impact of the noise, a slow-moving student, implemented as a momentum moving average of the student, is proposed to pull together anchor representations and class-positive ones. As shown in Fig. 2, with a momentum-updated projection head, the slow-moving student projects a data augmentation version of the anchor image to the representation space, which serves as the instance-negative but class-positive key. The main contributions are listed as follows: • We utilize only one lightweight memory bank (teacher dictionary), where all the representations are treated as negative keys. We experimentally demonstrate that a miniature teacher dictionary with much fewer negative keys can be sufficient for contrastive learning in knowledge distillation. • We equip the well-trained teacher with a momentum-updated projection head to provide consistent representations for the teacher dictionary. Besides, a slow-moving student provides class-positive representations to alleviate the impact of the potential noise in the teacher dictionary. • We verify the effectiveness of CoCoRD by achieving the state-of-the-art performance in 11 out of 13 student-teacher combinations in terms of model compression. On ImageNet, the CoCoRDdistilled ResNet50 can outperform the teacher ResNet101 by 0.2% top-1 accuracy. 
Moreover, we initialize the backbones in object detection with CoCoRD-distilled weights and observe considerable performance improvements over the counterparts that the vanilla students initialize. 2 RELATED WORK 2.1 KNOWLEDGE DISTILLATION Hinton et al. (Hinton et al., 2015) first propose distilling the softened logits from the teacher to the student. After the representative work (Hinton et al., 2015), various knowledge distillation methods (Wang & Li, 2021; Song et al., 2021; Passban et al., 2020; Chen et al., 2021a;c) aim to distill more informative knowledge via intermediate features. Among them, Passban et al. (Passban et al., 2020) fuse all teacher information to avoid the loss of significant knowledge. Chen et al. (Chen et al., 2021a) propose semantic calibration based on the attention mechanism for adaptively assigning cross-layer knowledge. Chen et al. (Chen et al., 2021c) introduce a novel framework via knowledge review in which the knowledge of multiple layers in the teacher can be distilled for supervising one layer of the student. However, the methods mentioned above have difficulty in defining appropriate metrics to measure the distance between the student representations and the counterparts from the teacher. There are a few recent works exploiting the dependencies between representation dimensions based on contrastive learning (Tian et al., 2020; Chen et al., 2021b) for boosting the distillation performance. In particular, Tian et al. (Tian et al., 2020) formulate capturing structural knowledge as contrastive learning and maximize the lower bound of mutual information between the teacher and the student. Chen et al. (Chen et al., 2021b) leverage primal and dual forms of Wasserstein distance, where the dual form yields a contrastive learning objective. In summary, the core of knowledge distillation lies in the definition of knowledge and the way the knowledge is distilled. 2.2 CONTRASTIVE LEANING The main goal of contrastive learning is to learn a representation space where anchor representations stay close to the representations of the positive samples and distant from those of the negative samples. Contrastive learning is a powerful approach in self-supervised learning. To learn powerful feature representations in an unsupervised fashion, Wu et al. (Wu et al., 2018) consider each instance as a distinct class of its own and use noise contrastive estimation (NCE) to tackle the computational challenges. Contrastive learning is first combined with knowledge distillation by CRD (Tian et al., 2020), which aims at exploring structural knowledge. In addition to CRD (Tian et al., 2020), WCoRD (Chen et al., 2021b) combines LCKT (Chen et al., 2021b) and GCKT (Chen et al., 2021b) based on Wasserstein dependency measure in contrastive learning (Ozair et al., 2019). However, the memory banks in CRD and WCoRD contain representations of all the training images, which bring about storage challenges on large-scale datasets. Besides, momentum updates to representations can also lead to inconsistent representations that negatively affect the distillation performance. From a perspective of considering contrastive learning as a dictionary lookup task, we implement the memory bank as a first-in-first-out queue where all included representations serve as negative keys. 3 METHOD The key idea of combining knowledge distillation with contrastive learning is straightforward. 
With knowledge distillation, a proficient teacher can provide consistent representations that are beneficial for contrastive learning. With contrastive learning, the student can obtain powerful features whose representations are close to the positive teacher representations and distant from the negative ones in a representation space. Contrastive learning can be generally formulated as a dictionary look-up task. Given a query q and a dictionary K with N keys: K = {k1, · · · , kN}, contrastive learning matches the query q to the positive key k+and pushes q away from the negative keys cached in K. 3.1 CONTRAST AS LOOKING UP IN THE TEACHER DICTIONARY In CoCoRD, the negative keys are encoded by the teacher and cached in a fixed-size queue which is referred to as the teacher dictionary. Given an input image x, two views of x under random data augmentations form a positive pair (a query and a positive sample), which is encoded in each iteration. We define the input to the student S as the query xs and the input to the teacher T as the positive sample xt. The outputs at the penultimate layer (before the last fully-connected layer) are projected to a representation space by a projection head. For simplicity of notation, the student nested functions up to the penultimate layer are denoted as gs(·) and the student projection head is denoted as fps (·). Therefore, the query representations qs and the positive keys k+t are given by: qs = f p s (gs(xs)), k + t = f p t (gt(xt)), (1) where gt(·) denotes the teacher nested functions up the penultimate layer and fpt (·) is the teacher projection head. fps and f p t are two-layer perceptrons. Besides, the cached i-th negative key in the queue is denoted as k-ti which is produced the same way as k + t but from the preceding batches. The fixed-size teacher dictionary K={k-t1 , · · · , k - tN } contains N negative keys. The representations of the current batch are added to the queue, while the oldest representations are removed from the queue. The contrastive loss. The value of the contrastive loss should be small when qs is close to k+t and distant from k-ti in the representation space. To meet this condition, we consider the wildly-used and effective contrastive loss function: InfoNCE (Van den Oord et al., 2018): Lctr = − log exp(qs · k+t /τ) exp(qs · k+t /τ) + ∑N i=1 exp(qs · k-ti/τ) , (2) where τ is a hyper-parameter that controls the concentration level. N is the size of the teacher dictionary. Lctr can be intuitively interpreted as the log loss of a softmax-based (N+1)-way classification task. In our case, we attempt to classify qs as k+t in the scope of {k+t } ∪ {k-t1 , k - t2 , · · · , k - tN }. The consistency in the teacher dictionary. The introduction of the fixed-size teacher dictionary decouples the size of the memory bank from batch size and dataset size. The teacher dictionary can be larger than the commonly-used batch and smaller than the dataset. Therefore, we bypass huge batch, which aims at providing in-batch negative samples. Besides, we can avoid sampling inconsistent negative keys from the memory bank. The core to learning good features by contrastive learning lies in the rich and challenging negative representations. In CRD (Tian et al., 2020) and WCoRD (Chen et al., 2021b), the negative keys are momentum updated. The momentum update to the negative keys brings about two main issues: (1) the negative keys were updated only when they were last processed, and (2) the update interval for each negative key can be highly different. 
The consistency in the teacher dictionary. The introduction of the fixed-size teacher dictionary decouples the size of the memory bank from the batch size and the dataset size. The teacher dictionary can be larger than the commonly-used batch and smaller than the dataset. Therefore, we bypass the need for huge batches, whose purpose is to provide in-batch negative samples. Besides, we avoid sampling inconsistent negative keys from the memory bank. The key to learning good features by contrastive learning lies in rich and challenging negative representations. In CRD (Tian et al., 2020) and WCoRD (Chen et al., 2021b), the negative keys are momentum-updated. The momentum update to the negative keys brings about two main issues: (1) each negative key is updated only when its sample was last processed, and (2) the update interval can differ greatly from key to key. These two issues cause inconsistent negative keys.

To provide consistent negative keys, we instead update the teacher projection head in a momentum manner. Since $g_t$ is frozen in the distillation framework, momentum-updating the teacher projection head results in consistent negative keys. Specifically, denoting the parameters of $f^p_t$ as $\omega_t$ and those of $f^p_s$ as $\omega_s$, we update $\omega_t$ as:

$$\omega_t \leftarrow m_c \omega_t + (1 - m_c)\,\omega_s, \qquad (3)$$

where $m_c \in [0, 1]$ is a momentum coefficient which adjusts the update smoothness. Since $\omega_s$ is optimized by the training optimizer, the momentum update of $\omega_t$ makes the teacher projection head $f^p_t$ progress more smoothly than the student projection head $f^p_s$. Therefore, the difference between the teacher projection heads at different iterations can be made small, and as a result the negative keys encoded at different iterations are consistent. Besides, the teacher dictionary itself is gradually updated: the representations of the current batch are enqueued, while the representations of the oldest batch are dequeued. This gradual replacement is beneficial for maintaining the consistency of the queue, since the oldest representations are the least consistent with the current ones.

3.2 REPRESENTATIONS OF ONE CLASS FLOCK TOGETHER

As shown in Eq. 2, classifying $q_s$ as $k^+_t$ in the scope of $\{k^+_t, k^-_{t_1}, k^-_{t_2}, \cdots, k^-_{t_N}\}$ is a discrimination on an instance level. However, a key $k^-_{t_i}$ that shares the same class label as $q_s$ should be close to $q_s$ in the representation space; simply rejecting such $k^-_{t_i}$ does not help the student learn good features. To bring $q_s$ closer to its instance-negative but class-positive keys, we introduce a slow-moving student whose nested functions up to the penultimate layer are denoted as $g'_s(\cdot)$. Specifically, the slow-moving student is implemented as a momentum moving average of the student. The slow-moving student is also accompanied by a projection head $f'_s$, which is likewise updated in a momentum manner. Denoting the parameters of $g_s$ as $\theta_s$, the parameters of $g'_s$ as $\theta'_s$ and those of $f'_s$ as $w'_s$, we update $\theta'_s$ and $w'_s$ by:

$$\theta'_s \leftarrow m_r \theta'_s + (1 - m_r)\,\theta_s, \qquad w'_s \leftarrow m_r w'_s + (1 - m_r)\,w_s, \qquad (4)$$

where $m_r \in [0, 1]$ is another momentum coefficient and $w_s$ denotes the parameters of the student projection head $f^p_s$. Therefore, the instance-negative but class-positive keys $q^+_{s-}$ can be obtained by:

$$q^+_{s-} = f'_s(g'_s(x'_s)) \quad \text{(the instance-negative but class-positive keys)}, \qquad (5)$$

where $x'_s$ is another view of $x$ under the random data augmentations. Instead of directly narrowing down the distance between $q_s$ and $q^+_{s-}$, we use $q_s$ to predict $q^+_{s-}$, which softens the constraint. Formally, a predictor $h_s$, implemented as a two-layer perceptron, is proposed to produce the prediction $p_s \triangleq h_s(q_s)$. The loss is simply defined as the mean squared error between the $\ell_2$-normalized $p_s$ and $q^+_{s-}$:

$$\mathcal{L}_{pred} = \left\| \frac{p_s}{\|p_s\|_2} - \frac{q^+_{s-}}{\|q^+_{s-}\|_2} \right\|_2^2 = 2 - 2\left\langle \frac{p_s}{\|p_s\|_2}, \frac{q^+_{s-}}{\|q^+_{s-}\|_2} \right\rangle. \qquad (6)$$

Furthermore, we symmetrize the loss by feeding $x'_s$ to the student and $x_s$ to the slow-moving student to compute $\tilde{\mathcal{L}}_{pred}$. Formally, denoting the representations output from $x'_s$ by the student as $\tilde{q}_s$ and the corresponding instance-negative but class-positive keys as $\tilde{q}^+_{s-}$, we compute $\tilde{\mathcal{L}}_{pred}$ by:

$$\tilde{\mathcal{L}}_{pred} = \left\| \frac{\tilde{p}_s}{\|\tilde{p}_s\|_2} - \frac{\tilde{q}^+_{s-}}{\|\tilde{q}^+_{s-}\|_2} \right\|_2^2 = 2 - 2\left\langle \frac{\tilde{p}_s}{\|\tilde{p}_s\|_2}, \frac{\tilde{q}^+_{s-}}{\|\tilde{q}^+_{s-}\|_2} \right\rangle. \qquad (7)$$

Here $\tilde{p}_s \triangleq h_s(\tilde{q}_s)$, $\tilde{q}^+_{s-} \triangleq f'_s(g'_s(x_s))$ and $\tilde{q}_s \triangleq f^p_s(g_s(x'_s))$. Note that $q^+_{s-}$ and $\tilde{q}^+_{s-}$ are detached from the current computational graph during the distillation process.
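The mechanics of Eqs. 3-7 are compact enough to sketch in a few lines. The following is an illustrative PyTorch-style fragment under our own naming assumptions (it is not the authors' implementation):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(target_module, source_module, m):
    """Momentum (EMA) update of Eqs. 3 and 4: target <- m*target + (1-m)*source."""
    for p_t, p_s in zip(target_module.parameters(), source_module.parameters()):
        p_t.data.mul_(m).add_(p_s.data, alpha=1.0 - m)

def prediction_loss(p, z):
    """Eq. 6: MSE between the l2-normalized prediction p and the detached target z."""
    p = F.normalize(p, dim=1)
    z = F.normalize(z.detach(), dim=1)  # targets are cut from the graph (Sec. 3.2)
    return (2.0 - 2.0 * (p * z).sum(dim=1)).mean()

# Per-iteration usage (schematic):
# ema_update(teacher_proj_head, student_proj_head, m_c)   # Eq. 3
# ema_update(slow_student, student, m_r)                  # Eq. 4
# ema_update(slow_proj_head, student_proj_head, m_r)      # Eq. 4
# loss = prediction_loss(p_s, q_pos) + prediction_loss(p_s_tilde, q_pos_tilde)  # Eqs. 6-7
```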
3.3 TRAINING THE STUDENT

With the slow-moving student and the teacher, Eq. 2, Eq. 6 and Eq. 7 aim at assisting the student to effectively learn powerful features through contrastive learning. The student still needs to learn features from the training data. For image classification, the task-specific loss is defined as the cross-entropy loss. Overall, the total loss $\mathcal{L}_{total}$ can be formulated as:

$$\mathcal{L}_{total} = \lambda_{ctr}\mathcal{L}_{ctr} + \lambda_{pred}(\mathcal{L}_{pred} + \tilde{\mathcal{L}}_{pred}) + \lambda_{cls}\mathcal{L}_{cls}, \qquad (8)$$

where $\lambda_{ctr}$, $\lambda_{pred}$ and $\lambda_{cls}$ are three balancing factors. $\mathcal{L}_{cls} \triangleq H(y, y_s)$, where $H(\cdot)$ refers to the standard cross-entropy, $y$ denotes the one-hot label and $y_s$ is the student output.
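Putting the pieces together, the total objective of Eq. 8 might be assembled in training code as follows. This is again an illustrative sketch with hypothetical names; the default weights reflect the CIFAR100 setting reported below ($\lambda_{ctr}$=1, $\lambda_{cls}$=1, $\lambda_{pred}$=4):

```python
import torch.nn.functional as F

def total_loss(logits_s, labels, l_ctr, l_pred, l_pred_sym,
               lam_ctr=1.0, lam_pred=4.0, lam_cls=1.0):
    """Eq. 8: weighted sum of contrastive, prediction and classification losses."""
    l_cls = F.cross_entropy(logits_s, labels)
    return lam_ctr * l_ctr + lam_pred * (l_pred + l_pred_sym) + lam_cls * l_cls
```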
4 EXPERIMENTS

We validate the effectiveness of CoCoRD in improving student performance. The student-teacher combinations are divided into two main categories: (1) students that share the same architecture style with their teachers, and (2) students whose architectures differ from those of their teachers.

Datasets. To investigate the performance improvements of students, we employ two benchmarks: (1) CIFAR100 (Krizhevsky et al., 2009) and (2) ImageNet-1K (Russakovsky et al., 2015). CIFAR100 has 100 classes, with 500 training images and 100 validation images per class. ImageNet-1K, a large-scale dataset, contains 1000 classes and provides 1.28 million training images and 50K validation images. To test the transferability of the features that students learn via CoCoRD, we utilize two more datasets: (1) STL-10 (Coates et al., 2011) and (2) TinyImageNet (Chrabaszcz et al., 2017). We only use the 5K labeled training images and 8K validation images from the 10 classes in STL-10. TinyImageNet consists of 200 classes, each with 500 training images and 50 validation images.

4.1 EXPERIMENTS ON CIFAR100

We experiment on CIFAR100 with 13 student-teacher combinations in total, 7 of which share the same architecture style, while the remaining 6 pair different architectures. (On CIFAR100, $\lambda_{ctr}$=1, $\lambda_{cls}$=1, $\lambda_{pred}$=4; more training details and data augmentations are provided in the supplementary materials.) Table 1 focuses on student-teacher combinations with the same architecture style, while Table 2 provides experimental results for combinations with different architectures. As can be observed in both tables, KD (Hinton et al., 2015), a simple yet effective method, provides a strong baseline. CoCoRD consistently outperforms KD and achieves highly competitive performance compared with other state-of-the-art methods. Note that $m_c$ in Eq. 3 is set to 1 for the WRN-40-2/WRN-40-1 combination. Although the teacher projection head attached to WRN-40-2 is only randomly initialized and not updated during the distillation process, CoCoRD still achieves the state-of-the-art result. This implies that the features the well-trained teacher provides at the penultimate layer are already discriminative; they are simply projected into the representation space by the frozen teacher projection head $f^p_t$. Based on this observation, the teacher projection heads in Table 2 are randomly initialized, since differences in architecture style are very likely to produce different input shapes for the projection heads. Note that it is precisely the projection heads that allow CoCoRD to perform distillation in the cross-architecture setting: they project penultimate-layer features of different shapes into one representation space, where the contrastive loss of Eq. 2 can easily be defined. As shown in Table 2, CoCoRD is highly effective for combinations of different architectures. Even if the teacher projection head is not updated, CoCoRD consistently achieves the best performance among the methods not combined with another method. In particular, for the resnet-32x4/ShuffleNetV2 pair, CoCoRD reaches 77.28% Top-1 accuracy, which is 1.5% higher than the second best, GCKT (75.78%). On the other hand, methods based on intermediate features perform poorly with different-architecture combinations. This observation suggests that CoCoRD largely relaxes the requirement for significant similarity between students and teachers. We conjecture that knowledge distillation based on features at the penultimate layer can avoid conflicts between the different inductive biases that different models exploit. This indicates that the proposed CoCoRD is more generally applicable to student-teacher combinations with different architectures.

Limitations. In Table 1, CoCoRD+KD does not bring further performance improvements over CoCoRD. The same phenomenon can be observed in Table 2: MobileNetV2 (Sandler et al., 2018) does not obtain additional performance improvements with CoCoRD+KD. These phenomena indicate that further investigation is needed into combining CoCoRD with other knowledge distillation methods, and that extremely lightweight student models remain challenging for knowledge distillation.

Linear probing. Following CRD (Tian et al., 2020), we employ linear probing to evaluate the transferability of the student features. We freeze the student and train a linear classifier on its global-average-pooling features to perform 10-way classification on STL10 and 200-way classification on TinyImageNet. As shown in Table 3, CoCoRD exhibits strong transferability and outperforms the second best (CRD+KD) by a large margin on both datasets (2.04% improvement on STL10 and 2.32% on TinyImageNet). The proposed CoCoRD, which shows a negligible performance drop on CIFAR100 compared with the teacher (see Table 1), transfers better than the teacher (5.32% improvement on STL10 and 6.01% on TinyImageNet). The linear probing experiment indicates that CoCoRD-distilled models have better generalization ability.

4.2 EXPERIMENTS ON IMAGENET

To investigate the scalability of CoCoRD to large-scale datasets, we employ ResNet-18 and ResNet-34 as the student-teacher combination and perform experiments on ImageNet-1K. For a fair comparison, we follow the standard PyTorch ImageNet training practice, except that we train for 100 epochs, as CRD and WCoRD do. We also use the PyTorch-released ResNet-34/18 as our teacher/student. On ImageNet, we set $\lambda_{ctr}$=1, $\lambda_{cls}$=1, $\lambda_{pred}$=4 and only calculate $\mathcal{L}_{pred}$ (without the symmetrized term). The Top-1 and Top-5 error rates of different distillation methods are provided in Table 4 (the lower, the better). The results in Table 4 show that the proposed CoCoRD achieves the best performance on the large-scale ImageNet. The relative improvement of CoCoRD over WCoRD (Chen et al., 2021b) on Top-1 error is 14.45%, and the relative improvement of CoCoRD over CRD (Tian et al., 2020) on Top-1 error is 40.43%. Both improvements validate the scalability of the proposed CoCoRD to large-scale datasets.

4.3 ABLATION STUDY

4.3.1 STUDY OF ENCODER COMBINATIONS

By default, we use the teacher to generate representations for contrastive learning, and the slow-moving student is employed to produce representations of another view of the student input. To investigate how representation quality affects distillation performance, we utilize different models to provide those representations.
For clarity, the model that generates the dictionary-cached representations is referred to as the contrastive encoder, and the model that produces the instance-negative but class-positive representations is referred to as the cognate encoder. Results are reported in Table 5. Comparing options A (the default option) and B, we find that leveraging the pre-trained teacher to provide quality representations for contrastive learning is more beneficial for distillation. Besides, removing the cognate encoder and setting $\lambda_{pred}$ to zero (option E) leads to poor performance, suggesting the cognate encoder can alleviate the adverse impact of the potential noise. If we remove the contrastive encoder and still use the dictionary with the cognate encoder (option F), the distillation process fails. The results in Table 5 support the effectiveness of each encoder in CoCoRD.

4.3.2 STUDY OF MOMENTUM

As shown in Eqs. 3 and 4, $m_c$ controls the progressing speed of the teacher projection head $f^p_t$, while $m_r$ manages the speed of the slow-moving student and its projection head. To investigate the impact of momentum, we employ resnet110 as the teacher to train resnet32 with different $m_c$ and $m_r$. The results are reported in Table 6. Both when $m_c$=$m_r$=0 and when $m_c$=$m_r$=1, CoCoRD improves the student performance; its effectiveness in both extreme cases implies CoCoRD is robust. Besides, with $m_r$ fixed, a large value of $m_c$ (e.g. 0.99 or 0.999) works much better than $m_c$=0, suggesting that consistent representations in the teacher dictionary are beneficial for distillation.

4.3.3 STUDY OF HYPER-PARAMETERS

The temperature $\tau$. The value of $\tau$ in Eq. 2 varies from 0.07 to 0.11. As shown in Figure 3(a), CoCoRD is sensitive to $\tau$: both extremely high and extremely low temperatures lead to sub-optimal performance. As suggested in CRD (Tian et al., 2020), we set $\tau$ to 0.1 for experiments on CIFAR100, while $\tau$ is set to 0.07 on ImageNet. We suggest tuning the value of $\tau$ based on the classification difficulty.

The size of the teacher dictionary. The number of negative keys is determined by the teacher dictionary size $N$. To investigate the effects of the teacher dictionary size, we validate various values of $N$. As shown in Figure 3(b), an extremely small teacher dictionary provides insufficient negative keys, leading to sub-optimal performance. However, an extremely large teacher dictionary can introduce noise, which adversely affects distillation performance. Based on our experiments, $N$=2048 suffices on CIFAR100, while $N$=65536 is used on ImageNet. Note that the teacher dictionary in CoCoRD is significantly smaller than the memory banks in CRD (Tian et al., 2020) and WCoRD (Chen et al., 2021b), which is more economical for large-scale datasets.

The balancing factors. We conduct experiments on CIFAR100 to investigate the effects of the three balancing factors $\lambda_{ctr}$, $\lambda_{cls}$ and $\lambda_{pred}$, using resnet32/resnet110 as the student-teacher combination. For these experiments, we set $\tau$=0.1, $N$=2048, $m_c$=0.999 and $m_r$=0.9. In Table 7, "✗" denotes that the balancing factor is set to 0 and "✓" means it is set to the corresponding value provided in the second row. Details of the simple grid search for each balancing factor can be found in the supplementary material. As we can see from Table 7, all components in CoCoRD are essential for achieving high distillation performance.
When $\lambda_{ctr}$ is set to 0, there is a serious performance drop, which indicates that contrasting the student representations with the negative keys in the teacher dictionary is necessary for improving student performance. Moreover, by comparing the result for $\lambda_{pred}$=0 with the result for $\lambda_{pred}$=4, we can see that the slow-moving student reduces the negative effects of the noise in the teacher dictionary.

4.4 TRANSFER LEARNING

We further validate the feature quality of CoCoRD-distilled models by transferring the model weights to object detection tasks, namely PASCAL VOC (Everingham et al., 2010) and COCO detection (Lin et al., 2014). We fine-tune the pre-trained models in an end-to-end manner on the target datasets. The detector for PASCAL VOC is Faster R-CNN (Ren et al., 2015) with an R50-C4 backbone. For COCO object detection, the model is Mask R-CNN (He et al., 2017) with the R50-C4 backbone. Note that the CoCoRD-distilled ResNet50 outperforms the teacher ResNet101 by 0.2% top-1 accuracy on classification. As shown in Table 8, the CoCoRD-initialized detectors exhibit better performance than the student-initialized and CRD-initialized counterparts. This valid reuse of model weights further demonstrates the transferability of CoCoRD-distilled features.

5 CONCLUSION

In this paper, we propose a contrastive-learning-based knowledge distillation method named Contrastive Consistent Representation Distillation. From the perspective of regarding contrastive learning as a dictionary look-up task, we build a fixed-size dictionary to cache consistent teacher representations. Besides, to alleviate the adverse impact of the potential noise in the teacher dictionary, we employ a slow-moving student, implemented as a momentum-based moving average of the student, to provide instance-negative but class-positive targets. CoCoRD does not employ the entire dataset as the memory bank, which is economical for large-scale datasets. Extensive experiments demonstrate that CoCoRD, which utilizes fewer negative keys, can boost the performance of students on diverse image classification datasets. Additionally, models distilled by CoCoRD on ImageNet classification can efficiently improve object detection performance on PASCAL VOC and COCO.

A APPENDIX

A.1 QUANTITATIVE RESULTS ON THE ACHIEVED SPEED-UP, MEMORY REDUCTION AND OTHERS

In the following three tables, we provide quantitative results on the achieved speed-up, the memory cost reduction, and other quantitative information about the teacher/student (T/S) combinations used on CIFAR100 (in Tables 1 and 2) and those used on ImageNet (Tables 4 and 8). The results are measured with an Intel Core i7-8700 CPU on the Ubuntu 20.04 operating system, and the memory cost is measured by the PyTorch Profiler in a forward pass. Additionally, we compare the size of the teacher dictionary in the proposed CoCoRD with the size of the memory banks in CRD. Note that the keys in the CRD memory banks are only 128-d, while the keys in the proposed CoCoRD teacher dictionary are 2048-d. Even with the higher dimension of the stored keys, CoCoRD is still more storage efficient.
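For readers who want to reproduce this kind of measurement, a minimal sketch of profiling forward-pass memory with the PyTorch profiler might look as follows; the model choice and table options are our assumptions, not the authors' exact script:

```python
import torch
import torchvision
from torch.profiler import profile, ProfilerActivity

# Measure CPU memory allocated during a single forward pass of a student model.
model = torchvision.models.resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224)

with profile(activities=[ProfilerActivity.CPU], profile_memory=True) as prof:
    with torch.no_grad():
        model(x)

print(prof.key_averages().table(sort_by="cpu_memory_usage", row_limit=5))
```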
A.2 THEORETICAL STUDY

Given two deep neural networks, a teacher $f^T$ and a student $f^S$, let $x$ be the network input. We denote the representations at the penultimate layer as $f^T(x)$ and $f^S(x)$, respectively. We would like to bring $f^S(x_i)$ and $f^T(x_i)$ close while pushing apart $f^S(x_i)$ and $f^T(x_j)$ ($x_i$ and $x_j$ denote different training samples). For clear notation, we define variables $S$ and $T$ for the student and teacher representations of the data, respectively:

$$x \sim p(x); \qquad S = f^S(x); \qquad T = f^T(x).$$

Let us define a distribution $q$ with a latent variable $C$, which decides whether a tuple $(f^S(x_i), f^T(x_j))$ is drawn from the joint distribution $p(T, S)$ (when $C=1$) or from the product of marginal distributions $p(T)p(S)$ (when $C=0$):

$$q(T, S \mid C=1) = p(T, S), \qquad q(T, S \mid C=0) = p(T)p(S).$$

Suppose we are given 1 congruent pair drawn from the joint distribution (i.e. the same input provided to $T$ and $S$) for every $N$ incongruent pairs drawn from the product of marginals (independent, random inputs provided to $T$ and $S$). Then the priors on the latent $C$ are:

$$q(C=1) = \frac{1}{N+1}, \qquad q(C=0) = \frac{N}{N+1}.$$

By Bayes' rule and simple manipulations, the posterior for $C=1$ is given by:

$$q(C=1 \mid T, S) = \frac{q(T, S \mid C=1)\,q(C=1)}{q(T, S \mid C=0)\,q(C=0) + q(T, S \mid C=1)\,q(C=1)} = \frac{p(T, S)}{p(T, S) + N\,p(T)p(S)}.$$

We can observe a connection with mutual information:

$$\log q(C=1 \mid T, S) = -\log\left(1 + N\,\frac{p(T)p(S)}{p(T, S)}\right) \le -\log(N) + \log\frac{p(T, S)}{p(T)p(S)}.$$

Taking the expectation on both sides w.r.t. $p(T, S)$ and rearranging gives us:

$$I(T; S) \ge \log(N) + \mathbb{E}_{q(T, S \mid C=1)}\left[\log q(C=1 \mid T, S)\right],$$

where $I(T; S)$ is the mutual information between the distributions of the teacher and student representations. Though we do not know the true distribution $q(C=1 \mid T, S)$, a neural network can be used to estimate whether a pair comes from the joint distribution or the marginals. By maximizing the KL divergence between the joint distribution $p(T, S)$ and the product of marginal distributions $p(T)p(S)$, we can maximize the mutual information between the student representations and the teacher representations.
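As a toy numerical sanity check of this bound (our own illustration, not part of the paper's experiments), one can estimate both sides for a bivariate Gaussian, where the mutual information is known in closed form:

```python
import numpy as np

# Monte-Carlo check of I(T;S) >= log N + E[log q(C=1|T,S)] for jointly
# Gaussian (T, S) with correlation rho; densities are known analytically.
rho, N, M = 0.8, 64, 200_000
cov = np.array([[1.0, rho], [rho, 1.0]])
ts = np.random.multivariate_normal([0.0, 0.0], cov, size=M)
t, s = ts[:, 0], ts[:, 1]

# log p(T,S) and log p(T)p(S) up to shared constants, which cancel in the ratio
log_joint = -0.5 * (t**2 - 2 * rho * t * s + s**2) / (1 - rho**2) - 0.5 * np.log(1 - rho**2)
log_marg = -0.5 * (t**2 + s**2)
log_ratio = log_joint - log_marg

bound = np.log(N) + np.mean(-np.log1p(N * np.exp(-log_ratio)))
true_mi = -0.5 * np.log(1 - rho**2)  # closed-form I(T;S) for bivariate Gaussians
print(f"lower bound: {bound:.3f}  <=  true MI: {true_mi:.3f}")
```

The printed bound stays below the true mutual information and tightens as N grows, matching the inequality above.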
1. What are the key contributions and strengths of the paper regarding contrastive representation distillation?
2. What are the weaknesses and limitations of the proposed approach compared to prior works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What motivates the use of separate heads for the teacher and student models, and how does it improve performance?
5. Can you provide more explanation and justification for the choice of using a slow-moving student model in Section 3.2?
6. How does the notation used in Equation 1 refer to the input to the student model, and what is the meaning of the index r?
7. Why do inconsistent representations in the memory bank adversely affect the student model performance, and how does the proposed method mitigate this issue?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper introduces and mitigates some challenges encountered in contrastive representation distillation. In particular, the authors point out that the existing methods (CRD and WCoRD) keep track of a memory bank for the negatives that often contains inconsistent representations, which can adversely affect student model performance. To that end, the paper proposes using separate heads for the teacher and student models, where the teacher head is updated from the student head via momentum, allowing the teacher representations to stay consistent across iterations. As a side benefit, the proposed model requires a smaller teacher dictionary than previous works. The paper also proposes reducing the Euclidean distance between samples referred to as "instance-negative but class-positive" samples using an additional slow-moving student model. Several experiments show highly competitive results in teacher-student settings.

Strengths And Weaknesses
Strengths
- The proposed method is evaluated on several sets of vision tasks including classification, transferability, and object detection. The performance of the method looks promising, with highly competitive results.
- The effectiveness of the proposed method is shown in the experiments and ablations. The paper also contains comparisons with relevant baselines from recent years.
- The paper generally reads well and is clear for the most part.

Weaknesses
- While the results seem good, I find the novelty in this work limited. Most of the ideas (like the momentum update of parameters) already exist and are known in the self-supervised learning and contrastive learning literature (though these ideas have not been applied in knowledge distillation).
- The motivation for Section 3.2 is unclear. Mainly, the choice of using a slow-moving student model is not motivated or discussed. Furthermore, it is unclear how using a new view of the same input image ensures q_r becomes closer to the "instance-negative but class-positive keys".

Other minor comments:
- I find the notation to be a little unclear. In Equation 1, what does the index r in q_r refer to? It has not been introduced before. Is it supposed to be s instead, since the input to the student model was x_s?
- It was mentioned in the introduction that when the representations are inconsistent, "the student can easily contrast positive and negative samples"; why is that?
- In the introduction, Page 1 (last paragraph, 3rd line): "The student representations in the memory" — shouldn't it be "teacher representations" instead?

Clarity, Quality, Novelty And Reproducibility
Clarity: Overall, the paper is clear and easy to read, with a few exceptions, particularly with respect to Section 3.2 and the notations used.
Quality & Novelty: The quality in terms of presentation is good. As mentioned above, the novelty is limited with respect to the proposed methodology.
Reproducibility: The procedure of the proposed method is adequately described. The code was not provided as part of the supplementary.
ICLR
Title
Contrastive Consistent Representation Distillation

Abstract
The combination of knowledge distillation with contrastive learning has great potential to distill structural knowledge. Most contrastive-learning-based distillation methods treat the entire training dataset as the memory bank and maintain two memory banks, one for the student and one for the teacher. Besides, the representations in the two memory banks are updated in a momentum manner, leading to representation inconsistency. In this work, we propose Contrastive Consistent Representation Distillation (CoCoRD) to provide consistent representations for efficient contrastive-learning-based distillation. Instead of momentum-updating the cached representations, CoCoRD updates the encoders in a momentum manner. Specifically, the teacher is equipped with a momentum-updated projection head to generate consistent representations. The teacher representations are cached in a fixed-size queue, which serves as the only memory bank in CoCoRD and is significantly smaller than the entire training dataset. Additionally, a slow-moving student, implemented as a momentum-based moving average of the student, is built to facilitate contrastive learning. CoCoRD, which utilizes only one memory bank and much fewer negative keys, provides highly competitive results under typical teacher-student settings. On ImageNet, CoCoRD-distilled ResNet50 outperforms the teacher ResNet101 by 0.2% top-1 accuracy. Furthermore, in PASCAL VOC and COCO detection, detectors whose backbones are initialized by CoCoRD-distilled models exhibit considerable performance improvements.

1 INTRODUCTION
The remarkable performance of convolutional neural networks (CNNs) in various computer vision tasks, such as image recognition (He et al., 2016; Huang et al., 2017) and object detection (Girshick, 2015; Ren et al., 2015; Redmon & Farhadi, 2017), has triggered interest in employing these powerful models beyond benchmark datasets. However, the cutting-edge performance of CNNs is always accompanied by substantial computational costs and storage consumption. Early work suggested that shallow feedforward networks can approximate arbitrary functions (Hornik et al., 1989), and numerous endeavors have been made to reduce computational overheads and storage burdens. Among those endeavors, knowledge distillation, a widely discussed topic, presents a potential solution by training a compact student model with knowledge provided by a cumbersome but well-trained teacher model. The majority of distillation methods induce the student to imitate the teacher representations (Zagoruyko & Komodakis, 2017; Park et al., 2019; Tian et al., 2020; Hinton et al., 2015; Chen et al., 2021b;c; Yim et al., 2017; Tung & Mori, 2019; Ahn et al., 2019). Although representations provide more learning information, the difficulty of defining appropriate metrics for aligning the student representations with the teacher ones limits distillation performance. Besides, failing to capture the dependencies between representation dimensions results in inferior performance. To enhance performance, researchers attempt to distill structural knowledge by establishing connections between knowledge distillation and contrastive learning (Tian et al., 2020; Chen et al., 2021b). To efficiently retrieve representations of negative samples for contrastive learning, memory banks cache representations which are updated in a momentum manner, as shown in Fig. 1. However, the student is optimized sharply by the training optimizer.
The student representations in the memory bank are inconsistent because the representations updated in a given iteration differ from those not updated. Therefore, the student can easily contrast the positive and negative samples, which keeps the student from learning good features. The storage size of the memory bank is another concern when applying contrastive-learning-based distillation methods. As in (Tian et al., 2020; Chen et al., 2021b), there are two memory banks, each containing representations of all training images, leading to massive GPU memory usage on large-scale datasets.

Motivated by the discussion above, we propose Contrastive Consistent Representation Distillation (CoCoRD) as a novel way of distilling consistent representations with one fixed-size memory bank. Specifically, CoCoRD is composed of four major components, as shown in Fig. 2: (1) a fixed-size queue which is referred to as the teacher dictionary, (2) a teacher, (3) a student, and (4) a slow-moving student. From the perspective of considering contrastive learning as a dictionary look-up task, the teacher dictionary is regarded as the memory bank, where all the representations serve as negative keys. The encoded representations of the current batch from the teacher are enqueued; once the queue is full, the oldest representations are dequeued. By introducing a queue, the size of the memory bank is decoupled from the dataset size and the batch size, allowing it to be considerably smaller than the dataset and larger than the commonly-used batch. The student is followed by a projection head, which maps the student features to a representation space. The teacher projection head is initialized the same as the student one and is a momentum moving average of the student projection head if the teacher and the student have the same feature dimension; otherwise, the teacher projection head is randomly initialized and not updated. Since the contrast through the teacher dictionary draws distinctions on an instance level, cached teacher representations that share the same class label as the student ones are mistakenly treated as negative keys, resulting in noise in the dictionary. To alleviate the impact of this noise, a slow-moving student, implemented as a momentum moving average of the student, is proposed to pull together anchor representations and class-positive ones. As shown in Fig. 2, with a momentum-updated projection head, the slow-moving student projects a data-augmented version of the anchor image into the representation space, which serves as the instance-negative but class-positive key.

The main contributions are listed as follows:
• We utilize only one lightweight memory bank (the teacher dictionary), where all the representations are treated as negative keys. We experimentally demonstrate that a miniature teacher dictionary with much fewer negative keys can be sufficient for contrastive learning in knowledge distillation.
• We equip the well-trained teacher with a momentum-updated projection head to provide consistent representations for the teacher dictionary. Besides, a slow-moving student provides class-positive representations to alleviate the impact of the potential noise in the teacher dictionary.
• We verify the effectiveness of CoCoRD by achieving state-of-the-art performance in 11 out of 13 student-teacher combinations in terms of model compression. On ImageNet, the CoCoRD-distilled ResNet50 can outperform the teacher ResNet101 by 0.2% top-1 accuracy. Moreover, we initialize the backbones in object detection with CoCoRD-distilled weights and observe considerable performance improvements over counterparts initialized with the vanilla student weights.
1. What are the key contributions and novel aspects introduced by the paper in contrastive learning?
2. What are the strengths of the paper, particularly in addressing memory usage and feature recency issues in existing distillation methods?
3. Do you have any concerns or questions regarding the proposed method, its effectiveness, and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper presents a way of performing knowledge distillation using contrastive learning. The authors point out that existing distillation methods that use contrastive learning have two main issues: (i) they are memory intensive, as they need to store the representations of the whole dataset to construct negatives; (ii) the negatives which exist in the memory bank might not have been updated in a while, thereby potentially making the task of differentiating the positives (which are computed fresh in each iteration) from the negatives (in the bank) easier than it should be. To this end, the authors propose using a queue data structure instead for storing the negatives. This queue is updated with fresh representations from the teacher and can also be much smaller than the size of the whole training dataset. Empirically, this alternative way of performing contrastive distillation helps in improving the performance of the distilled student a bit, and also helps in certain transfer learning setups.

Strengths And Weaknesses
Strengths
The issue about memory usage is a valid point to raise. CRD [1] does indeed need a big memory bank to store the features of the whole training dataset, an issue which will get worse with a bigger dataset (e.g., ImageNet). This becomes particularly important when someone has to save the model and resume training; one needs to store not only the teacher's and student's weights but also the whole training data's feature set. The other main issue, which is about the negatives potentially being out of touch with the fresh features that the teacher is computing, could be important as well. I use the phrase "potentially" because it is not clear how big of an issue that really is in practice. The paper is well written, the issues are clearly explained, and the mathematical formulation is easy to follow.

Weaknesses
Even though the issues that the authors have raised pertaining to CRD are valid, in the sense that it would be nice if those did not exist, those are not issues which I feel stand out too much. As I said in the "Strengths" section, it is not clear how much effect it will have that the negatives sampled for a query were updated a while ago. The reason the concern raised above (the importance of recency in the feature bank) matters is that the results, especially on the most comprehensive evaluation setup CIFAR-100, don't show that big of an improvement over existing contrastive distillation setups like CRD. Sometimes they are even worse. But I think it is not just that the issue of "recency in feature banks" (in CRD) was not important enough, and hence there wasn't that much improvement brought about by the proposed method. I think that the newly proposed way of sampling negatives could potentially be not so ideal in certain situations. Here is how: the batch size for the CIFAR-100 setup is set to 64 (as per the supplementary) while the queue size is set to 2048. This means that a particular batch of features coming out of the teacher will persist in the queue for 2048/64 = 32 iterations. This means that for many different query images, the same set of negatives will be used (32 times) until they finally get out of the queue. Compare this to CRD, where for each query image the negatives are randomly drawn from the whole training set, thereby reducing the possibility of this issue.
I do not know how important this difference is in practice, but the fact that there is not a consistent increase in performance for CIFAR-100 might hint at a problem like this. In summary, the method that has been proposed is not a superset of CRD in terms of (theoretical) effectiveness. Overall, there is not a straightforward way to fix the issues that the authors have raised about CRD. They need to introduce an additional student network to counter some issues caused by the queue data structure, which leads to additional loss components that need to be tuned.

Clarity, Quality, Novelty And Reproducibility
The paper is quite clear in the issues raised and the proposed fix to those issues. I also think the authors have done a good job covering different kinds of experiments, e.g., image classification and transfer learning, even if I feel the results themselves are not that impressive. The authors have included sufficient details for someone to reproduce the experiments.
ICLR
Title
Contrastive Consistent Representation Distillation

Abstract
The combination of knowledge distillation with contrastive learning has great potential to distill structural knowledge. Most of the contrastive-learning-based distillation methods treat the entire training dataset as the memory bank and maintain two memory banks, one for the student and one for the teacher. Besides, the representations in the two memory banks are updated in a momentum manner, leading to representation inconsistency. In this work, we propose Contrastive Consistent Representation Distillation (CoCoRD) to provide consistent representations for efficient contrastive-learning-based distillation. Instead of momentum-updating the cached representations, CoCoRD updates the encoders in a momentum manner. Specifically, the teacher is equipped with a momentum-updated projection head to generate consistent representations. The teacher representations are cached in a fixed-size queue which serves as the only memory bank in CoCoRD and is significantly smaller than the entire training dataset. Additionally, a slow-moving student, implemented as a momentum-based moving average of the student, is built to facilitate contrastive learning. CoCoRD, which utilizes only one memory bank and much fewer negative keys, provides highly competitive results under typical teacher-student settings. On ImageNet, CoCoRD-distilled ResNet50 outperforms the teacher ResNet101 by 0.2% top-1 accuracy. Furthermore, in PASCAL VOC and COCO detection, the detectors whose backbones are initialized by CoCoRD-distilled models exhibit considerable performance improvements.

1 INTRODUCTION
The remarkable performance of convolutional neural networks (CNNs) in various computer vision tasks, such as image recognition (He et al., 2016; Huang et al., 2017) and object detection (Girshick, 2015; Ren et al., 2015; Redmon & Farhadi, 2017), has triggered interest in employing these powerful models beyond benchmark datasets. However, the cutting-edge performance of CNNs is always accompanied by substantial computational costs and storage consumption. Early studies have suggested that shallow feedforward networks can approximate arbitrary functions (Hornik et al., 1989). Numerous endeavors have been made to reduce computational overheads and storage burdens. Among those endeavors, Knowledge Distillation, a widely discussed topic, presents a potential solution by training a compact student model with knowledge provided by a cumbersome but well-trained teacher model. The majority of distillation methods induce the student to imitate the teacher representations (Zagoruyko & Komodakis, 2017; Park et al., 2019; Tian et al., 2020; Hinton et al., 2015; Chen et al., 2021b;c; Yim et al., 2017; Tung & Mori, 2019; Ahn et al., 2019). Although representations provide more learning information, the difficulty of defining appropriate metrics to align the student representations with the teacher ones limits the distillation performance. Besides, failing to capture the dependencies between representation dimensions results in weak performance. To enhance performance, researchers attempt to distill structural knowledge by establishing connections between knowledge distillation and contrastive learning (Tian et al., 2020; Chen et al., 2021b). To efficiently retrieve representations of negative samples for contrastive learning, memory banks cache representations which are updated in a momentum manner, as shown in Fig. 1. However, the student is optimized sharply by the training optimizer.
The student representations in the memory bank are therefore inconsistent, because the representations updated in an iteration differ from those not updated in that iteration. As a result, the student can easily contrast the positive and negative samples, which keeps the student from learning good features. The storage size of the memory bank is another factor of concern when applying contrastive-learning-based distillation methods. As in (Tian et al., 2020; Chen et al., 2021b), there are two memory banks and each of them contains representations of all training images, leading to massive GPU memory usage on large-scale datasets. Motivated by the discussion above, we propose Contrastive Consistent Representation Distillation (CoCoRD) as a novel way of distilling consistent representations with one fixed-size memory bank. Specifically, CoCoRD is composed of four major components, as shown in Fig. 2: (1) a fixed-size queue which is referred to as the teacher dictionary, (2) a teacher, (3) a student, and (4) a slow-moving student. From the perspective of considering contrastive learning as a dictionary look-up task, the teacher dictionary is regarded as the memory bank, where all the cached representations serve as negative keys. The encoded representations of the current batch from the teacher are enqueued; once the queue is full, the oldest representations are dequeued. By introducing a queue, the size of the memory bank is decoupled from the dataset size and the batch size, allowing it to be considerably smaller than the dataset and larger than the commonly-used batch size. The student is followed by a projection head, which maps the student features to a representation space. The teacher projection head is initialized the same as the student one and is a momentum moving average of the student projection head if the teacher and the student have the same feature dimension; otherwise, the teacher projection head is randomly initialized and not updated. Since the contrast through the teacher dictionary draws distinctions on an instance level, cached teacher representations which share the same class label as the student ones are mistakenly treated as negative keys, resulting in noise in the dictionary. To alleviate the impact of this noise, a slow-moving student, implemented as a momentum moving average of the student, is proposed to pull together anchor representations and class-positive ones. As shown in Fig. 2, with a momentum-updated projection head, the slow-moving student projects a data-augmented version of the anchor image into the representation space, which serves as the instance-negative but class-positive key. The main contributions are listed as follows:
• We utilize only one lightweight memory bank (the teacher dictionary), where all the cached representations are treated as negative keys. We experimentally demonstrate that a miniature teacher dictionary with much fewer negative keys can be sufficient for contrastive learning in knowledge distillation.
• We equip the well-trained teacher with a momentum-updated projection head to provide consistent representations for the teacher dictionary. Besides, a slow-moving student provides class-positive representations to alleviate the impact of the potential noise in the teacher dictionary.
• We verify the effectiveness of CoCoRD by achieving state-of-the-art performance in 11 out of 13 student-teacher combinations in terms of model compression. On ImageNet, the CoCoRD-distilled ResNet50 can outperform the teacher ResNet101 by 0.2% top-1 accuracy.
Moreover, we initialize the backbones in object detection with CoCoRD-distilled weights and observe considerable performance improvements over the counterparts initialized by the vanilla students.

2 RELATED WORK
2.1 KNOWLEDGE DISTILLATION
Hinton et al. (2015) first propose distilling the softened logits from the teacher to the student. After this representative work, various knowledge distillation methods (Wang & Li, 2021; Song et al., 2021; Passban et al., 2020; Chen et al., 2021a;c) aim to distill more informative knowledge via intermediate features. Among them, Passban et al. (2020) fuse all teacher information to avoid the loss of significant knowledge. Chen et al. (2021a) propose semantic calibration based on the attention mechanism for adaptively assigning cross-layer knowledge. Chen et al. (2021c) introduce a novel framework via knowledge review, in which the knowledge of multiple layers in the teacher can be distilled for supervising one layer of the student. However, the methods mentioned above have difficulty in defining appropriate metrics to measure the distance between the student representations and their counterparts from the teacher. A few recent works exploit the dependencies between representation dimensions based on contrastive learning (Tian et al., 2020; Chen et al., 2021b) for boosting the distillation performance. In particular, Tian et al. (2020) formulate capturing structural knowledge as contrastive learning and maximize the lower bound of the mutual information between the teacher and the student. Chen et al. (2021b) leverage primal and dual forms of the Wasserstein distance, where the dual form yields a contrastive learning objective. In summary, the core of knowledge distillation lies in the definition of knowledge and the way the knowledge is distilled.

2.2 CONTRASTIVE LEARNING
The main goal of contrastive learning is to learn a representation space where anchor representations stay close to the representations of the positive samples and distant from those of the negative samples. Contrastive learning is a powerful approach in self-supervised learning. To learn powerful feature representations in an unsupervised fashion, Wu et al. (2018) consider each instance as a distinct class of its own and use noise contrastive estimation (NCE) to tackle the computational challenges. Contrastive learning is first combined with knowledge distillation by CRD (Tian et al., 2020), which aims at exploring structural knowledge. In addition to CRD, WCoRD (Chen et al., 2021b) combines LCKT and GCKT based on the Wasserstein dependency measure in contrastive learning (Ozair et al., 2019). However, the memory banks in CRD and WCoRD contain representations of all the training images, which brings about storage challenges on large-scale datasets. Besides, momentum updates to representations can also lead to inconsistent representations that negatively affect the distillation performance. From the perspective of considering contrastive learning as a dictionary look-up task, we implement the memory bank as a first-in-first-out queue where all included representations serve as negative keys.

3 METHOD
The key idea of combining knowledge distillation with contrastive learning is straightforward.
With knowledge distillation, a proficient teacher can provide consistent representations that are beneficial for contrastive learning. With contrastive learning, the student can obtain powerful features whose representations are close to the positive teacher representations and distant from the negative ones in a representation space. Contrastive learning can be generally formulated as a dictionary look-up task. Given a query $q$ and a dictionary $\mathcal{K}$ with $N$ keys, $\mathcal{K} = \{k_1, \cdots, k_N\}$, contrastive learning matches the query $q$ to the positive key $k^+$ and pushes $q$ away from the negative keys cached in $\mathcal{K}$.

3.1 CONTRAST AS LOOKING UP IN THE TEACHER DICTIONARY
In CoCoRD, the negative keys are encoded by the teacher and cached in a fixed-size queue which is referred to as the teacher dictionary. Given an input image $x$, two views of $x$ under random data augmentations form a positive pair (a query and a positive sample), which is encoded in each iteration. We define the input to the student $S$ as the query $x_s$ and the input to the teacher $T$ as the positive sample $x_t$. The outputs at the penultimate layer (before the last fully-connected layer) are projected to a representation space by a projection head. For simplicity of notation, the student nested functions up to the penultimate layer are denoted as $g_s(\cdot)$ and the student projection head is denoted as $f^p_s(\cdot)$. Therefore, the query representations $q_s$ and the positive keys $k^+_t$ are given by:

$$q_s = f^p_s(g_s(x_s)), \qquad k^+_t = f^p_t(g_t(x_t)), \tag{1}$$

where $g_t(\cdot)$ denotes the teacher nested functions up to the penultimate layer and $f^p_t(\cdot)$ is the teacher projection head. $f^p_s$ and $f^p_t$ are two-layer perceptrons. Besides, the cached $i$-th negative key in the queue is denoted as $k^-_{t_i}$, which is produced the same way as $k^+_t$ but from the preceding batches. The fixed-size teacher dictionary $\mathcal{K} = \{k^-_{t_1}, \cdots, k^-_{t_N}\}$ contains $N$ negative keys. The representations of the current batch are added to the queue, while the oldest representations are removed from the queue.

The contrastive loss. The value of the contrastive loss should be small when $q_s$ is close to $k^+_t$ and distant from $k^-_{t_i}$ in the representation space. To meet this condition, we consider the widely-used and effective contrastive loss function InfoNCE (Van den Oord et al., 2018):

$$\mathcal{L}_{ctr} = -\log \frac{\exp(q_s \cdot k^+_t/\tau)}{\exp(q_s \cdot k^+_t/\tau) + \sum_{i=1}^{N} \exp(q_s \cdot k^-_{t_i}/\tau)}, \tag{2}$$

where $\tau$ is a hyper-parameter that controls the concentration level and $N$ is the size of the teacher dictionary. $\mathcal{L}_{ctr}$ can be intuitively interpreted as the log loss of a softmax-based $(N+1)$-way classification task. In our case, we attempt to classify $q_s$ as $k^+_t$ in the scope of $\{k^+_t\} \cup \{k^-_{t_1}, k^-_{t_2}, \cdots, k^-_{t_N}\}$.
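To make Eq. 2 concrete, here is a minimal PyTorch sketch of the contrastive loss with negative keys drawn from the fixed-size queue. This is our illustration rather than the authors' released code; in particular, the $\ell_2$-normalization of representations is a common convention in dictionary look-up methods that the paper does not state explicitly, and the tensor names are ours.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(q_s, k_pos, queue, tau=0.1):
    """Eq. 2: (N+1)-way softmax classification of q_s as its positive key.

    q_s:   (B, D) student query representations
    k_pos: (B, D) positive teacher keys for the same inputs
    queue: (N, D) negative teacher keys cached in the teacher dictionary
    """
    q_s = F.normalize(q_s, dim=1)                   # assumed convention
    k_pos = F.normalize(k_pos.detach(), dim=1)      # teacher side: no grad
    queue = F.normalize(queue, dim=1)
    l_pos = (q_s * k_pos).sum(dim=1, keepdim=True)  # (B, 1)
    l_neg = q_s @ queue.t()                         # (B, N)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q_s.size(0), dtype=torch.long, device=q_s.device)
    return F.cross_entropy(logits, labels)          # positive is index 0

def fifo_update(queue, new_keys):
    """Enqueue the current batch of teacher keys, dequeue the oldest."""
    return torch.cat([queue[new_keys.size(0):], new_keys.detach()], dim=0)
```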
The consistency in the teacher dictionary. The introduction of the fixed-size teacher dictionary decouples the size of the memory bank from the batch size and the dataset size: the teacher dictionary can be larger than the commonly-used batch and smaller than the dataset. Therefore, we bypass huge batches, which aim at providing in-batch negative samples. Besides, we can avoid sampling inconsistent negative keys from the memory bank. The core of learning good features by contrastive learning lies in rich and challenging negative representations. In CRD (Tian et al., 2020) and WCoRD (Chen et al., 2021b), the negative keys are momentum updated. The momentum update to the negative keys brings about two main issues: (1) a negative key was updated only when it was last processed, and (2) the update interval for each negative key can be highly different. These two issues cause inconsistent negative keys. To provide consistent negative keys, we update the teacher projection head in a momentum manner. Since $g_t$ is frozen in the distillation framework, momentum updating the teacher projection head results in consistent negative keys. Specifically, denoting the parameters of $f^p_t$ as $\omega_t$ and those of $f^p_s$ as $\omega_s$, we update $\omega_t$ as:

$$\omega_t \leftarrow m_c\,\omega_t + (1 - m_c)\,\omega_s. \tag{3}$$

Here $m_c \in [0, 1]$ is a momentum coefficient which adjusts the update smoothness. Since $\omega_s$ are optimized by the training optimizer, the momentum update of $\omega_t$ makes the teacher projection head $f^p_t$ progress more smoothly than the student projection head $f^p_s$. Therefore, the difference between the teacher projection heads at different iterations can be made small, and as a result the negative keys encoded at different iterations can be consistent. Besides, the teacher dictionary itself is gradually updated: the representations of the current batch are enqueued, while the representations of the oldest batch are dequeued. This gradual replacement is beneficial for maintaining the consistency of the queue, since the oldest representations are the least consistent with the current ones.

3.2 REPRESENTATIONS OF ONE CLASS FLOCK TOGETHER
As shown in Eq. 2, classifying $q_s$ as $k^+_t$ in the scope of $\{k^+_t, k^-_{t_1}, k^-_{t_2}, \cdots, k^-_{t_N}\}$ is a discrimination on an instance level. However, a key $k^-_{t_i}$ which shares the same class label as $q_s$ should be close to $q_s$ in the representation space, so simply rejecting those $k^-_{t_i}$ is not beneficial for the student learning good features. To bring $q_s$ closer to its instance-negative but class-positive keys, we introduce a slow-moving student whose nested functions up to the penultimate layer are denoted as $g'_s(\cdot)$. Specifically, the slow-moving student is implemented as a momentum moving average of the student. The slow-moving student is also accompanied by a projection head $f'_s$, which is likewise updated in a momentum manner. Denoting the parameters of $g_s$ as $\theta_s$, the parameters of $g'_s$ as $\theta'_s$ and those of $f'_s$ as $w'_s$, we update $\theta'_s$ and $w'_s$ by:

$$\theta'_s \leftarrow m_r\,\theta'_s + (1 - m_r)\,\theta_s, \qquad w'_s \leftarrow m_r\,w'_s + (1 - m_r)\,w_s, \tag{4}$$

where $m_r \in [0, 1]$ is another momentum coefficient and $w_s$ denotes the parameters of the student projection head $f^p_s$. Therefore, the instance-negative but class-positive keys $q^+_{s\text{-}}$ can be obtained by:

$$q^+_{s\text{-}} = f'_s(g'_s(x'_s)), \tag{5}$$

where $x'_s$ is another view of $x$ under the random data augmentations. Instead of directly narrowing down the distance between $q_s$ and $q^+_{s\text{-}}$, we use $q_s$ to predict $q^+_{s\text{-}}$, which softens the constraint. Formally, a predictor $h_s$, implemented as a two-layer perceptron, is proposed to produce the prediction $p_s \triangleq h_s(q_s)$. The loss is simply defined as the mean squared error between the $\ell_2$-normalized $p_s$ and $q^+_{s\text{-}}$:

$$\mathcal{L}_{pred} = \left\| \frac{p_s}{\|p_s\|_2} - \frac{q^+_{s\text{-}}}{\|q^+_{s\text{-}}\|_2} \right\|_2^2 = 2 - 2\left\langle \frac{p_s}{\|p_s\|_2}, \frac{q^+_{s\text{-}}}{\|q^+_{s\text{-}}\|_2} \right\rangle. \tag{6}$$

Furthermore, we symmetrize the loss by feeding $x'_s$ to the student and $x_s$ to the slow-moving student to compute $\tilde{\mathcal{L}}_{pred}$. Formally, denoting the representations that the student outputs from $x'_s$ as $\tilde{q}_s$ and the corresponding instance-negative but class-positive keys as $\tilde{q}^+_{s\text{-}}$, we compute $\tilde{\mathcal{L}}_{pred}$ by:

$$\tilde{\mathcal{L}}_{pred} = \left\| \frac{\tilde{p}_s}{\|\tilde{p}_s\|_2} - \frac{\tilde{q}^+_{s\text{-}}}{\|\tilde{q}^+_{s\text{-}}\|_2} \right\|_2^2 = 2 - 2\left\langle \frac{\tilde{p}_s}{\|\tilde{p}_s\|_2}, \frac{\tilde{q}^+_{s\text{-}}}{\|\tilde{q}^+_{s\text{-}}\|_2} \right\rangle. \tag{7}$$

Here $\tilde{p}_s \triangleq h_s(\tilde{q}_s)$, $\tilde{q}^+_{s\text{-}} \triangleq f'_s(g'_s(x_s))$ and $\tilde{q}_s \triangleq f^p_s(g_s(x'_s))$. Note that $q^+_{s\text{-}}$ and $\tilde{q}^+_{s\text{-}}$ are detached from the current computational graph during the distillation process.
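The momentum updates of Eqs. 3-4 and the normalized-MSE prediction loss of Eqs. 6-7 are both mechanically simple; a minimal PyTorch sketch (ours, with our own function names) is given below. The symmetric term $\tilde{\mathcal{L}}_{pred}$ is obtained by calling the same loss with the two augmented views swapped.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(slow, fast, m):
    """Eqs. 3-4: slow <- m * slow + (1 - m) * fast, parameter-wise.
    Applied to the teacher projection head (m = m_c) and to the
    slow-moving student and its projection head (m = m_r)."""
    for p_slow, p_fast in zip(slow.parameters(), fast.parameters()):
        p_slow.mul_(m).add_(p_fast, alpha=1.0 - m)

def prediction_loss(p_s, q_pos):
    """Eq. 6: 2 - 2 * cosine similarity between the l2-normalized
    prediction p_s = h_s(q_s) and the detached class-positive key."""
    p_s = F.normalize(p_s, dim=1)
    q_pos = F.normalize(q_pos.detach(), dim=1)  # stop-gradient target
    return (2.0 - 2.0 * (p_s * q_pos).sum(dim=1)).mean()
```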
3.3 TRAINING THE STUDENT
With the slow-moving student and the teacher, Eq. 2, Eq. 6 and Eq. 7 aim at assisting the student to effectively learn powerful features through contrastive learning. The student still needs to learn features from the training data; for image classification, the task-specific loss is defined as the cross-entropy loss. Overall, the total loss $\mathcal{L}_{total}$ can be formulated as:

$$\mathcal{L}_{total} = \lambda_{ctr}\mathcal{L}_{ctr} + \lambda_{pred}(\mathcal{L}_{pred} + \tilde{\mathcal{L}}_{pred}) + \lambda_{cls}\mathcal{L}_{cls}, \tag{8}$$

where $\lambda_{ctr}$, $\lambda_{pred}$ and $\lambda_{cls}$ are three balancing factors, and $\mathcal{L}_{cls} \triangleq H(y, y_s)$, where $H(\cdot)$ refers to the standard cross-entropy, $y$ denotes the one-hot label and $y_s$ is the student output.

4 EXPERIMENTS
We validate the effectiveness of CoCoRD in improving student performance. The student-teacher combinations are divided into two main categories: (1) students that share the same architecture style with their teachers, and (2) students whose architectures differ from those of their teachers.

Datasets. To investigate the performance improvements of students, we employ two benchmarks: (1) CIFAR100 (Krizhevsky et al., 2009) and (2) ImageNet-1K (Russakovsky et al., 2015). CIFAR100 has 100 classes, with 500 training images and 100 validation images per class. ImageNet-1K, a large-scale dataset, contains 1000 classes and provides 1.28 million training images and 50K validation images. To test the transferability of the features that students learn by CoCoRD, we utilize two more datasets: (1) STL-10 (Coates et al., 2011) and (2) TinyImageNet (Chrabaszcz et al., 2017). We only use the 5K labeled training images and 8K validation images from 10 classes in STL-10. TinyImageNet consists of 200 classes, each with 500 training images and 50 validation images.

4.1 EXPERIMENTS ON CIFAR100
We experiment on CIFAR100 with 13 student-teacher combinations in total, 7 of which are student-teacher combinations with the same architecture style, while the remaining 6 are student-teacher combinations with different architectures. (On CIFAR100, λctr=1, λcls=1, λpred=4; more training details and data augmentations are provided in the supplementary materials.) Table 1 focuses on student-teacher combinations with the same architecture style, while Table 2 provides experimental results for student-teacher combinations with different architectures. As can be observed in both tables, KD (Hinton et al., 2015), a simple yet effective method, provides a strong baseline. CoCoRD consistently outperforms KD and achieves highly competitive performance compared with other state-of-the-art methods. Note that $m_c$ in Formula 3 is set to 1 for the WRN-40-2/WRN-40-1 combination. Although the teacher projection head attached to WRN-40-2 is only randomly initialized and not updated during the distillation process, CoCoRD still achieves the state-of-the-art result. This implies that the features provided by the well-trained teacher at the penultimate layer are already discriminative; they are projected into the representation space by the frozen teacher projection head $f^p_t$. Based on the discussion above, the teacher projection heads in Table 2 are randomly initialized, since a difference in architecture style is very likely to bring about a difference in the input shape. Note that it is because of the projection heads that CoCoRD can achieve distillation under the cross-architecture setting: the projection heads can project penultimate-layer features of different shapes into one representation space, where we can easily define the contrastive loss based on Eq. 2. As shown in Table 2, CoCoRD is highly effective for combinations of different architectures.
Even if the teacher projection head is not updated, CoCoRD consistently achieves the best performance compared to the methods not combined with another method. In particular, for the resnet-32x4/ShuffleNetV2 pair, CoCoRD presents 77.28% top-1 accuracy, which is 1.5% higher than the second best, GCKT (75.78%). On the other hand, methods based on intermediate features perform poorly with different-architecture combinations. This observation suggests that CoCoRD can largely relax the requirement for significant similarity between students and teachers. We conjecture that knowledge distillation based on features at the penultimate layer can avoid conflicts between the different inductive biases that different models exploit. This indicates that the proposed CoCoRD is more generally applicable to student-teacher combinations with different architectures.

Limitations. In Table 1, CoCoRD+KD does not bring further performance improvements over CoCoRD. The same phenomenon can be observed in Table 2: MobileNetV2 (Sandler et al., 2018) does not obtain more performance improvements with CoCoRD+KD. These phenomena indicate that further investigation is needed into combining CoCoRD with other knowledge distillation methods, and that extremely lightweight student models are still challenging for knowledge distillation.

Linear probing. Following CRD (Tian et al., 2020), we employ linear probing to evaluate the transferability of the student features. We freeze the student and train a linear classifier on the global average pooling features of the student to perform 10-way classification on STL10 and 200-way classification on TinyImageNet. As shown in Table 3, CoCoRD exhibits strong transferability and outperforms the second best (CRD+KD) by a large margin on the two datasets (2.04% improvement on STL10 and 2.32% on TinyImageNet). The proposed CoCoRD, which has a negligible performance drop on CIFAR100 compared with the teacher (see Table 1), shows better transferability than the teacher (5.32% improvement on STL10 and 6.01% on TinyImageNet). The linear probing experiment indicates that CoCoRD-distilled models have better generalization ability.

4.2 EXPERIMENTS ON IMAGENET
To investigate the scalability of CoCoRD to large-scale datasets, we employ ResNet-18 and ResNet-34 as the student-teacher combination to perform experiments on ImageNet-1K. For a fair comparison, we follow the standard PyTorch ImageNet training practice, except that we train for 100 epochs like CRD and WCoRD. We also use the PyTorch-released ResNet-34/18 as our teacher/student. On ImageNet, we set λctr=1, λcls=1, λpred=4 and only calculate Lpred. The top-1 and top-5 error rates of different distillation methods are provided in Table 4 (the lower, the better). The results in Table 4 show that the proposed CoCoRD achieves the best performance on the large-scale ImageNet. The relative improvement of CoCoRD over WCoRD (Chen et al., 2021b) on the top-1 error is 14.45%, and the relative improvement of CoCoRD over CRD (Tian et al., 2020) on the top-1 error is 40.43%. Both improvements validate the scalability of the proposed CoCoRD to large-scale datasets.

4.3 ABLATION STUDY
4.3.1 STUDY OF ENCODER COMBINATIONS
By default, we use the teacher to generate representations for contrastive learning, and the slow-moving student is employed to produce representations of another view of the student input. To investigate how the representation quality affects the distillation performance, we utilize different models to provide those representations.
For clarity, the model that generates the dictionary-cached representations is referred to as the contrastive encoder, and the model that produces the instance-negative but class-positive representations is referred to as the cognate encoder. Results are reported in Table 5. Comparing options A (the default option) and B, we find that leveraging the pre-trained teacher to provide quality representations for contrastive learning is more beneficial for distillation. Besides, removing the cognate encoder and setting λpred to zero (option E) leads to poor performance, suggesting that the cognate encoder can alleviate the adverse impact of the potential noise. If we remove the contrastive encoder and still use the dictionary with the cognate encoder (option F), the distillation process fails. The results in Table 5 support the effectiveness of each encoder in CoCoRD.

4.3.2 STUDY OF MOMENTUM
As shown in Formulas 3 and 4, mc controls the progressing speed of the teacher projection head $f^p_t$, while mr manages the speed of the slow-moving student and its projection head. To investigate the impact of momentum, we employ resnet110 as the teacher to train resnet32 with different mc and mr. The results are reported in Table 6. Both when mc=mr=0 and when mc=mr=1, CoCoRD can improve the student performance; the effectiveness in both cases implies that CoCoRD is robust. Besides, with mr fixed, a large value of mc (e.g., 0.99 or 0.999) works much better than mc=0, suggesting that consistent representations in the teacher dictionary are beneficial for distillation.

4.3.3 STUDY OF HYPER-PARAMETERS
The temperature τ. The value of τ in Eq. 2 varies from 0.07 to 0.11. As shown in Figure 3(a), CoCoRD is sensitive to τ: both extremely high and extremely low temperatures lead to sub-optimal performance. As suggested in CRD (Tian et al., 2020), we set τ to 0.1 for experiments on CIFAR100, while τ is set to 0.07 on ImageNet. We suggest tuning the value of τ based on the classification difficulty.

The size of the teacher dictionary. The number of negative keys is determined by the teacher dictionary size N. To investigate the effects of the teacher dictionary size, we validate various values of N. As shown in Figure 3(b), an extremely small teacher dictionary provides insufficient negative keys, leading to sub-optimal performance, whereas an extremely large teacher dictionary can introduce noise, which adversely affects the distillation performance. Based on our experiments, N=2048 should suffice on CIFAR100, while N=65536 is used on ImageNet. Note that the teacher dictionary in CoCoRD is significantly smaller than the memory banks in CRD (Tian et al., 2020) and WCoRD (Chen et al., 2021b), which is more economical for large-scale datasets.

The balancing factors. We conduct experiments on CIFAR100 to investigate the effects of the three balancing factors λctr, λcls and λpred, using resnet32/resnet110 as the student-teacher combination. For experiments on balancing factors, we set τ=0.1, N=2048, mc=0.999 and mr=0.9. "✗" denotes that we set the balancing factor to 0, and "✓" means that we set the balancing factor to the corresponding value provided in the second row. Details on the simple grid search for each balancing factor can be found in the supplementary material. As we can see from Table 7, all components in CoCoRD are essential for achieving high distillation performance.
When λctr is set to 0, there is a serious performance drop, which indicates that contrasting student representations with the negative keys in the teacher dictionary is necessary for improving student performance. Moreover, by comparing the result when λpred=0 with the result when λpred=4, we can see that the slow-moving student reduces the negative effects of the noise in the teacher dictionary.

4.4 TRANSFER LEARNING
We further validate the feature quality of CoCoRD-distilled models by transferring the model weights to object detection tasks on PASCAL VOC (Everingham et al., 2010) and COCO (Lin et al., 2014). We fine-tune the pre-trained models in an end-to-end manner on the target datasets. The detector for PASCAL VOC is Faster R-CNN (Ren et al., 2015) with an R50-C4 backbone. For COCO object detection, the model is Mask R-CNN (He et al., 2017) with the R50-C4 backbone. Note that the CoCoRD-distilled ResNet50 can outperform the teacher ResNet101 by 0.2% top-1 accuracy on classification. As shown in Table 8, the CoCoRD-initialized detectors exhibit better performance than the student-initialized and CRD-initialized counterparts. The valid reuse of model weights further demonstrates the transferability of CoCoRD-distilled features.

5 CONCLUSION
In this paper, we propose a contrastive-learning-based knowledge distillation method named Contrastive Consistent Representation Distillation. From the perspective of regarding contrastive learning as a dictionary look-up task, we build a fixed-size dictionary to cache consistent teacher representations. Besides, to alleviate the adverse impact of the potential noise in the teacher dictionary, we employ a slow-moving student, implemented as a momentum-based moving average of the student, to provide instance-negative but class-positive targets. CoCoRD does not employ the entire dataset as the memory bank, which is economical for large-scale datasets. Extensive experiments demonstrate that CoCoRD, which utilizes fewer negative keys, can boost the performance of students on diverse image classification datasets. Additionally, the models distilled by CoCoRD on ImageNet classification can efficiently improve object detection performance on PASCAL VOC and COCO.

A APPENDIX
A.1 QUANTITATIVE RESULTS ON THE ACHIEVED SPEED-UP, MEMORY REDUCTION AND OTHERS
In the following three tables, we provide quantitative results on the achieved speed-up, memory cost reduction, and other quantitative information about the teacher/student (T/S) combinations used on CIFAR100 (in Tabs. 1 and 2) and those used on ImageNet (Tabs. 4 and 8). The results are measured with an Intel Core i7-8700 CPU on the Ubuntu 20.04 operating system, and memory cost is measured by the PyTorch Profiler in a forward pass. Additionally, we compare the size of the teacher dictionary in the proposed CoCoRD with the size of the memory banks in CRD. Note that the keys in the CRD memory banks are only 128-d, while the keys in the proposed CoCoRD teacher dictionary are 2048-d. Even with higher-dimensional stored keys, CoCoRD is still more storage-efficient.

A.2 THEORETICAL STUDY
Given two deep neural networks, a teacher $f_T$ and a student $f_S$, let $x$ be the network input. We denote the representations at the penultimate layer as $f_T(x)$ and $f_S(x)$, respectively. We would like to bring $f_S(x_i)$ and $f_T(x_i)$ close while pushing apart $f_S(x_i)$ and $f_T(x_j)$ ($x_i$ and $x_j$ represent different training samples).
For clear notation, we define variables $S$ and $T$ for the student and teacher representations of the data, respectively: $x \sim p(x)$; $S = f_S(x)$; $T = f_T(x)$. Let us define a distribution $q$ with a latent variable $C$. The latent variable $C$ decides whether the tuple $(f_S(x_i), f_T(x_j))$ is drawn from the joint distribution $p(T, S)$ (when $C=1$) or from the product of marginal distributions $p(T)p(S)$ (when $C=0$):

$$q(T, S|C=1) = p(T, S), \qquad q(T, S|C=0) = p(T)p(S).$$

Suppose we are given 1 congruent pair drawn from the joint distribution (i.e., the same input provided to $T$ and $S$) for every $N$ incongruent pairs drawn from the product of marginals (independent random inputs provided to $T$ and $S$). Then the priors on the latent $C$ are:

$$q(C=1) = \frac{1}{N+1}, \qquad q(C=0) = \frac{N}{N+1}.$$

By Bayes' rule and simple manipulations, the posterior for $C=1$ is given by:

$$q(C=1|T, S) = \frac{q(T, S|C=1)\,q(C=1)}{q(T, S|C=0)\,q(C=0) + q(T, S|C=1)\,q(C=1)} = \frac{p(T, S)}{p(T, S) + N\,p(T)p(S)}.$$

We can observe a connection with mutual information:

$$\log q(C=1|T, S) = -\log\left(1 + N\,\frac{p(T)p(S)}{p(T, S)}\right) \le -\log(N) + \log\frac{p(T, S)}{p(T)p(S)}.$$

Taking the expectation on both sides w.r.t. $p(T, S)$ and rearranging gives us:

$$I(T; S) \ge \log(N) + \mathbb{E}_{q(T,S|C=1)} \log q(C=1|T, S),$$

where $I(T; S)$ is the mutual information between the distributions of the teacher and student representations. Though we do not know the true distribution $q(C=1|T, S)$, a neural network can be used to estimate whether a pair comes from the joint distribution or from the marginals. By maximizing the KL divergence between the joint distribution $p(T, S)$ and the product of marginal distributions $p(T)p(S)$, we can maximize the mutual information between the student and teacher representations.
1. What is the focus and contribution of the paper on contrastive distillation?
2. What are the strengths of the proposed approach, particularly in terms of memory efficiency and experimental results?
3. What are the weaknesses of the paper, especially regarding the lack of quantitative results on speed-up and memory cost?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The paper proposes a new contrastive distillation method that distills consistent representations at a light cost. Specifically, the proposed method uses 1) a fixed-size memory bank as the teacher dictionary to achieve storage efficiency; 2) the moving average of the student's projection head as the teacher's projection head to produce consistent negative keys; 3) the moving average of the student model to bring q_s closer to its instance-negative but class-positive keys. The authors verify the effectiveness of the method on extensive tasks and models.

Strengths And Weaknesses
Strengths:
The proposed method is well-motivated and novel. Improving the memory efficiency in contrastive learning is a valid practical concern. The experimental results on various models and tasks show clear gains, which demonstrate the effectiveness of the method. The paper provides ablation studies on all proposed components.

Weaknesses:
Can the authors provide quantitative results on the achieved speed-up and the reduction in memory cost?

Clarity, Quality, Novelty And Reproducibility
The paper is clearly written. The method is well-motivated and novel, although a bit complicated. The method is well-supported by notable gains in the experiments.
ICLR
Title
Gaussian Conditional Random Fields for Classification

Abstract
In this paper, a Gaussian conditional random field model for structured binary classification (GCRFBC) is proposed. The model is applicable to classification problems with undirected graphs, intractable for standard classification CRFs. The model representation of GCRFBC is extended by latent variables which yield some appealing properties. Thanks to the GCRF latent structure, the model becomes tractable, efficient, and open to improvements previously applied to GCRF regression. Two different forms of the algorithm are presented: GCRFBCb (GCRFBC Bayesian) and GCRFBCnb (GCRFBC non-Bayesian). The extended method of local variational approximation of the sigmoid function is used for solving empirical Bayes in the GCRFBCb variant, whereas the MAP value of the latent variables is the basis for learning and inference in the GCRFBCnb variant. The inference in GCRFBCb is solved by Newton-Cotes formulas for one-dimensional integration. Both models are evaluated on synthetic data and real-world data. It is shown that both models achieve better prediction performance than relevant baselines. Advantages and disadvantages of the proposed models are discussed.

1 INTRODUCTION
The increased quantity and variety of sources of data with correlated outputs, so-called structured data, have created an opportunity for exploiting additional information between dependent outputs to achieve better prediction performance. Among the most successful probabilistic models for structured output classification problems are conditional random fields (CRFs) (Sutton & McCallum, 2006). The main advantages of CRFs lie in their discriminatory nature, resulting in the relaxation of the independence assumptions and of the label bias problem that are present in many graphical models. Aside from their many advantages, CRFs also have drawbacks, mostly resulting in high computational cost or intractability of inference and learning. A wide range of approaches for tackling these problems has been proposed, and they motivate our work, too. One of the popular methods for structured regression based on CRFs – Gaussian conditional random fields (GCRF) – has the form of a multivariate Gaussian distribution (Radosavljevic et al., 2010). The main assumption of the model is that the relations between outputs are presented in quadratic form. It has a convex loss function and, consequently, efficient inference and learning, so expensive sampling methods are not needed. In this paper, a new model of Gaussian conditional random fields for binary classification is proposed (GCRFBC). GCRFBC builds upon the regression GCRF model, which is used to define latent variables over which output dependencies are defined. The model assumes that the discrete outputs yi are conditionally independent given the continuous latent variables zi, which follow a distribution modeled by a GCRF. That way, relations between discrete outputs are not expressed directly. Two different inference and learning approaches are proposed in this paper. The first one is based on evaluating the empirical Bayes by marginalizing the latent variables (GCRFBCb), whereas the MAP value of the latent variables is the basis for learning and inference in the second model (GCRFBCnb). In order to derive the GCRFBCb model and its learning procedure, the variational approximation of Bayesian logistic regression (Jaakkola & Jordan, 2000) is generalized.
Compared to CRFs and structured SVM classifiers, the GCRFBC models have some appealing properties:
• The model is applicable to classification problems with undirected graphs, intractable for standard classification CRFs. Thanks to the GCRF latent structure, the model becomes tractable, efficient and open to improvements previously applied to GCRF regression models.
• Defining correlations directly between discrete outputs may introduce unnecessary noise to the model (Tan et al., 2010). This problem can be solved by defining structured relations on a latent continuous variable space.
• In case the unstructured predictors are unreliable, which is signaled by their large variance (diagonal elements in the covariance matrix), it is simple to marginalize over the latent variable space and obtain better results.
The GCRFBC model relies on the assumption that the underlying distribution of the latent variables is a multivariate normal distribution; consequently, when this distribution cannot be fitted well to the data (e.g., when the distribution of the latent variables is multimodal), the model will not perform as well as expected. The proposed models are experimentally tested on both synthetic and real-world datasets in terms of predictive performance and computation time. In the experiments with synthetic datasets, the results clearly indicate that the empirical Bayes approach (GCRFBCb) better exploits the output dependence structure, more so as the variance of the latent variables increases. We also tested both approaches on real-world datasets of predicting ski lift congestion, gene function classification, classification of music according to emotion, and highway congestion. Both GCRFBC models outperformed ridge logistic regression, lasso logistic regression, neural network, random forest, and structured SVM classifiers, demonstrating that the proposed models can exploit output dependencies in a real-world setting.

2 RELATED WORK
An extensive review of binary and multi-label classification with structured output is provided in Su (2015). A large number of studies related to graph-based methods for regression can be found in the literature (Fox, 2015). CRFs were successfully applied to a variety of structured tasks, such as low-resource named entity recognition (Cotterell & Duh, 2017), image segmentation (Zhang et al., 2015), chord recognition (Masada & Bunescu, 2017) and word segmentation (Zia et al., 2018), and different model adaptations can be found in the literature (Kim, 2017; Maaten et al., 2011). Recently, successful unifications of deep learning and CRFs have been proposed (Chen et al., 2016; Kosov et al., 2018). Moreover, the implementation of deep neural networks as potential functions has been presented in the form of structured prediction energy networks (SPEN) (Belanger & McCallum, 2016; Belanger et al., 2017), and an adaptation of normalizing flows to the SPEN structure is presented in Lu & Huang (2019). A mixture of CRFs capable of modeling data that come from multiple different sources or domains is presented in Kim (2017).
The method is related to the well-known hidden-unit CRF (HUCRF) (Maaten et al., 2011), for which the conditional likelihood and an expectation maximization (EM) learning procedure have been derived. The mixtures of CRF models were implemented in several real-world applications, resulting in prediction improvements. Recently, a model based on the unification of deep learning and CRFs was developed by Chen et al. (2016). The deep CRF model showed better performance compared to either shallow CRFs or deep learning methods on their own. Similarly, the combination of CRFs and deep convolutional neural networks was evaluated on an example of environmental microorganism labeling (Kosov et al., 2018); the spatial relations among outputs were taken into consideration and the experiments showed satisfactory results. The GCRF model was first implemented for the task of low-level computer vision (Tappen et al., 2007). Since then, various adaptations and approximations of GCRF have been proposed (Radosavljevic et al., 2014). The parameter space of the GCRF model was extended to facilitate joint modelling of positive and negative influences (Glass et al., 2016); in addition, the model was extended by a bias term in the link weights and solved as a part of convex optimization. A semi-supervised marginalized Gaussian conditional random fields (MGCRF) model for dealing with missing variables was proposed by Stojanovic et al. (2015); the benefits of the model were demonstrated on partially observed data, showing better prediction performance than alternative semi-supervised structured models. A comprehensive review of continuous conditional random fields (CCRF) was provided in Radosavljevic et al. (2010). Sparse conditional random fields obtained by $\ell_1$ regularization were first proposed and evaluated by Wytock & Kolter (2013). Additionally, Frot et al. (2018) presented GCRF with a latent variable decomposition and derived convergence bounds for the estimator that is well behaved in the high-dimensional regime. An adaptation of GCRF to discrete outputs was briefly discussed in Radosavljevic (2011) as a part of future work. This discussion motivates our work, but our approach is different in technical aspects.

3 METHODOLOGY
In this section we first present the already known GCRF model for regression, and then we propose the GCRFBC model for binary classification along with two approaches to inference and learning.

3.1 BACKGROUND MATERIAL
GCRF is a discriminative graph-based regression model (Radosavljevic et al., 2010). Nodes of the graph are variables $y = (y_1, y_2, \ldots, y_N)$, which need to be predicted given a set of features $x$. The attributes $x = (x_1, x_2, \ldots, x_N)$ interact with each node $y_i$ independently of one another, while the relations between outputs are expressed by a pairwise interaction function. In order to learn the parameters of the model, a training set of vectors of attributes $x$ and real-valued response variables $y$ is provided. The generalized form of the conditional distribution $P(y|x, \alpha, \beta)$ is:

$$P(y|x, \alpha, \beta) = \frac{1}{Z(x, \alpha, \beta)} \exp\left(-\sum_{i=1}^{N} \sum_{k=1}^{K} \alpha_k \left(y_i - R_k(x_i)\right)^2 - \sum_{i \neq j} \sum_{l=1}^{L} \beta_l S^l_{ij} (y_i - y_j)^2\right) \tag{1}$$

The first sum models the relations between the outputs $y_i$ and the corresponding input vectors $x_i$, and the second one models the pairwise relations between nodes. $R_k(x_i)$ represents an unstructured predictor of $y_i$ for each node in the graph, and $S^l_{ij}$ is a value that expresses the similarity between nodes $i$ and $j$ in graph $l$. An unstructured predictor can be any regression model that gives a prediction of the output $y_i$ for given attributes $x_i$.
$K$ is the total number of unstructured predictors, and $L$ is the total number of graphs (similarity functions). Graphs can express any kind of binary relations between nodes, e.g., spatial and temporal correlations between outputs. $Z$ is a partition function, and the vectors $\alpha$ and $\beta$ are learnable parameters. One of the main advantages of GCRF is the ability to express different relations between outputs by a variety of graphs and the ability to learn which graphs are significant for prediction. The quadratic form of the interaction and association potentials enables the conditional distribution $P(y|x, \alpha, \beta)$ to be expressed as a multivariate Gaussian distribution (Radosavljevic et al., 2010):

$$P(y|x, \alpha, \beta) = \frac{1}{(2\pi)^{\frac{N}{2}} |\Sigma|^{\frac{1}{2}}} \exp\left(-\frac{1}{2}(y - \mu)^T \Sigma^{-1} (y - \mu)\right) \tag{2}$$

The precision matrix $\Sigma^{-1} = 2Q$ and the distribution mean $\mu = \Sigma b$ are defined as, respectively:

$$Q_{ij} = \begin{cases} \sum_{k=1}^{K} \alpha_k + \sum_{h=1}^{N} \sum_{l=1}^{L} \beta_l S^l_{ih}, & \text{if } i = j \\ -\sum_{l=1}^{L} \beta_l S^l_{ij}, & \text{if } i \neq j \end{cases} \tag{3}$$

$$b_i = 2\sum_{k=1}^{K} \alpha_k R_k(x_i) \tag{4}$$

Due to the concavity of the multivariate Gaussian distribution, the inference task $\operatorname{argmax}_y P(y|x, \alpha, \beta)$ is straightforward: the maximum posterior estimate of $y$ is the distribution expectation $\mu$. The objective of the learning task is to optimize the parameters $\alpha$ and $\beta$ by maximizing the conditional log likelihood, $\operatorname{argmax}_{\alpha,\beta} \sum_y \log P(y|x, \alpha, \beta)$. One way to ensure positive definiteness of the covariance matrix of GCRF is to require diagonal dominance (Strang et al., 1993). This can be ensured by imposing the constraints that all elements of $\alpha$ and $\beta$ be greater than 0 (Radosavljevic et al., 2010).

3.2 GCRFBC MODEL REPRESENTATION
One way of adapting GCRF to classification problems is by approximating discrete outputs with suitably defined continuous outputs. Namely, GCRF can provide a dependence structure over continuous variables which can be passed through a sigmoid function. That way the relationship between regression GCRF and classification GCRF is similar to the relationship between linear and logistic regression, but with dependent variables. Aside from allowing us to define a classification variant of GCRF, this may result in additional appealing properties: (i) The model is applicable to classification problems with undirected graphs, intractable for standard classification CRFs. Thanks to the GCRF latent structure, the model becomes tractable, efficient and open to improvements previously applied to GCRF regression models. (ii) Defining correlations directly between discrete outputs may introduce unnecessary noise to the model (Tan et al., 2010). We avoid this problem by defining structured relations on a latent continuous variable space. (iii) In case the unstructured predictors are unreliable, which is signaled by their large variance (diagonal elements in the covariance matrix), it is simple to marginalize over the latent variable space and obtain better results.

It is assumed that $y_i$ are discrete binary outputs and $z_i$ are continuous latent variables assigned to each $y_i$. Each output $y_i$ is conditionally independent of the others, given $z_i$. The conditional probability distribution $P(y_i|z_i)$ is defined as a Bernoulli distribution:

$$P(y_i|z_i) = \mathrm{Ber}(y_i|\sigma(z_i)) = \sigma(z_i)^{y_i}(1 - \sigma(z_i))^{1-y_i} \tag{5}$$

where $\sigma(\cdot)$ is the sigmoid function. Due to the conditional independence assumption, the joint distribution of the outputs $y_i$ can be expressed as:

$$P(y_1, y_2, \ldots, y_N|z) = \prod_{i=1}^{N} \sigma(z_i)^{y_i}(1 - \sigma(z_i))^{1-y_i} \tag{6}$$

Furthermore, the conditional distribution $P(z|x)$ is the same as in the classical GCRF model and has the canonical form defined by the multivariate Gaussian distribution.
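Since $P(z|x)$ reuses the classical GCRF construction, the quantities of Eqs. 3-4 are all that is needed to instantiate the latent layer. The following NumPy sketch (ours; it assumes the similarity matrices have zero diagonals) builds $Q$, $b$, $\Sigma$ and $\mu$ for one instance:

```python
import numpy as np

def gcrf_params(alpha, beta, R, S):
    """Eqs. 3-4: matrix Q, vector b, and the implied Sigma and mu.

    alpha: (K,) unstructured-predictor weights, all > 0
    beta:  (L,) graph weights, all > 0
    R:     (K, N) predictions R_k(x_i) of the K unstructured predictors
    S:     (L, N, N) similarity matrices S^l_ij (zero diagonal assumed)
    """
    Sw = np.tensordot(beta, S, axes=1)    # sum_l beta_l * S^l
    Q = np.diag(alpha.sum() + Sw.sum(axis=1)) - Sw
    b = 2.0 * alpha @ R                   # (N,)
    Sigma = np.linalg.inv(2.0 * Q)        # Sigma^{-1} = 2 Q
    mu = Sigma @ b
    return Q, b, Sigma, mu
```

With all entries of $\alpha$ and $\beta$ positive, $Q$ is diagonally dominant, so $\Sigma$ is a valid covariance matrix, matching the constraint discussed above.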
Hence, the joint distribution of the continuous latent variables $z$ and the outputs $y$ given $x$ and $\theta = (\alpha_1, \ldots, \alpha_K, \beta_1, \ldots, \beta_L)$ is the general form of the GCRFBC model, defined as:

$$P(y, z|x, \theta) = \prod_{i=1}^{N} \sigma(z_i)^{y_i}(1 - \sigma(z_i))^{1-y_i} \cdot \frac{1}{(2\pi)^{N/2}|\Sigma(x, \theta)|^{1/2}} \exp\left(-\frac{1}{2}(z - \mu(x, \theta))^T \Sigma^{-1}(x, \theta)(z - \mu(x, \theta))\right) \tag{7}$$

We consider two ways of inference and learning in the GCRFBC model: (i) GCRFBCb, with conditional probability distribution $P(y|x, \theta)$, in which the variables $z$ are marginalized over, and (ii) GCRFBCnb, with conditional probability distribution $P(y|x, \theta, \mu_z)$, in which the variables $z$ are substituted by their expectations.

3.3 INFERENCE IN GCRFBCB MODEL
Prediction of the discrete outputs $y$ for given features $x$ and parameters $\theta$ is analytically intractable due to the integration of the joint distribution $P(y, z|x, \theta)$ with respect to the latent variables. However, due to the conditional independence between nodes, it is possible to obtain $P(y_i = 1|x, \theta)$:

$$P(y_i = 1|x, \theta) = \int_z \sigma(z_i)\, P(z|x, \theta)\, dz \tag{8}$$

where $\sigma(z_i)$ models $P(y_i|z)$. As a result of the independence properties of the distribution, it holds that $P(y_i = 1|z) = P(y_i = 1|z_i)$, and it is possible to marginalize $P(z|x, \theta)$ with respect to the latent variables $z' = (z_1, \ldots, z_{i-1}, z_{i+1}, \ldots, z_N)$:

$$P(y_i = 1|x, \theta) = \int_{z_i} \sigma(z_i) \left(\int_{z'} P(z', z_i|x, \theta)\, dz'\right) dz_i \tag{9}$$

where $\int_{z'} P(z', z_i|x, \theta)\, dz'$ is a normal distribution with mean $\mu_i$ and variance $\sigma_i^2 = \Sigma_{ii}$. Therefore, it holds that:

$$P(y_i = 1|x, \theta) = \int_{-\infty}^{+\infty} \sigma(z_i)\, \mathcal{N}(z_i|\mu_i, \sigma_i^2)\, dz_i \tag{10}$$

The evaluation of $P(y_i = 0|x, \theta)$ is straightforward: $P(y_i = 0|x, \theta) = 1 - P(y_i = 1|x, \theta)$. The one-dimensional integral is still analytically intractable, but it can be effectively evaluated by one-dimensional numerical integration. The proposed inference approach can be effectively used in the case of a huge number of nodes, due to the low computational cost of one-dimensional numerical integration.

3.4 INFERENCE IN GCRFBCNB MODEL
The inference procedure in GCRFBCnb is much simpler, because no marginalization with respect to the latent variables is performed. To predict $y$, it is necessary to evaluate the posterior maximum of the latent variables, $z_{max} = \operatorname{argmax}_z P(z|x, \theta)$, which is straightforward due to the normal form of GCRF; it holds that $z_{max} = \mu_z$. The conditional distribution $P(y_i = 1|x, \mu_{z,i}, \theta)$, where $\mu_{z,i}$ is the expectation of the latent variable $z_i$, can be expressed as:

$$P(y_i = 1|x, \mu_z, \theta) = \sigma(\mu_{z,i}) = \frac{1}{1 + \exp(-\mu_{z,i})} \tag{11}$$

3.5 LEARNING IN GCRFBCB MODEL
In comparison with inference, the learning procedure is more complicated. Evaluation of the conditional log likelihood is intractable, since the latent variables cannot be analytically marginalized. The conditional log likelihood is expressed as:

$$\mathcal{L}(Y|X, \theta) = \log \int_Z P(Y, Z|X, \theta)\, dZ = \sum_{j=1}^{M} \log \int_{z_j} P(y_j, z_j|x_j, \theta)\, dz_j = \sum_{j=1}^{M} \mathcal{L}_j(y_j|x_j, \theta) \tag{12}$$

$$\mathcal{L}_j(y_j|x_j, \theta) = \log \int_{z_j} \prod_{i=1}^{N} \sigma(z_{ji})^{y_{ji}}(1 - \sigma(z_{ji}))^{1-y_{ji}} \, \frac{\exp\left(-\frac{1}{2}(z_j - \mu_j)^T \Sigma_j^{-1}(z_j - \mu_j)\right)}{(2\pi)^{N/2}|\Sigma_j|^{1/2}}\, dz_j \tag{13}$$

where $Y \in \mathbb{R}^{M \times N}$ is the complete dataset of outputs, $X \in \mathbb{R}^{M \times N \times A}$ is the complete dataset of features, $M$ is the total number of instances and $A$ is the total number of features. Please note that each instance is structured, so while different instances are independent of each other, the variables within one instance are dependent. One way to approximate the integral in the conditional log likelihood is by local variational approximation. Jaakkola & Jordan (2000) derived a lower bound for the sigmoid function, which can be expressed as:

$$\sigma(x) \ge \sigma(\xi) \exp\left\{(x - \xi)/2 - \lambda(\xi)(x^2 - \xi^2)\right\} \tag{14}$$

where $\lambda(\xi) = \frac{1}{2\xi}\left[\sigma(\xi) - \frac{1}{2}\right]$ and $\xi$ is a variational parameter. Eq. 14 is called the $\xi$-transformation of the sigmoid function, and the bound is tight when $\xi = x$.
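The two numerical ingredients introduced so far, the one-dimensional integral of Eq. 10 and the sigmoid bound of Eq. 14, can be sketched in a few lines of NumPy/SciPy. This is our illustration; the grid width and resolution are arbitrary choices:

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

def p_yi_one(mu_i, sigma2_i, n=2001, width=8.0):
    """Eq. 10: P(y_i=1|x) = integral of sigmoid(z) * N(z|mu_i, sigma2_i) dz,
    evaluated on a uniform grid with the trapezoid rule (a Newton-Cotes
    formula)."""
    sd = np.sqrt(sigma2_i)
    z = np.linspace(mu_i - width * sd, mu_i + width * sd, n)
    pdf = np.exp(-0.5 * ((z - mu_i) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))
    return np.trapz(expit(z) * pdf, z)

def jj_lower_bound(x, xi):
    """Eq. 14: sigmoid(x) >= sigma(xi) exp{(x-xi)/2 - lam(xi)(x^2-xi^2)},
    with lam(xi) = (sigma(xi) - 1/2) / (2 xi); the bound is tight at xi = x."""
    lam = (expit(xi) - 0.5) / (2.0 * xi)
    return expit(xi) * np.exp((x - xi) / 2.0 - lam * (x**2 - xi**2))

# Sanity checks: symmetry gives 0.5, and the bound never exceeds the sigmoid.
print(p_yi_one(0.0, 1.0))                    # ~0.5
print(expit(1.3), jj_lower_bound(1.3, 0.5))  # bound <= sigmoid(1.3)
```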
This approximation can be applied to the model defined by Eq. 13, but the variational approximation has to be further extended because of the product of sigmoid functions, such that:

$$P(y_j, z_j|x_j, \theta) = P(y_j|z_j)\, P(z_j|x_j, \theta) \ge P(y_j, z_j|x_j, \theta, \xi_j) \tag{15}$$

$$P(y_j, z_j|x_j, \theta, \xi_j) = \prod_{i=1}^{N} \sigma(\xi_{ji}) \exp\left(z_{ji} y_{ji} - \frac{z_{ji} + \xi_{ji}}{2} - \lambda(\xi_{ji})(z_{ji}^2 - \xi_{ji}^2)\right) \cdot \frac{1}{(2\pi)^{N/2}|\Sigma_j|^{1/2}} \exp\left(-\frac{1}{2}(z_j - \mu_j)^T \Sigma_j^{-1}(z_j - \mu_j)\right) \tag{16}$$

Eq. 16 can be arranged in a form suitable for integration. A detailed derivation of the lower bound of the conditional log likelihood is presented in Appendix A. The lower bound of the conditional log likelihood $\mathcal{L}(y_j|x_j, \theta, \xi_j)$ is defined as:

$$\mathcal{L}_j(y_j|x_j, \theta, \xi_j) = \log P(y_j|x_j, \theta, \xi_j) = \sum_{i=1}^{N}\left(\log \sigma(\xi_{ji}) - \frac{\xi_{ji}}{2} + \lambda(\xi_{ji})\xi_{ji}^2\right) - \frac{1}{2}\mu_j^T \Sigma_j^{-1} \mu_j + \frac{1}{2} m_j^T S_j^{-1} m_j + \frac{1}{2}\log|S_j| \tag{17}$$

where:

$$S_j^{-1} = \Sigma_j^{-1} + 2\Lambda_j, \qquad m_j = S_j\left(\left(y_j - \tfrac{1}{2}\mathbf{1}\right) + \Sigma_j^{-1}\mu_j\right) \tag{18}$$

$$\Lambda_j = \operatorname{diag}\left(\lambda(\xi_{j1}), \lambda(\xi_{j2}), \ldots, \lambda(\xi_{jN})\right) \tag{19}$$

GCRFBCb uses the derivative of the conditional log likelihood in order to find the optimal values of the parameters $\alpha$ and $\beta$ and of the matrix of variational parameters $\xi \in \mathbb{R}^{M \times N}$. In order to ensure positive definiteness of the normal distribution involved, it is sufficient to constrain the parameters $\alpha > 0$ and $\beta > 0$. The partial derivatives of the lower bound of the conditional log likelihood are presented in Appendix B. For the constrained optimization, the truncated Newton algorithm was used (Nocedal & Wright, 2006; Facchinei et al., 2002). The target function is not convex, so finding a global optimum cannot be guaranteed.

3.6 LEARNING IN GCRFBCNB MODEL
In GCRFBCnb the mode of the posterior distribution of the continuous latent variables $z$ is evaluated directly, so there is no need for approximation. The conditional log likelihood can be expressed as:

$$\mathcal{L}(Y|X, \theta, \mu) = \log P(Y|X, \theta, \mu) = \sum_{j=1}^{M} \sum_{i=1}^{N} \log P(y_{ji}|x_j, \theta, \mu_{ji}) = \sum_{j=1}^{M} \sum_{i=1}^{N} \mathcal{L}_{ji}(y_{ji}|x_j, \theta, \mu_{ji}) \tag{20}$$

$$\mathcal{L}_{ji}(y_{ji}|x_j, \theta, \mu_{ji}) = y_{ji} \log \sigma(\mu_{ji}) + (1 - y_{ji}) \log\left(1 - \sigma(\mu_{ji})\right) \tag{21}$$

The partial derivatives of the conditional log likelihood are presented in Appendix C.
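In both variants, learning therefore reduces to maximizing a (bound on the) conditional log likelihood under the positivity constraints α > 0, β > 0, for which the paper uses a truncated Newton method. A minimal SciPy sketch of this optimization setup is shown below; the objective here is a deliberately trivial placeholder, since the real objectives are Eq. 17 (GCRFBCb) and Eq. 20 (GCRFBCnb):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta):
    """Placeholder objective; in GCRFBC this would return the negative of
    Eq. 17 (GCRFBCb) or Eq. 20 (GCRFBCnb) as a function of (alpha, beta)."""
    return float(np.sum((theta - 1.0) ** 2))

theta0 = np.full(4, 0.5)    # e.g. K = 2 alphas followed by L = 2 betas
eps = 1e-8                  # enforce strict positivity numerically
res = minimize(neg_log_lik, theta0, method="TNC",
               bounds=[(eps, None)] * theta0.size)
alpha, beta = res.x[:2], res.x[2:]
```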
4 EXPERIMENTAL EVALUATION

Both proposed models were tested and compared on synthetic data and on real-world tasks.¹ All classifiers were compared in terms of the area under the ROC curve (AUC) and accuracy (ACC).² In addition, the lower bound of the conditional log likelihood \mathcal{L}(Y|X, θ) (for GCRFBCb) and the exact value of the conditional log likelihood \mathcal{L}(Y|X, θ, μ) (for GCRFBCnb) on the synthetic test dataset were also reported.

¹ The implementation can be found at https://github.com/andrijaster/GCRFBC_B_NB
² The PyStruct package does not have an option for returning SSVM and CRF confidence values for AUC evaluation.

4.1 SYNTHETIC DATASET

The main goal of the experiments on synthetic datasets was to examine the models under various controlled conditions and to show the advantages and disadvantages of each. In all experiments on synthetic datasets, two different graphs were used (hence β ∈ R²) and two unstructured predictors (hence α ∈ R²). The results of the experiments on synthetic datasets are presented in Appendix D. It can be noticed that in cases where the norm of the variances of the latent variables is small, both models have equal performance in terms of AUC and conditional log likelihood \mathcal{L}(Y|X, θ). This is the case when the values of the parameters α used in the data generating process are greater than or equal to the values of the parameters β. This means that the information provided by the unstructured predictors is more important for the classification task than the information provided by the output structure. Therefore, the conditional distribution P(y, z|x, θ) is concentrated around its mean value and the MAP estimate is a satisfactory approximation. However, when the data is generated from a distribution with significantly higher values of β than α, GCRFBCb performs significantly better than GCRFBCnb, and the larger the variance norm, the larger this difference. This means that the structure between the outputs contributes significantly to solving the classification task. It can be concluded that GCRFBCb has at least equal prediction performance to GCRFBCnb. It can also be argued that the models were generally able to utilize most of the information (from both the features and the structure between the outputs), which can be seen from the AUC values.

In addition, the distribution of the local variational parameters was analyzed during learning. We noticed that in each epoch the variance of this distribution is small and that the parameters can be clustered and their number significantly reduced. It is therefore possible to significantly lower the computational and memory costs of the GCRFBCb learning procedure, but that is out of the scope of this paper.

4.2 PERFORMANCE ON REAL-WORLD DATASETS

4.2.1 SKI LIFTS CONGESTION

The data used in this research includes information on ski lift gate entrances in the Kopaonik ski resort, for the period March 15 to March 30 of the seasons from 2006 to 2011. The goal is to predict the occurrence of crowding on ski lifts 40 minutes in advance. The total number of instances in the dataset was 4,850 for each ski lift, which is 33,950 in total. A relatively simple method for crowding detection was devised for labelling the data. We assume that, if crowding occurs at some gate, the distributions of skiing times from other gates to that gate within some time window shift towards larger values. We model the probability distribution of the skiing time between two gates by the well-known non-parametric method of kernel density estimation (KDE) (Silverman, 2018). The distribution shift is measured with respect to the mode of the distribution. The dataset is generated by observing shifts in time windows of 5 minutes. When the mode of the distribution of skiing times within such a window is greater than the mode for the whole time span, the instance is labeled 1 (crowding); otherwise, it is labeled 0 (no crowding).
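A minimal sketch of this labelling rule, assuming the skiing times between two gates are available as one-dimensional arrays; the grid resolution and all names are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

def distribution_mode(times):
    """Mode of a KDE fitted to the observed skiing times."""
    kde = gaussian_kde(times)
    grid = np.linspace(times.min(), times.max(), 512)
    return grid[np.argmax(kde(grid))]

def label_window(window_times, all_times):
    """1 (crowding) if the mode within the window exceeds the global mode."""
    return int(distribution_mode(window_times) > distribution_mode(all_times))
```

Sliding this rule over consecutive 5-minute windows produces the binary crowding labels used as the outputs y.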
In order to obtain more information from the data distribution, 18 additional features were extracted. Four different unstructured predictors trained on each class separately were used: ridge logistic regression, LASSO logistic regression, a neural network and a random forest, and two additional unstructured predictors, a decision tree and a neural network, were trained on all nodes together. Additionally, three structured support vector machine (SSVM) and two CRF classifiers were used (Müller & Behnke, 2014). The fully connected SSVM and CRF models are denoted SSVM-full and CRF-full, whereas the models that use the Chow-Liu tree method for specifying edge connections are denoted SSVM-tree and CRF-tree, respectively. In the SSVM-independent model, the nodes of the graph are not connected.

Six different weighted graphs were used to capture the dependence structure between ski lifts (nodes): χ² statistics on the labels of the training set, mutual information between labels, the correlation matrix between the outputs of over-fitted neural networks, the norm of the difference between vectors of labels, and two graphs defined on the differences of vectors of historical labels and on the differences of historical averages of skiing times. The AUC score and ACC of the structured and unstructured predictors, along with the total computational time, are shown in Table 1. It can be observed that GCRFBCb and GCRFBCnb outperformed the unstructured and the other structured predictors in all cases. Based on the evaluated parameters, it can be concluded that the dependence structure has a significant impact on overall prediction performance, even though, due to the low values of the norm of the variance, GCRFBCb and GCRFBCnb have equal AUC scores. In summary, the advantages of structured models over unstructured ones are evident, but in this particular task, due to the equal prediction performance and its lower computational and memory complexity, GCRFBCnb is the best choice for this specific application.

4.2.2 MULTI-LABEL CLASSIFICATION OF MUSIC ACCORDING TO EMOTION

The dataset used in this work consists of 100 songs from 7 different genres. The collection was created from 233 musical albums by choosing three songs from each album. 8 rhythmic and 64 timbre features were extracted. The music is labeled with 6 categories of emotions: amazed-surprised, happy-pleased, relaxing-calm, quiet-still, sad-lonely and angry-fearful (Trohidis et al., 2008). The total number of instances in the dataset was 593. Four different weighted graphs were used: χ² statistics on the labels of the training set, mutual information between labels, the correlation matrix between the outputs of over-fitted neural networks and the norm of the difference between vectors of labels. The same unstructured predictors as in the ski lift congestion problem were used, along with three structured support vector machine classifiers. The performance of the models was evaluated by 10-fold cross-validation. The AUC score and ACC of the structured and unstructured predictors, along with the total computational time, are shown in Table 2. It can be seen that GCRFBCb achieved the best prediction performance. The ACC of the GCRFBC models is significantly better than that of the SSVMs. The AUC score and ACC of GCRFBCb are higher than the best result (AUC = 0.8237) presented in the original paper (Trohidis et al., 2008). As in the previous cases, the computational time of GCRFBCb is significantly longer than that of GCRFBCnb and the SSVM models.

4.2.3 GENE FUNCTION CLASSIFICATION

This dataset is formed from micro-array expression data and phylogenetic profiles for 2417 genes (instances). The number of features is 103, and each gene is associated with a set of 14 groups (Elisseeff & Weston, 2002). The same unstructured predictors, structured predictors and weighted graphs as in the music emotion classification were used. The 10-fold cross-validation results of the classification are shown in Table 3. It can be observed that both GCRFBCb and GCRFBCnb achieved significantly better results than the unstructured predictors. However, the neural network trained on all data together achieved the same ACC score as GCRFBCb. The AUC of GCRFBCb outperformed the random forest classifier by 19%, whereas SSVM-tree has better ACC than GCRFBCnb. GCRFBCb also outperformed GCRFBCnb but, as expected, its computation time was longer. In addition, the computation time of the CRF models is longer than that of GCRFBCb.
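All of the label-based similarity graphs above reduce to filling a symmetric node-by-node weight matrix from the training labels. A minimal sketch for the mutual-information graph, assuming Y is an (instances × nodes) 0/1 label matrix; the function name is illustrative.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mutual_information_graph(Y):
    """Pairwise mutual information between binary node labels."""
    n = Y.shape[1]
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            S[i, j] = S[j, i] = mutual_info_score(Y[:, i], Y[:, j])
    return S  # symmetric, zero diagonal; used as the S^l_ij weights in the GCRF
```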
4.2.4 HIGHWAY CONGESTION

The E70-E75 motorway, 504 kilometers long, is one of the major transit motorways in Serbia. It crosses the country from north-west to south, starting at the Batrovci border crossing with the Republic of Croatia and ending at the Preševo border crossing with the Republic of North Macedonia. One of the biggest problems on the E70-E75 motorway is the high congestion that frequently occurs; one of the reasons lies in the lack of open toll stations. In order to mitigate the congestion problem, it is necessary to predict its occurrence and open enough toll stations. The data used in this research includes information on car entrances and exits for the year 2017. Two different sections were analyzed: Belgrade - Adaševci and Niš - Belgrade. The section Belgrade - Adaševci was analyzed for the period of January 2017, whereas the section Niš - Belgrade was analyzed for the period April - July 2017. Congestion was labeled using a KDE-based technique similar to the one presented for the ski lift congestion problem. Starting from the raw datasets for the sections Niš - Belgrade and Belgrade - Adaševci, with 5,132,918 and 487,767 instances respectively, a new dataset for the section Niš - Belgrade was generated by observing shifts in time windows of 10 minutes, due to the large number of vehicles, whereas for the section Belgrade - Adaševci the shifts were observed in time windows of 20 minutes. The total numbers of instances for the sections Belgrade - Adaševci and Niš - Belgrade are 50,964 and 235,872, and the numbers of highway exits (outputs) are 6 and 18, respectively. The extracted features are similar to the ones used for the ski congestion problem. The χ² statistics, mutual information, correlation matrix and difference of vectors of historical labels were used to capture the dependence structure, and the same unstructured predictors as in the ski lift congestion problem were evaluated. The classification results, validated by 10-fold cross-validation, are presented in Table 4. GCRFBCnb achieved the highest AUC and ACC scores on the section Belgrade - Adaševci, whereas GCRFBCb has better prediction performance on the section Niš - Belgrade. Moreover, in the case of the section Niš - Belgrade, GCRFBCb has a worse ACC score than the fully connected CRF, whereas CRF-tree outperformed GCRFBCnb on the section Belgrade - Adaševci.

5 CONCLUSION

In this paper, a new model, called Gaussian Conditional Random Fields for Binary Classification (GCRFBC), was presented. The model is based on a latent GCRF structure, which means that an otherwise intractable structured classification problem becomes tractable and can be solved efficiently. Moreover, the improvements previously applied to regression GCRF can easily be extended to GCRFBC. Two different variants of GCRFBC were derived: GCRFBCb and GCRFBCnb. Empirical Bayes (marginalization of the latent variables) by local variational methods is used in the optimization procedure of GCRFBCb, whereas the MAP estimate of the latent variables is applied in GCRFBCnb. Based on the presented methodology and the experimental results obtained on synthetic and real-world datasets, it can be concluded that both GCRFBCb and GCRFBCnb have better prediction performance than the analysed structured and unstructured predictors. Additionally, GCRFBCb has better performance in terms of AUC score, ACC and the lower bound of the conditional log likelihood \mathcal{L}(Y|X, θ) than GCRFBCnb in cases where the norm of the variances of the latent variables is high.
However, in cases where the norm of the variances is close to zero, both models have equal prediction performance. Due to the high memory and computational complexity of GCRFBCb compared to GCRFBCnb, it is reasonable to use GCRFBCnb in such cases. In situations where the norm of the variances is high, a trade-off between complexity and accuracy can be made. Further studies should address extending GCRFBC to structured multi-label classification problems and lowering the computational complexity of GCRFBCb by considering efficient approximations.

A DERIVATION OF LOWER BOUND OF CONDITIONAL LIKELIHOOD

In this section we derive the lower bound of the conditional likelihood. In order to obtain a form of the joint distribution that can be integrated easily, the lower bound of the sigmoid function was used (Jaakkola & Jordan, 2000). The lower bound of the joint distribution P(y_j, z_j|x_j, θ) can be expressed as:

\[
P(y_j, z_j \mid x_j, \theta) = P(y_j \mid z_j) P(z_j \mid x_j, \theta) \geq P(y_j, z_j \mid x_j, \theta, \xi_j) \tag{22}
\]

\[
P(y_j, z_j \mid x_j, \theta, \xi_j) = \prod_{i=1}^{N} \sigma(\xi_{ji}) \exp\Bigl( z_{ji} y_{ji} - \frac{z_{ji} + \xi_{ji}}{2} - \lambda(\xi_{ji})(z_{ji}^2 - \xi_{ji}^2) \Bigr) \cdot \frac{1}{(2\pi)^{N/2} \lvert \Sigma_j \rvert^{1/2}} \exp\Bigl( -\frac{1}{2} (z_j - \mu_j)^T \Sigma_j^{-1} (z_j - \mu_j) \Bigr) \tag{23}
\]

A simplified form of Eq. 23 can be obtained by rearranging terms:

\[
P(y_j, z_j \mid x_j, \theta, \xi_j) = T(\xi_j) \exp\Bigl( z_j^T \bigl(y_j - \tfrac{1}{2}\mathbf{1}\bigr) - z_j^T \Lambda_j z_j - \frac{1}{2} z_j^T \Sigma_j^{-1} z_j + z_j^T \Sigma_j^{-1} \mu_j \Bigr) \tag{24}
\]

\[
T(\xi_j) = \frac{\exp\bigl( -\frac{1}{2} \mu_j^T \Sigma_j^{-1} \mu_j \bigr)}{(2\pi)^{N/2} \lvert \Sigma_j \rvert^{1/2}} \prod_{i=1}^{N} \sigma(\xi_{ji}) \exp\Bigl( -\frac{\xi_{ji}}{2} + \lambda(\xi_{ji}) \xi_{ji}^2 \Bigr) \tag{25}
\]

The lower bound of the likelihood P(y_j|x_j, θ, ξ_j) can be obtained by marginalizing out z_j:

\[
P(y_j \mid x_j, \theta, \xi_j) = \int P(y_j, z_j \mid x_j, \theta, \xi_j) \, dz_j = T(\xi_j) \int \exp\Bigl( -\frac{1}{2} z_j^T (\Sigma_j^{-1} + 2\Lambda_j) z_j + z_j^T (\Sigma_j^{-1} + 2\Lambda_j)(\Sigma_j^{-1} + 2\Lambda_j)^{-1} \bigl( (y_j - \tfrac{1}{2}\mathbf{1}) + \Sigma_j^{-1} \mu_j \bigr) \Bigr) dz_j \tag{26}
\]

Completing the square gives:

\[
P(y_j \mid x_j, \theta, \xi_j) = T(\xi_j) \exp\Bigl( \frac{1}{2} m_j^T S_j^{-1} m_j \Bigr) \int \exp\Bigl( -\frac{1}{2} (z_j - m_j)^T S_j^{-1} (z_j - m_j) \Bigr) dz_j \tag{27}
\]

where S_j^{-1} = Σ_j^{-1} + 2Λ_j and m_j = S_j((y_j − ½·\mathbf{1}) + Σ_j^{-1} μ_j).
This integration is easily performed by noting that it is the integral of an unnormalized Gaussian distribution, which yields:

\[
P(y_j \mid x_j, \theta, \xi_j) = (2\pi)^{N/2} \lvert S_j \rvert^{1/2} \, T(\xi_j) \exp\Bigl( \frac{1}{2} m_j^T S_j^{-1} m_j \Bigr) \tag{28}
\]

The final form of the lower bound of the conditional log likelihood \mathcal{L}_j(y_j|x_j, θ, ξ_j) is:

\[
\mathcal{L}_j(y_j \mid x_j, \theta, \xi_j) = \log P(y_j \mid x_j, \theta, \xi_j) = \sum_{i=1}^{N} \Bigl( \log \sigma(\xi_{ji}) - \frac{\xi_{ji}}{2} + \lambda(\xi_{ji}) \xi_{ji}^2 \Bigr) - \frac{1}{2} \mu_j^T \Sigma_j^{-1} \mu_j + \frac{1}{2} m_j^T S_j^{-1} m_j + \frac{1}{2} \log \lvert S_j \rvert - \frac{1}{2} \log \lvert \Sigma_j \rvert \tag{29}
\]

B PARTIAL DERIVATIVE OF LOWER BOUND OF CONDITIONAL LOG LIKELIHOOD

The partial derivative of the lower bound of the conditional log likelihood (GCRFBCb) with respect to α_k is computed as:

\[
\frac{\partial \mathcal{L}_j(y_j \mid x_j, \theta, \xi_j)}{\partial \alpha_k} = -\frac{1}{2} \operatorname{Tr}\Bigl( S_j \frac{\partial S_j^{-1}}{\partial \alpha_k} \Bigr) + \frac{\partial m_j^T}{\partial \alpha_k} S_j^{-1} m_j + \frac{1}{2} m_j^T \frac{\partial S_j^{-1}}{\partial \alpha_k} m_j - \frac{\partial \mu_j^T}{\partial \alpha_k} \Sigma_j^{-1} \mu_j - \frac{1}{2} \mu_j^T \frac{\partial \Sigma_j^{-1}}{\partial \alpha_k} \mu_j + \frac{1}{2} \operatorname{Tr}\Bigl( \Sigma_j \frac{\partial \Sigma_j^{-1}}{\partial \alpha_k} \Bigr) \tag{30}
\]

where:

\[
\Bigl( \frac{\partial S_j^{-1}}{\partial \alpha_k} \Bigr)_{ih} = \Bigl( \frac{\partial \Sigma_j^{-1}}{\partial \alpha_k} \Bigr)_{ih} = \begin{cases} 2, & \text{if } i = h \\ 0, & \text{if } i \neq h \end{cases} \tag{31}
\]

\[
\frac{\partial m_j^T}{\partial \alpha_k} = -\Bigl( \bigl(y_j - \tfrac{1}{2}\mathbf{1}\bigr)^T + \mu_j^T \Sigma_j^{-1} \Bigr) S_j \frac{\partial S_j^{-1}}{\partial \alpha_k} S_j + \frac{\partial \mu_j^T}{\partial \alpha_k} \Sigma_j^{-1} S_j + \mu_j^T \frac{\partial \Sigma_j^{-1}}{\partial \alpha_k} S_j \tag{32}
\]

\[
\frac{\partial \mu_j^T}{\partial \alpha_k} = \Bigl( 2 R_k(x) - \frac{\partial \Sigma_j^{-1}}{\partial \alpha_k} \mu_j \Bigr)^T \Sigma_j^T \tag{33}
\]

The partial derivatives with respect to β_l are defined analogously:

\[
\frac{\partial \mathcal{L}_j(y_j \mid x_j, \theta, \xi_j)}{\partial \beta_l} = -\frac{1}{2} \operatorname{Tr}\Bigl( S_j \frac{\partial S_j^{-1}}{\partial \beta_l} \Bigr) + \frac{\partial m_j^T}{\partial \beta_l} S_j^{-1} m_j + \frac{1}{2} m_j^T \frac{\partial S_j^{-1}}{\partial \beta_l} m_j - \frac{\partial \mu_j^T}{\partial \beta_l} \Sigma_j^{-1} \mu_j - \frac{1}{2} \mu_j^T \frac{\partial \Sigma_j^{-1}}{\partial \beta_l} \mu_j + \frac{1}{2} \operatorname{Tr}\Bigl( \Sigma_j \frac{\partial \Sigma_j^{-1}}{\partial \beta_l} \Bigr) \tag{34}
\]

where:

\[
\Bigl( \frac{\partial S_j^{-1}}{\partial \beta_l} \Bigr)_{ih} = \Bigl( \frac{\partial \Sigma_j^{-1}}{\partial \beta_l} \Bigr)_{ih} = \begin{cases} \sum_{n=1}^{N} e_{in}^{l} S_{in}^{l}(x), & \text{if } i = h \\ -e_{ih}^{l} S_{ih}^{l}(x), & \text{if } i \neq h \end{cases} \tag{35}
\]

\[
\frac{\partial m_j^T}{\partial \beta_l} = -\Bigl( \bigl(y_j - \tfrac{1}{2}\mathbf{1}\bigr)^T + \mu_j^T \Sigma_j^{-1} \Bigr) S_j \frac{\partial S_j^{-1}}{\partial \beta_l} S_j + \frac{\partial \mu_j^T}{\partial \beta_l} \Sigma_j^{-1} S_j + \mu_j^T \frac{\partial \Sigma_j^{-1}}{\partial \beta_l} S_j \tag{36}
\]

\[
\frac{\partial \mu_j^T}{\partial \beta_l} = \Bigl( -\frac{\partial \Sigma_j^{-1}}{\partial \beta_l} \mu_j \Bigr)^T \Sigma_j^T \tag{37}
\]

In the same manner, the partial derivative of the lower bound with respect to ξ_{ji} is:

\[
\frac{\partial \mathcal{L}_j(y_j \mid x_j, \theta, \xi_j)}{\partial \xi_{ji}} = -\frac{1}{2} \operatorname{Tr}\Bigl( 2 S_j \frac{\partial \Lambda_j}{\partial \xi_{ji}} \Bigr) - \Bigl[ 2 \bigl(y_j - \tfrac{1}{2}\mathbf{1}\bigr)^T S_j \frac{\partial \Lambda_j}{\partial \xi_{ji}} S_j \Bigr] S_j^{-1} m_j + m_j^T \frac{\partial \Lambda_j}{\partial \xi_{ji}} m_j + \Bigl( \frac{1}{\sigma(\xi_{ji})} + \frac{1}{2} \xi_{ji} \Bigr) \frac{\partial \sigma(\xi_{ji})}{\partial \xi_{ji}} + \frac{1}{2} \Bigl( \sigma(\xi_{ji}) - \frac{3}{4} \Bigr) \tag{38}
\]

where ∂Λ_j/∂ξ_{ji} is a diagonal matrix whose only nonzero entry is at position (i, i):

\[
\frac{\partial \Lambda_j}{\partial \xi_{ji}} = \operatorname{diag}\Bigl( 0, \ldots, 0, \frac{\partial \lambda(\xi_{ji})}{\partial \xi_{ji}}, 0, \ldots, 0 \Bigr) \tag{39}
\]

\[
\frac{\partial \sigma(\xi_{ji})}{\partial \xi_{ji}} = \sigma(\xi_{ji}) \bigl( 1 - \sigma(\xi_{ji}) \bigr) \tag{40}
\]

\[
\frac{\partial \lambda(\xi_{ji})}{\partial \xi_{ji}} = \frac{1}{2 \xi_{ji}} \frac{\partial \sigma(\xi_{ji})}{\partial \xi_{ji}} - \frac{1}{2} \Bigl( \sigma(\xi_{ji}) - \frac{1}{2} \Bigr) \frac{1}{\xi_{ji}^2} \tag{41}
\]

C PARTIAL DERIVATIVE OF CONDITIONAL LOG LIKELIHOOD

The derivatives of the conditional log likelihood (GCRFBCnb) with respect to α_k and β_l are, respectively:

\[
\frac{\partial \mathcal{L}_{ji}(y_{ji} \mid x_j, \theta, \mu_{ji})}{\partial \alpha_k} = \bigl( y_{ji} - \sigma(\mu_{ji}) \bigr) \frac{\partial \mu_{ji}}{\partial \alpha_k} \tag{42}
\]

\[
\frac{\partial \mathcal{L}_{ji}(y_{ji} \mid x_j, \theta, \mu_{ji})}{\partial \beta_l} = \bigl( y_{ji} - \sigma(\mu_{ji}) \bigr) \frac{\partial \mu_{ji}}{\partial \beta_l} \tag{43}
\]

where ∂μ_{ji}/∂α_k and ∂μ_{ji}/∂β_l are elements of the vectors ∂μ_j/∂α_k and ∂μ_j/∂β_l, which can be obtained from Eqs. 33 and 37, respectively.

D SYNTHETIC DATASET RESULTS

In order to generate and label the graph nodes, the edge weights S and the unstructured predictor values R were randomly generated from a uniform distribution. In addition, it was necessary to choose the values of the parameters α and β. Greater values of α indicate that the model is more confident in the performance of the unstructured predictors, whereas for larger values of β the model puts more emphasis on the dependence structure of the output variables. Six different combinations of values of the parameters α and β were used. In the first group, α and β have similar values, so the unstructured predictors and the dependence structure between the outputs have similar importance. In the second group, α has higher values than β, which means that the unstructured predictors are more important than the dependence structure. In the third group, β has higher values than α, meaning that the dependence structure is more important than the unstructured predictors. Along with the AUC and the conditional log likelihood, the norm of the variances of the latent variables (the diagonal elements of the covariance matrix) was evaluated; these results are presented in Table 5.
In addition, the results of the experiments are presented in Fig. 1, where, for different values of α and β, we show the differences between GCRFBCb and GCRFBCnb in terms of (a) AUC scores, (b) log likelihoods, and (c) the norm of the variances of the latent variables.
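A minimal sketch of this generation procedure for a single instance, assuming fixed α and β; the graph sizes, the uniform ranges and all variable names are illustrative choices rather than the exact settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, L = 10, 2, 2                                  # nodes, predictors, graphs
alpha = np.array([0.1, 0.1])
beta = np.array([5.0, 5.0])                         # structure dominates here

R = rng.uniform(-1.0, 1.0, size=(K, N))             # unstructured predictions
S = rng.uniform(0.0, 1.0, size=(L, N, N))
S = (S + S.transpose(0, 2, 1)) / 2.0                # symmetric edge weights
for l in range(L):
    np.fill_diagonal(S[l], 0.0)                     # no self-edges

# Precision matrix and mean of the latent GCRF (Eqs. 3 and 4 of the paper)
Q = -np.einsum('l,lij->ij', beta, S)
np.fill_diagonal(Q, alpha.sum() + np.einsum('l,lij->i', beta, S))
Sigma = np.linalg.inv(2.0 * Q)
mu = Sigma @ (2.0 * alpha @ R)

z = rng.multivariate_normal(mu, Sigma)              # latent GCRF sample
y = (rng.uniform(size=N) < 1.0 / (1.0 + np.exp(-z))).astype(int)   # Eq. 5
```

With β much larger than α, the sampled labels are strongly coupled through the graphs, which is the regime in which GCRFBCb is expected to outperform GCRFBCnb.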
1. What is the main contribution of the paper, and how does it differ from previous work?
2. How effective is the proposed approach compared to other methods, and what are the limitations?
3. Are there any concerns regarding the experimental setup and comparisons with other models?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What are some suggestions for improving the paper and making it stronger?
Review
Review The work involves modifying Gaussian conditional random fields to work for classification problems instead of regression problems. The main idea is to apply a Bernoulli distribution on top of the regression values so that they work with binary classification problems. Two variations are discussed along with the inference and learning methodology. The inference can be done using numerical approximation and the learning using variational methods, though the exact likelihood remains intractable. Comparisons with other modeling strategies are made through experiments. The paper is incremental and doesn't really provide improvements to learning parameters (or at least there is no theory showing this in the paper). The experiments do not seem satisfactory, as discussed below.

a) Applying a Bernoulli distribution on the output of the GCRF seems trivial. It is not very clear when the GCRFBCb model would be better than the GCRFBCnb. The learning procedure is intractable, and it is hard to follow why this might provide better results.

b) The datasets (music classification and gene classification) don't seem to be good datasets for structured prediction, i.e., the interaction needed between the nodes is not clear. Since they are multi-label problems, one could have just modeled the system with N independent nodes or designed a multinomial distribution instead of restricting to binary classification.

c) There should be more thorough fine-tuning of the other models. For example, in the ski lifts experiment, the CRF does much worse than logistic regression in the results. This is most likely because the parameters were not initialized properly using standard tricks such as initialization from logistic regression. Typically, for truly structured problems, CRFs do better than their logistic regression counterparts. It is also not clear how the pairwise potentials of the other models (CRF and SSVM) were modeled.

It would really help to make this paper stronger to show that the new modeling technique does better than properly tuned CRFs on more strongly structured datasets. It would also be good to have a discussion on when this model would do worse than the other structured models and why.
ICLR
1. What is the main contribution of the paper in the field of structured classification?
2. What are the strengths of the proposed approach, particularly in terms of its performance and justification?
3. Are there any concerns or questions regarding the technical aspects of the paper, such as derivations, minor typos, and numerical computations?
4. How could the paper be improved in terms of clarity and grammar?
5. What is the reviewer's opinion on the novelty of the contribution, and how does it compare to other works in the field?
6. Are there any suggestions for further improvements or additions to the paper, such as providing a better overview of competing structured classification methods?
Review
Review TITLE Gaussian Conditional Random Fields for Classification

REVIEW SUMMARY A well justified approach to structured classification with demonstrated good performance.

PAPER SUMMARY The paper presents methods for structured classification based on a Gaussian conditional random field combined with a sigmoid Bernoulli likelihood. Methods for inference and parameter learning are presented both for a "Bayesian" and a maximum likelihood version. The method is demonstrated on several data sets.

QUALITY In general, the technical quality of the paper is good. Except for minor typos, the derivations appear to be correct, although I did not check everything in detail.

CLARITY The paper could be improved by a careful revision with focus on improving grammar, but as it stands the paper is easy to follow. It is not clear to me exactly how the numbers in Table 1 were computed. Is this based on 10-fold cross-validation as in the following tables?

ORIGINALITY I am not familiar enough with the field to assess the novelty of the contribution. It would be great if the paper provided a better overview of competing structured classification methods.

FURTHER COMMENTS
"structured classification"?
"It was shown" -> We show
"for given"
Is the second sum over k=1 to K in eq. 1 a mistake?
"We void" -> We avoid
ICLR
Title Gaussian Conditional Random Fields for Classification Abstract In this paper, a Gaussian conditional random field model for structured binary classification (GCRFBC) is proposed. The model is applicable to classification problems with undirected graphs, intractable for standard classification CRFs. The model representation of GCRFBC is extended by latent variables which yield some appealing properties. Thanks to the GCRF latent structure, the model becomes tractable, efficient, and open to improvements previously applied to GCRF regression. Two different forms of the algorithm are presented: GCRFBCb (GCRFBC Bayesian) and GCRFBCnb (GCRFBC non-Bayesian). An extended method of local variational approximation of the sigmoid function is used for solving empirical Bayes in the GCRFBCb variant, whereas the MAP value of the latent variables is the basis for learning and inference in the GCRFBCnb variant. The inference in GCRFBCb is solved by Newton-Cotes formulas for one-dimensional integration. Both models are evaluated on synthetic and real-world data. We show that both models achieve better prediction performance than relevant baselines. Advantages and disadvantages of the proposed models are discussed. 1 INTRODUCTION The increased quantity and variety of sources of data with correlated outputs, so-called structured data, has created an opportunity for exploiting additional information between dependent outputs to achieve better prediction performance. One of the most successful probabilistic models for structured output classification problems is conditional random fields (CRFs) (Sutton & McCallum, 2006). The main advantages of CRFs lie in their discriminatory nature, resulting in the relaxation of the independence assumptions and of the label bias problem that are present in many graphical models. Aside from their many advantages, CRFs also have drawbacks, mostly resulting in high computational cost or intractability of inference and learning. A wide range of different approaches for tackling these problems has been proposed, and they motivate our work, too. One of the popular methods for structured regression based on CRFs – Gaussian conditional random fields (GCRF) – has the form of a multivariate Gaussian distribution (Radosavljevic et al., 2010). The main assumption of the model is that the relations between outputs are represented in quadratic form. It has a convex loss function and, consequently, efficient inference and learning, and expensive sampling methods are not used. In this paper, a new model of Gaussian conditional random fields for binary classification (GCRFBC) is proposed. GCRFBC builds upon the regression GCRF model, which is used to define latent variables over which output dependencies are defined. The model assumes that the discrete outputs yi are conditionally independent conditioned on continuous latent variables zi which follow a distribution modeled by a GCRF. That way, relations between discrete outputs are not expressed directly. Two different inference and learning approaches are proposed in this paper. The first one is based on evaluating empirical Bayes by marginalizing the latent variables (GCRFBCb), whereas the MAP value of the latent variables is the basis for learning and inference in the second model (GCRFBCnb). In order to derive the GCRFBCb model and its learning procedure, the variational approximation of Bayesian logistic regression (Jaakkola & Jordan, 2000) is generalized.
Compared to CRFs and structured SVM classifiers, the GCRFBC models have some appealing properties: • The model is applicable to classification problems with undirected graphs, intractable for standard classification CRFs. Thanks to the GCRF latent structure, the model becomes tractable, efficient and open to improvements previously applied to GCRF regression models. • Defining correlations directly between discrete outputs may introduce unnecessary noise into the model (Tan et al., 2010). This problem can be solved by defining structured relations on a latent continuous variable space. • In case the unstructured predictors are unreliable, which is signaled by their large variance (diagonal elements in the covariance matrix), it is simple to marginalize over the latent variable space and obtain better results. The GCRFBC model relies on the assumption that the underlying distribution of the latent variables is multivariate normal; therefore, when this distribution cannot be fitted well to the data (e.g., when the distribution of the latent variables is multimodal), the model will not perform as well as expected. The proposed models are experimentally tested on both synthetic and real-world datasets in terms of predictive performance and computation time. In the experiments with synthetic datasets, the results clearly indicate that the empirical Bayes approach (GCRFBCb) better exploits the output dependence structure, more so as the variance of the latent variables increases. We also tested both approaches on the real-world tasks of predicting ski lift congestion, gene function classification, classification of music according to emotion, and highway congestion. Both GCRFBC models outperformed ridge logistic regression, lasso logistic regression, neural network, random forest, and structured SVM classifiers, demonstrating that the proposed models can exploit output dependencies in a real-world setting. 2 RELATED WORK An extensive review of binary and multi-label classification with structured output is provided in Su (2015). A number of different studies related to graph-based methods for regression can be found in the literature (Fox, 2015). CRFs were successfully applied to a variety of structured tasks, such as low-resource named entity recognition (Cotterell & Duh, 2017), image segmentation (Zhang et al., 2015), chord recognition (Masada & Bunescu, 2017) and word segmentation (Zia et al., 2018). Moreover, the implementation of deep neural networks as potential functions has been presented in the form of structured prediction energy networks (SPEN) (Belanger & McCallum, 2016; Belanger et al., 2017), and an adaptation of normalizing flows to the SPEN structure is presented in Lu & Huang (2019). The mixture of CRFs capable of modeling data that come from multiple different sources or domains is presented in Kim (2017).
The method is related to the well-known hidden-unit CRF (HUCRF) (Maaten et al., 2011), for which the conditional likelihood and an expectation maximization (EM) learning procedure were derived. The mixture of CRF models was implemented in several real-world applications, resulting in improved predictions. Recently, a model based on the unification of deep learning and CRFs was developed by Chen et al. (2016). The deep CRF model showed better performance compared to either shallow CRFs or deep learning methods on their own. Similarly, the combination of CRFs and deep convolutional neural networks was evaluated on the example of environmental microorganism labeling (Kosov et al., 2018). The spatial relations among outputs were taken into consideration and the experimental results were satisfactory. The GCRF model was first implemented for the task of low-level computer vision (Tappen et al., 2007). Since then, various adaptations and approximations of GCRF have been proposed (Radosavljevic et al., 2014). The parameter space of the GCRF model has been extended to facilitate joint modelling of positive and negative influences (Glass et al., 2016); in addition, that model is extended with a bias term in the link weights and solved as part of a convex optimization. A semi-supervised marginalized Gaussian conditional random fields (MGCRF) model for dealing with missing variables was proposed by Stojanovic et al. (2015). The benefits of the model were demonstrated on partially observed data, with better prediction performance than alternative semi-supervised structured models. A comprehensive review of continuous conditional random fields (CCRF) was provided in Radosavljevic et al. (2010). Sparse conditional random fields obtained by ℓ1 regularization were first proposed and evaluated by Wytock & Kolter (2013). Additionally, Frot et al. (2018) presented a GCRF with a latent variable decomposition and derived convergence bounds for the estimator that is well behaved in the high-dimensional regime. An adaptation of GCRF to discrete outputs was briefly discussed in Radosavljevic (2011) as a part of future work. This discussion motivates our work, but our approach differs in its technical aspects.

3 METHODOLOGY

In this section we first present the already known GCRF model for regression, and then we propose the GCRFBC model for binary classification together with two approaches to inference and learning.

3.1 BACKGROUND MATERIAL

GCRF is a discriminative graph-based regression model (Radosavljevic et al., 2010). The nodes of the graph are variables $y = (y_1, y_2, \ldots, y_N)$, which need to be predicted given a set of features $x$. The attributes $x = (x_1, x_2, \ldots, x_N)$ interact with each node $y_i$ independently of one another, while the relations between outputs are expressed by a pairwise interaction function. In order to learn the parameters of the model, a training set of attribute vectors $x$ and real-valued response variables $y$ is provided. The generalized form of the conditional distribution $P(y|x,\alpha,\beta)$ is:

$$P(y|x,\alpha,\beta) = \frac{1}{Z(x,\alpha,\beta)}\exp\left(-\sum_{i=1}^{N}\sum_{k=1}^{K}\alpha_k\big(y_i - R_k(x_i)\big)^2 - \sum_{i\neq j}\sum_{l=1}^{L}\beta_l S^l_{ij}(y_i - y_j)^2\right) \tag{1}$$

The first sum models the relations between the outputs $y_i$ and the corresponding input vectors $x_i$, and the second one models pairwise relations between nodes. $R_k(x_i)$ represents an unstructured predictor of $y_i$ for each node in the graph, and $S^l_{ij}$ is a value that expresses the similarity between nodes $i$ and $j$ in graph $l$. An unstructured predictor can be any regression model that predicts the output $y_i$ from given attributes $x_i$.
$K$ is the total number of unstructured predictors and $L$ is the total number of graphs (similarity functions). Graphs can express any kind of binary relations between nodes, e.g., spatial and temporal correlations between outputs. $Z$ is the partition function, and the vectors $\alpha$ and $\beta$ are learnable parameters. One of the main advantages of GCRF is the ability to express different relations between outputs by a variety of graphs and the ability to learn which graphs are significant for prediction. The quadratic form of the interaction and association potentials enables the conditional distribution $P(y|x,\alpha,\beta)$ to be expressed as a multivariate Gaussian distribution (Radosavljevic et al., 2010):

$$P(y|x,\alpha,\beta) = \frac{1}{(2\pi)^{N/2}|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}(y-\mu)^T\Sigma^{-1}(y-\mu)\right) \tag{2}$$

The precision matrix $\Sigma^{-1} = 2Q$ and the distribution mean $\mu = \Sigma b$ are defined by, respectively:

$$Q_{ij} = \begin{cases}\sum_{k=1}^{K}\alpha_k + \sum_{h=1}^{N}\sum_{l=1}^{L}\beta_l S^l_{ih}, & \text{if } i = j\\ -\sum_{l=1}^{L}\beta_l S^l_{ij}, & \text{if } i \neq j\end{cases} \tag{3}$$

$$b_i = 2\sum_{k=1}^{K}\alpha_k R_k(x_i) \tag{4}$$

Due to the concavity of the multivariate Gaussian distribution, the inference task $\arg\max_y P(y|x,\alpha,\beta)$ is straightforward: the maximum posterior estimate of $y$ is the distribution expectation $\mu$. The objective of the learning task is to optimize the parameters $\alpha$ and $\beta$ by maximizing the conditional log likelihood, $\arg\max_{\alpha,\beta}\sum_{y}\log P(y|x,\alpha,\beta)$. One way to ensure positive definiteness of the covariance matrix of GCRF is to require diagonal dominance (Strang et al., 1993), which can be ensured by imposing the constraints that all elements of $\alpha$ and $\beta$ be greater than 0 (Radosavljevic et al., 2010).

3.2 GCRFBC MODEL REPRESENTATION

One way of adapting GCRF to the classification problem is by approximating discrete outputs with suitably defined continuous outputs. Namely, GCRF can provide a dependence structure over continuous variables, which can then be passed through a sigmoid function. That way, the relationship between regression GCRF and classification GCRF is similar to the relationship between linear and logistic regression, but with dependent variables. Aside from allowing us to define a classification variant of GCRF, this may result in additional appealing properties: (i) The model is applicable to classification problems with undirected graphs, intractable for standard classification CRFs. Thanks to the GCRF latent structure, the model becomes tractable, efficient and open to improvements previously applied to GCRF regression models. (ii) Defining correlations directly between discrete outputs may introduce unnecessary noise into the model (Tan et al., 2010). We avoid this problem by defining structured relations on a latent continuous variable space. (iii) In case the unstructured predictors are unreliable, which is signaled by their large variance (diagonal elements in the covariance matrix), it is simple to marginalize over the latent variable space and obtain better results. It is assumed that $y_i$ are discrete binary outputs and $z_i$ are continuous latent variables assigned to each $y_i$. Each output $y_i$ is conditionally independent of the others, given $z_i$. The conditional probability distribution $P(y_i|z_i)$ is defined as a Bernoulli distribution:

$$P(y_i|z_i) = \mathrm{Ber}\big(y_i|\sigma(z_i)\big) = \sigma(z_i)^{y_i}\big(1-\sigma(z_i)\big)^{1-y_i} \tag{5}$$

where $\sigma(\cdot)$ is the sigmoid function. Due to the conditional independence assumption, the joint distribution of the outputs $y_i$ can be expressed as:

$$P(y_1, y_2, \ldots, y_N|z) = \prod_{i=1}^{N}\sigma(z_i)^{y_i}\big(1-\sigma(z_i)\big)^{1-y_i} \tag{6}$$

Furthermore, the conditional distribution $P(z|x)$ is the same as in the classical GCRF model and has the canonical form defined by the multivariate Gaussian distribution.
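To make Eqs. (2)–(6) concrete, the following minimal numpy sketch builds $Q$ and $b$ for randomly generated $S$ and $R$, forms $\mu$ and $\Sigma$, and samples binary outputs through the sigmoid link, mirroring the synthetic-data protocol of Appendix D. The node count and uniform ranges are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, L = 20, 2, 2   # nodes, unstructured predictors, graphs (sizes are illustrative)

# Random symmetric, non-negative edge weights S^l and unstructured outputs R_k
S = rng.uniform(0.0, 1.0, size=(L, N, N))
S = (S + S.transpose(0, 2, 1)) / 2
for l in range(L):
    np.fill_diagonal(S[l], 0.0)
R = rng.uniform(-1.0, 1.0, size=(N, K))

def gcrf_mean_cov(alpha, beta, S, R):
    """Q per Eq. (3), b per Eq. (4); returns mu = Sigma b and Sigma per Eq. (2)."""
    W = np.tensordot(beta, S, axes=1)            # sum_l beta_l S^l, shape (N, N)
    Q = -W                                       # off-diagonal entries of Eq. (3)
    Q[np.diag_indices(len(R))] = alpha.sum() + W.sum(axis=1)  # diagonal of Eq. (3)
    Sigma = np.linalg.inv(2.0 * Q)               # Sigma^{-1} = 2Q
    b = 2.0 * R @ alpha                          # b_i = 2 sum_k alpha_k R_k(x_i)
    return Sigma @ b, Sigma

# e.g. a regime where the dependence structure dominates (beta >> alpha)
alpha, beta = np.array([0.1, 0.1]), np.array([5.0, 5.0])
mu, Sigma = gcrf_mean_cov(alpha, beta, S, R)
z = rng.multivariate_normal(mu, Sigma)                            # latent GCRF draw
y = (rng.uniform(size=N) < 1.0 / (1.0 + np.exp(-z))).astype(int)  # Eqs. (5)-(6)
```

Positive definiteness holds here by construction: with $\alpha, \beta > 0$ and non-negative $S$, the matrix $Q$ is strictly diagonally dominant, as required above.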
Hence, combining Eq. (6) with this Gaussian form, the joint distribution of the continuous latent variables $z$ and the outputs $y$, given $x$ and $\theta = (\alpha_1,\ldots,\alpha_K,\beta_1,\ldots,\beta_L)$, is the general form of the GCRFBC model, defined as:

$$P(y,z|x,\theta) = \prod_{i=1}^{N}\sigma(z_i)^{y_i}\big(1-\sigma(z_i)\big)^{1-y_i}\cdot\frac{1}{(2\pi)^{N/2}|\Sigma(x,\theta)|^{1/2}}\exp\left(-\frac{1}{2}\big(z-\mu(x,\theta)\big)^T\Sigma^{-1}(x,\theta)\big(z-\mu(x,\theta)\big)\right) \tag{7}$$

We consider two ways of inference and learning in the GCRFBC model: (i) GCRFBCb – with the conditional probability distribution $P(y|x,\theta)$, in which the variables $z$ are marginalized over, and (ii) GCRFBCnb – with the conditional probability distribution $P(y|x,\theta,\mu_z)$, in which the variables $z$ are substituted by their expectations.

3.3 INFERENCE IN GCRFBCB MODEL

Prediction of the discrete outputs $y$ for given features $x$ and parameters $\theta$ is analytically intractable due to the integration of the joint distribution $P(y,z|x,\theta)$ with respect to the latent variables. However, due to the conditional independence between nodes, it is possible to obtain $P(y_i=1|x,\theta)$:

$$P(y_i=1|x,\theta) = \int_z \sigma(z_i)P(z|x,\theta)\,dz \tag{8}$$

where $\sigma(z_i)$ models $P(y_i|z)$. As a result of the independence properties of the distribution, it holds that $P(y_i=1|z) = P(y_i=1|z_i)$, and it is possible to marginalize $P(z|x,\theta)$ with respect to the latent variables $z' = (z_1,\ldots,z_{i-1},z_{i+1},\ldots,z_N)$:

$$P(y_i=1|x,\theta) = \int_{z_i}\sigma(z_i)\left(\int_{z'}P(z',z_i|x,\theta)\,dz'\right)dz_i \tag{9}$$

where $\int_{z'}P(z',z_i|x,\theta)\,dz'$ is a normal distribution with mean $\mu_i$ and variance $\sigma_i^2 = \Sigma_{ii}$. Therefore, it holds:

$$P(y_i=1|x,\theta) = \int_{-\infty}^{+\infty}\sigma(z_i)\,\mathcal{N}(z_i|\mu_i,\sigma_i^2)\,dz_i \tag{10}$$

The evaluation of $P(y_i=0|x,\theta)$ is straightforward: $P(y_i=0|x,\theta) = 1 - P(y_i=1|x,\theta)$. The one-dimensional integral is still analytically intractable, but can be effectively evaluated by one-dimensional numerical integration. The proposed inference approach can be effectively used in the case of a huge number of nodes, due to the low computational cost of one-dimensional numerical integration.

3.4 INFERENCE IN GCRFBCNB MODEL

The inference procedure in GCRFBCnb is much simpler, because the marginalization with respect to the latent variables is not performed. To predict $y$, it is necessary to evaluate the posterior maximum of the latent variables, $z_{\max} = \arg\max_z P(z|x,\theta)$, which is straightforward due to the normal form of GCRF; hence $z_{\max} = \mu_z$. The conditional distribution $P(y_i=1|x,\mu_{z,i},\theta)$, where $\mu_{z,i}$ is the expectation of the latent variable $z_i$, can be expressed as:

$$P(y_i=1|x,\mu_z,\theta) = \sigma(\mu_{z,i}) = \frac{1}{1+\exp(-\mu_{z,i})} \tag{11}$$

3.5 LEARNING IN GCRFBCB MODEL

In comparison with inference, the learning procedure is more complicated. Evaluation of the conditional log likelihood is intractable, since the latent variables cannot be analytically marginalized. The conditional log likelihood is expressed as:

$$\mathcal{L}(Y|X,\theta) = \log\int_Z P(Y,Z|X,\theta)\,dZ = \sum_{j=1}^{M}\log\int_{z_j}P(y_j,z_j|x_j,\theta)\,dz_j = \sum_{j=1}^{M}\mathcal{L}_j(y_j|x_j,\theta) \tag{12}$$

$$\mathcal{L}_j(y_j|x_j,\theta) = \log\int_{z_j}\prod_{i=1}^{N}\sigma(z_{ji})^{y_{ji}}\big(1-\sigma(z_{ji})\big)^{1-y_{ji}}\,\frac{\exp\left(-\frac{1}{2}(z_j-\mu_j)^T\Sigma_j^{-1}(z_j-\mu_j)\right)}{(2\pi)^{N/2}|\Sigma_j|^{1/2}}\,dz_j \tag{13}$$

where $Y \in \mathbb{R}^{M\times N}$ is the complete dataset of outputs, $X \in \mathbb{R}^{M\times N\times A}$ is the complete dataset of features, $M$ is the total number of instances and $A$ is the total number of features. Please note that each instance is structured, so while different instances are independent of each other, variables within one instance are dependent. One way to approximate the integral in the conditional log likelihood is by local variational approximation. Jaakkola & Jordan (2000) derived a lower bound for the sigmoid function, which can be expressed as:

$$\sigma(x) \geq \sigma(\xi)\exp\left\{(x-\xi)/2 - \lambda(\xi)(x^2-\xi^2)\right\} \tag{14}$$

where $\lambda(\xi) = \frac{1}{2\xi}\left[\sigma(\xi)-\frac{1}{2}\right]$ and $\xi$ is a variational parameter.
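The bound in Eq. (14) is easy to verify numerically; a small numpy sketch (the grid and test values of $\xi$ are arbitrary) confirming that the right-hand side never exceeds $\sigma(x)$ and touches it at $x = \xi$, which is the tightness property discussed next:

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def jj_lower_bound(x, xi):
    """Jaakkola-Jordan bound, Eq. (14): sigma(xi) * exp((x-xi)/2 - lam(xi)(x^2-xi^2))."""
    lam = (sigma(xi) - 0.5) / (2.0 * xi)
    return sigma(xi) * np.exp((x - xi) / 2.0 - lam * (x**2 - xi**2))

xs = np.linspace(-6, 6, 241)
for xi in [0.5, 1.0, 3.0]:
    gap = sigma(xs) - jj_lower_bound(xs, xi)
    assert gap.min() >= -1e-9                              # bound never exceeds sigma(x)
    assert abs(sigma(xi) - jj_lower_bound(xi, xi)) < 1e-12  # tight exactly at x = xi
print("Eq. (14) holds on the test grid.")
```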
Eq. (14) is called the $\xi$-transformation of the sigmoid function, and it attains its maximum value when $\xi = x$. This approximation can be applied to the model defined by Eq. 13, but the variational approximation has to be further extended because of the product of sigmoid functions, such that:

$$P(y_j,z_j|x_j,\theta) = P(y_j|z_j)P(z_j|x_j,\theta) \geq P(y_j,z_j|x_j,\theta,\xi_j) \tag{15}$$

$$P(y_j,z_j|x_j,\theta,\xi_j) = \prod_{i=1}^{N}\sigma(\xi_{ji})\exp\left(z_{ji}y_{ji} - \frac{z_{ji}+\xi_{ji}}{2} - \lambda(\xi_{ji})(z_{ji}^2-\xi_{ji}^2)\right)\cdot\frac{1}{(2\pi)^{N/2}|\Sigma_j|^{1/2}}\exp\left(-\frac{1}{2}(z_j-\mu_j)^T\Sigma_j^{-1}(z_j-\mu_j)\right) \tag{16}$$

Eq. 16 can be arranged in a form suitable for integration; a detailed derivation of the lower bound of the conditional log likelihood is presented in Appendix A. The lower bound of the conditional log likelihood $\mathcal{L}_j(y_j|x_j,\theta,\xi_j)$ is defined as:

$$\mathcal{L}_j(y_j|x_j,\theta,\xi_j) = \log P(y_j|x_j,\theta,\xi_j) = \sum_{i=1}^{N}\left(\log\sigma(\xi_{ji}) - \frac{\xi_{ji}}{2} + \lambda(\xi_{ji})\xi_{ji}^2\right) - \frac{1}{2}\mu_j^T\Sigma_j^{-1}\mu_j + \frac{1}{2}m_j^TS_j^{-1}m_j + \frac{1}{2}\log|S_j| \tag{17}$$

where:

$$S_j^{-1} = \Sigma_j^{-1} + 2\Lambda_j, \qquad m_j = S_j\left(\Big(y_j-\tfrac{1}{2}I\Big) + \Sigma_j^{-1}\mu_j\right) \tag{18}$$

$$\Lambda_j = \mathrm{diag}\big(\lambda(\xi_{j1}),\lambda(\xi_{j2}),\ldots,\lambda(\xi_{jN})\big) \tag{19}$$

GCRFBCb uses the derivative of the conditional log likelihood in order to find the optimal values of the parameters $\alpha$, $\beta$ and the matrix of variational parameters $\xi \in \mathbb{R}^{M\times N}$. In order to ensure positive definiteness of the normal distributions involved, it is sufficient to constrain the parameters $\alpha > 0$ and $\beta > 0$. The partial derivatives of the lower bound of the conditional log likelihood are presented in Appendix B. For the constrained optimization, the truncated Newton algorithm was used (Nocedal & Wright, 2006; Facchinei et al., 2002). The target function is not convex, so finding a global optimum cannot be guaranteed.

3.6 LEARNING IN GCRFBCNB MODEL

In GCRFBCnb the mode of the posterior distribution of the continuous latent variables $z$ is evaluated directly, so there is no need for approximation. The conditional log likelihood can be expressed as:

$$\mathcal{L}(Y|X,\theta,\mu) = \log P(Y|X,\theta,\mu) = \sum_{j=1}^{M}\sum_{i=1}^{N}\log P(y_{ji}|x_j,\theta,\mu_{ji}) = \sum_{j=1}^{M}\sum_{i=1}^{N}\mathcal{L}_{ji}(y_{ji}|x_j,\theta,\mu_{ji}) \tag{20}$$

$$\mathcal{L}_{ji}(y_{ji}|x_j,\theta,\mu_{ji}) = y_{ji}\log\sigma(\mu_{ji}) + (1-y_{ji})\log\big(1-\sigma(\mu_{ji})\big) \tag{21}$$

The partial derivatives of the conditional log likelihood are presented in Appendix C.

4 EXPERIMENTAL EVALUATION

Both proposed models were tested and compared on synthetic data and real-world tasks.¹ All classifiers were compared in terms of the area under the ROC curve (AUC) and accuracy (ACC).² Moreover, the lower bound of the conditional log likelihood (in the case of GCRFBCb) and its actual value (in the case of GCRFBCnb) on the synthetic test dataset are also reported.

¹ The implementation can be found at https://github.com/andrijaster/GCRFBC_B_NB
² The PyStruct package does not have the option of returning SSVM and CRF confidence values for AUC evaluation.

4.1 SYNTHETIC DATASET

The main goal of the experiments on synthetic datasets was to examine the models under various controlled conditions, and to show the advantages and disadvantages of each. In all experiments on synthetic datasets, two different graphs were used (hence $\beta \in \mathbb{R}^2$) and two unstructured predictors (hence $\alpha \in \mathbb{R}^2$). The results of the experiments on synthetic datasets are presented in Appendix D. It can be noticed that in cases where the norm of the variances of the latent variables is small, both models have equal performance considering AUC and conditional log likelihood $\mathcal{L}(Y|X,\theta)$. This is the case when the values of the parameters $\alpha$ used in the data generating process are greater than or equal to the values of the parameters $\beta$.
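Since predictions in both variants ultimately reduce to the one-dimensional rules of Sections 3.3–3.4, the inference step used throughout these experiments can be sketched in a few lines. A minimal scipy illustration (the test mean and variances are arbitrary; the released implementation linked above uses Newton–Cotes formulas, whereas scipy's quad uses adaptive quadrature):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

def p_yi_one_bayes(mu_i, var_i):
    """GCRFBCb inference, Eq. (10): integrate sigma(z) * N(z | mu_i, var_i)."""
    val, _ = quad(lambda z: sigma(z) * norm.pdf(z, loc=mu_i, scale=np.sqrt(var_i)),
                  -np.inf, np.inf)
    return val

def p_yi_one_map(mu_i):
    """GCRFBCnb inference, Eq. (11): plug in the latent mean."""
    return sigma(mu_i)

# The two rules agree when the latent variance is small and diverge as it grows:
for var in [0.01, 1.0, 25.0]:
    print(var, p_yi_one_bayes(1.0, var), p_yi_one_map(1.0))
```

As the printed values illustrate, the marginalized probability shrinks toward 0.5 as the latent variance grows, which is exactly the regime in which GCRFBCb is expected to outperform GCRFBCnb.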
When the values of $\alpha$ dominate those of $\beta$, the information provided by the unstructured predictors is more important for the classification task than the information provided by the output structure. Therefore, the conditional distribution $P(y,z|x,\theta)$ is concentrated around its mean value and the MAP estimate is a satisfactory approximation. However, when the data are generated from a distribution with significantly higher values of $\beta$ than $\alpha$, GCRFBCb performs significantly better than GCRFBCnb. For larger values of the variance norm, this difference also grows. This means that the structure between outputs makes a significant contribution to solving the classification task. It can be concluded that GCRFBCb has at least as good prediction performance as GCRFBCnb. Also, it can be argued that the models were generally able to utilize most of the information (from both the features and the structure between outputs), which can be seen through the AUC values. In addition, the distribution of the local variational parameters was analyzed during learning. We noticed that in each epoch the variance of this distribution is small, so the parameters can be clustered and their number significantly reduced. Therefore, it is possible to significantly reduce the computational and memory costs of the GCRFBCb learning procedure, but that is out of the scope of this paper.

4.2 PERFORMANCE ON REAL-WORLD DATASETS

4.2.1 SKI LIFTS CONGESTION

The data used in this research include information on ski lift gate entrances in the Kopaonik ski resort, for the period March 15 to March 30 for the seasons from 2006 to 2011. The goal is to predict the occurrence of crowding on ski lifts 40 minutes in advance. The total number of instances in the dataset was 4,850 for each ski lift, i.e., 33,950 in total. A relatively simple method for crowding detection was devised for labelling the data. We assume that, if crowding occurs at some gate, the distributions of skiing times from other gates to that gate within some time window shift towards larger values. We model the probability distribution of skiing time between two gates by the well-known non-parametric method of kernel density estimation (KDE) (Silverman, 2018). The distribution shift is measured with respect to the mode of the distribution. The dataset is generated by observing shifts in time windows of 5 minutes. When the mode of the distribution of skiing times within that window is greater than the mode for the whole time-span, the instance is labeled 1 (crowding); otherwise, it is labeled 0 (no crowding). In order to obtain more information from the data distribution, an additional 18 features were extracted. Four different unstructured predictors trained on each class separately were used (ridge logistic regression, LASSO logistic regression, a neural network and a random forest), whereas two additional unstructured predictors, a decision tree and a neural network, were trained on all nodes together. Additionally, three structured support vector machine (SSVM) and two CRF classifiers were used (Müller & Behnke, 2014). The fully connected variants of the SSVM and CRF models are denoted SSVM-full and CRF-full, whereas the variants using the Chow-Liu tree method for specifying edge connections are denoted SSVM-tree and CRF-tree, respectively. In the SSVM-independent model, the nodes of the graph are not connected.
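A minimal sketch of this KDE-based labelling rule follows; the skiing times here are synthetic lognormal stand-ins rather than the actual gate data, and the grid and sample sizes are illustrative:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_mode(samples, grid):
    """Approximate the mode of a KDE fit as the argmax of the density on a grid."""
    density = gaussian_kde(samples)(grid)
    return grid[np.argmax(density)]

rng = np.random.default_rng(1)
# skiing times (seconds) between two gates: full history vs. one 5-minute window
history = rng.lognormal(mean=4.0, sigma=0.25, size=5000)
window = rng.lognormal(mean=4.3, sigma=0.25, size=60)   # shifted upward: crowding

grid = np.linspace(history.min(), history.max() * 1.5, 2000)
label = int(kde_mode(window, grid) > kde_mode(history, grid))  # 1 = crowding
print("crowding" if label else "no crowding")
```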
Six different weighted graphs were used to capture the dependence structure between ski lifts (nodes): χ² statistics on the labels of the training set, mutual information between labels, the correlation matrix between outputs of over-fitted neural networks, the norm of the difference between label vectors, and two graphs defined on the difference of vectors of historical labels and on the differences of historical averages of skier times. The AUC score and ACC of the structured and unstructured predictors, along with the total computational time, are shown in Table 1. It can be observed that GCRFBCb and GCRFBCnb outperformed the unstructured and the other structured predictors in all cases. Based on the evaluated parameters, it can be concluded that the dependence structure has a significant impact on overall prediction performance, even though, due to the low values of the variance norm, GCRFBCb and GCRFBCnb have equal AUC scores. In summary, the advantages of the structured models compared to the unstructured ones are clear, but in this particular task, due to its equal prediction performance and lower computational and memory complexity, GCRFBCnb is the best choice.

4.2.2 MULTI-LABEL CLASSIFICATION OF MUSIC ACCORDING TO EMOTION

The dataset used for this work consists of 100 songs from 7 different genres. The collection was created from 233 musical albums by choosing three songs from each album. 8 rhythmic and 64 timbre features are extracted. The music is labeled with 6 categories of emotions: amazed-surprised, happy-pleased, relaxing-calm, quiet-still, sad-lonely and angry-fearful (Trohidis et al., 2008). The total number of instances in the dataset was 593. Four different weighted graphs were used: χ² statistics on the labels of the training set, mutual information between labels, the correlation matrix between outputs of over-fitted neural networks, and the norm of the difference between label vectors. The same unstructured predictors as in the ski lift congestion problem were used, along with three structured support vector machine classifiers. The performance of the models is evaluated by 10-fold cross-validation. The AUC score and ACC of the structured and unstructured predictors, along with the total computational time, are shown in Table 2. It can be seen that GCRFBCb achieved the best prediction performance. The ACC of the GCRFBC models is significantly better than the SSVM performance. The AUC score and ACC of GCRFBCb are higher than the best result (AUC = 0.8237) presented in the original paper (Trohidis et al., 2008). As in the previous cases, the computational time of GCRFBCb is significantly longer compared to GCRFBCnb and the SSVM models.

4.2.3 GENE FUNCTION CLASSIFICATION

This dataset is formed by micro-array expression data and phylogenetic profiles for 2417 genes (instances). The number of features is 103, and each gene is associated with a set of 14 groups (Elisseeff & Weston, 2002). The same unstructured predictors, structured predictors and weighted graphs as in the music-emotion classification task were used. The 10-fold cross-validation results of the classification are shown in Table 3. It can be observed that both GCRFBCb and GCRFBCnb achieved significantly better results in comparison with the unstructured predictors. However, the neural network trained on all data together achieved the same ACC score as GCRFBCb. The AUC of GCRFBCb outperformed the random forest classifier by 19%, whereas SSVM-tree has better ACC compared to GCRFBCnb. It also outperformed GCRFBCnb, but, as expected, its computation time was longer.
In addition, the computation times of the CRF models are longer compared to GCRFBCb.

4.2.4 HIGHWAY CONGESTION

The E70-E75 motorway is a major transit motorway in Serbia. With 504 kilometers, it crosses the country from north-west to south, starting at the Batrovci border crossing with the Republic of Croatia and ending at the Preševo border crossing with the Republic of North Macedonia. One of the biggest problems on the E70-E75 motorway is the high congestion that frequently occurs, partly caused by a lack of open toll stations. In order to mitigate the congestion problem, it is necessary to predict its occurrence and open enough toll stations. The data used in this research include information on vehicle entrances and exits for the year 2017. Two different sections were analyzed: Belgrade - Adaševci and Niš - Belgrade. The section Belgrade - Adaševci was analyzed for the period of January 2017, whereas the section Niš - Belgrade was analyzed for the period of April - July 2017. The congestion was labeled using a KDE-based technique similar to the one presented for the ski lift congestion problem. Based on the raw datasets for the sections Niš - Belgrade and Belgrade - Adaševci, with 5,132,918 and 487,767 instances, respectively, a new dataset for the section Niš - Belgrade was generated by observing shifts in time windows of 10 minutes due to the large number of vehicles, whereas in the case of the section Belgrade - Adaševci the shifts were observed in time windows of 20 minutes. The total numbers of instances for the sections Belgrade - Adaševci and Niš - Belgrade are 50,964 and 235,872, whereas the numbers of highway exits (outputs) are 6 and 18, respectively. The extracted features are similar to the ones presented for the ski lift congestion problem. The χ² statistics, mutual information, correlation matrix and difference of vectors of historical labels were used to capture the dependence structure, and the same unstructured predictors as in the ski lift congestion problem were evaluated. The classification results, validated by 10-fold cross-validation, are presented in Table 4. GCRFBCnb achieved the highest AUC and ACC scores in the section Belgrade - Adaševci, whereas GCRFBCb has better prediction performance in the section Niš - Belgrade. Moreover, in the case of the section Niš - Belgrade, GCRFBCb has a worse ACC score than the fully connected CRF, whereas CRF-tree outperformed GCRFBCnb in the section Belgrade - Adaševci.

5 CONCLUSION

In this paper, a new model, called Gaussian Conditional Random Fields for Binary Classification (GCRFBC), is presented. The model is based on a latent GCRF structure, which means that an intractable structured classification problem can become tractable and efficiently solved. Moreover, the improvements previously applied to regression GCRF can be easily extended to GCRFBC. Two different variants of GCRFBC were derived: GCRFBCb and GCRFBCnb. Empirical Bayes (marginalization of latent variables) by local variational methods is used in the optimization procedure of GCRFBCb, whereas the MAP estimate of the latent variables is applied in GCRFBCnb. Based on the presented methodology and the experimental results obtained on synthetic and real-world datasets, it can be concluded that both the GCRFBCb and GCRFBCnb models have better prediction performance compared to the analysed structured and unstructured predictors. Additionally, GCRFBCb has better performance considering the AUC score, ACC and the lower bound of the conditional log likelihood $\mathcal{L}(Y|X,\theta)$ compared to GCRFBCnb, in cases where the norm of the variances of the latent variables is high.
However, in cases where the norm of the variances is close to zero, both models have equal prediction performance; due to the higher memory and computational complexity of GCRFBCb, it is then reasonable to use GCRFBCnb. Additionally, a trade-off between complexity and accuracy can be made in situations where the norm of the variances is high. Further studies should address extending GCRFBC to structured multi-label classification problems, and lowering the computational complexity of GCRFBCb by considering efficient approximations.

A DERIVATION OF LOWER BOUND OF CONDITIONAL LIKELIHOOD

In this section we derive the lower bound of the conditional likelihood. In order to obtain a form of the joint distribution that can be easily integrated, the lower bound for the sigmoid function was used (Jaakkola & Jordan, 2000). The lower bound of the joint distribution $P(y_j,z_j|x_j,\theta)$ can be expressed as:

$$P(y_j,z_j|x_j,\theta) = P(y_j|z_j)P(z_j|x_j,\theta) \geq P(y_j,z_j|x_j,\theta,\xi_j) \tag{22}$$

$$P(y_j,z_j|x_j,\theta,\xi_j) = \prod_{i=1}^{N}\sigma(\xi_{ji})\exp\left(z_{ji}y_{ji} - \frac{z_{ji}+\xi_{ji}}{2} - \lambda(\xi_{ji})(z_{ji}^2-\xi_{ji}^2)\right)\cdot\frac{1}{(2\pi)^{N/2}|\Sigma_j|^{1/2}}\exp\left(-\frac{1}{2}(z_j-\mu_j)^T\Sigma_j^{-1}(z_j-\mu_j)\right) \tag{23}$$

The simplified form of Eq. 23 can be obtained by rearranging terms as follows:

$$P(y_j,z_j|x_j,\theta,\xi_j) = T(\xi_j)\exp\left(z_j^T\Big(y_j-\tfrac{1}{2}I\Big) - z_j^T\Lambda_j z_j - \frac{1}{2}z_j^T\Sigma_j^{-1}z_j + z_j^T\Sigma_j^{-1}\mu_j\right) \tag{24}$$

$$T(\xi_j) = \frac{\exp\left(-\frac{1}{2}\mu_j^T\Sigma_j^{-1}\mu_j\right)}{(2\pi)^{N/2}|\Sigma_j|^{1/2}}\prod_{i=1}^{N}\sigma(\xi_{ji})\exp\left(-\frac{\xi_{ji}}{2} + \lambda(\xi_{ji})\xi_{ji}^2\right) \tag{25}$$

The lower bound of the likelihood $P(y_j|x_j,\theta,\xi_j)$ can be obtained by marginalizing over $z_j$:

$$P(y_j|x_j,\theta,\xi_j) = \int P(y_j,z_j|x_j,\theta,\xi_j)\,dz_j = T(\xi_j)\int\exp\left(z_j^T\Big(y_j-\tfrac{1}{2}I\Big) - z_j^T\Lambda_j z_j - \frac{1}{2}z_j^T\Sigma_j^{-1}z_j + z_j^T\Sigma_j^{-1}\mu_j\right)dz_j = T(\xi_j)\int\exp\left(-\frac{1}{2}z_j^T\big(\Sigma_j^{-1}+2\Lambda_j\big)z_j + z_j^T\big(\Sigma_j^{-1}+2\Lambda_j\big)\big(\Sigma_j^{-1}+2\Lambda_j\big)^{-1}\left(\Big(y_j-\tfrac{1}{2}I\Big)+\Sigma_j^{-1}\mu_j\right)\right)dz_j \tag{26}$$

The lower bound of the likelihood $P(y_j|x_j,\theta,\xi_j)$ can then be transformed into the following form:

$$P(y_j|x_j,\theta,\xi_j) = T(\xi_j)\int\exp\left(-\frac{1}{2}(z_j-m_j)^TS_j^{-1}(z_j-m_j) + \frac{1}{2}m_j^TS_j^{-1}m_j\right)dz_j = T(\xi_j)\exp\left(\frac{1}{2}m_j^TS_j^{-1}m_j\right)\int\exp\left(-\frac{1}{2}(z_j-m_j)^TS_j^{-1}(z_j-m_j)\right)dz_j \tag{27}$$

where $S_j^{-1} = \Sigma_j^{-1}+2\Lambda_j$ and $m_j = S_j\left(\Big(y_j-\tfrac{1}{2}I\Big)+\Sigma_j^{-1}\mu_j\right)$.
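The completing-the-square step from Eq. (26) to Eq. (27) can be verified numerically. A minimal numpy sketch (dimensions and random draws are arbitrary) checking that the two exponents agree for $S_j^{-1} = \Sigma_j^{-1} + 2\Lambda_j$ and $m_j = S_j\big((y_j - \tfrac{1}{2}I) + \Sigma_j^{-1}\mu_j\big)$:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
A = rng.normal(size=(N, N))
Sigma = A @ A.T + N * np.eye(N)          # a valid covariance matrix
Sigma_inv = np.linalg.inv(Sigma)
mu = rng.normal(size=N)
y = rng.integers(0, 2, size=N).astype(float)
xi = rng.uniform(0.5, 2.0, size=N)

def sig(x):
    return 1.0 / (1.0 + np.exp(-x))

Lam = np.diag((sig(xi) - 0.5) / (2.0 * xi))   # Lambda_j, Eq. (19)
S_inv = Sigma_inv + 2.0 * Lam                 # Eq. (18)
S = np.linalg.inv(S_inv)
m = S @ ((y - 0.5) + Sigma_inv @ mu)          # completed-square mean

for _ in range(5):
    z = rng.normal(size=N)
    lhs = z @ (y - 0.5) - z @ Lam @ z - 0.5 * z @ Sigma_inv @ z + z @ Sigma_inv @ mu
    rhs = -0.5 * (z - m) @ S_inv @ (z - m) + 0.5 * m @ S_inv @ m
    assert abs(lhs - rhs) < 1e-9
print("Eq. (26) and Eq. (27) exponents agree.")
```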
1. What is the focus of the paper regarding GRFs?
2. What are the strengths of the proposed approach, particularly in terms of simplicity and ease of understanding?
3. What are the weaknesses of the paper, especially regarding novelty and convincing arguments?
4. Can the reviewer explain their concern about the appropriateness of the conference for the paper?
5. What are some potential ways to improve the paper, such as exploring different loss functions or expanding the scope of the approach?
Review
The authors break the double-blind anonymity with the code link provided. I'll leave how to deal with this to the meta reviewer.

The authors provide a method to modify GRFs to be used for classification. The idea is simple and easy to get through, and the writing is clean. The method boils down to using a latent variable that acts as a "pseudo-regressor" that is passed through a sigmoid for classification. The authors then discuss learning and inference in the proposed model, and propose two different variants that differ on scalability and a bit on performance as well. The idea of using the ξ transformation for the lower bound of the sigmoid was interesting to me -- since I have not seen it before, it's possible it's commonly used in the field, and hopefully the other reviewers can talk more about the novelty here. The empirical results are very promising, which is the main reason I vote for weak acceptance.

I think the paper has value, albeit I would say it's a bit weak on novelty, and I am not 100% convinced about this conference being the right fit for this paper. The authors augment MRFs for classification and evaluate and present the results well. Can the authors intuit why random forests and neural nets don't perform as well? It seems there are many knobs one can tune to get better performance, so I will take the presented results with a grain of salt. Also, it seems one can use other "link" functions with MRFs (similar to link functions in generalized linear models) to support not just the logistic but other possible losses as well. How about multiclass classification using a softmax? I think such generalizations would make this paper a lot stronger.
ICLR
Title No Pairs Left Behind: Improving Metric Learning with Regularized Triplet Objective Abstract We propose a novel formulation of the triplet objective function that improves metric learning without additional sample mining or overhead costs. Our approach aims to explicitly regularize the distance between the positive and negative samples in a triplet with respect to the anchor-negative distance. As an initial validation, we show that our method (called No Pairs Left Behind [NPLB]) improves upon the traditional and current state-of-the-art triplet objective formulations on standard benchmark datasets. To show the effectiveness and potential of NPLB on real-world complex data, we evaluate our approach on a large-scale healthcare dataset (UK Biobank), demonstrating that the embeddings learned by our model significantly outperform all other current representations on the tested downstream tasks. Additionally, we provide a new model-agnostic single-time health risk definition that, when used in tandem with the learned representations, achieves the most accurate prediction of subjects' future health complications. Our results indicate that NPLB is a simple, yet effective framework for improving existing deep metric learning models, showcasing the potential implications of metric learning in more complex applications, especially in the biological and healthcare domains. Our code package as well as tutorial notebooks are available on our public repository: <revealed after the double blind reviews>. 1 INTRODUCTION Metric learning is the task of encoding similarity-based embeddings where similar samples are mapped closer in space and dissimilar ones afar (Xing et al., 2002; Wang et al., 2019; Roth et al., 2020). Deep metric learning (DML) has shown success in many domains, including computer vision (Hermans et al., 2017; Vinyals et al., 2016; Wang et al., 2018b) and natural language processing (Reimers & Gurevych, 2019; Mueller & Thyagarajan, 2016; Benajiba et al., 2019). Many DML models utilize paired samples to learn useful embeddings based on distance comparisons. The most common architectures among these techniques are the Siamese (Bromley et al., 1993) and triplet networks (Hoffer & Ailon, 2015). The main components of these models are: (1) the strategies for constructing training tuples and (2) the objectives that the model must minimize. Though many studies have focused on improving sampling strategies (Wu et al., 2017; Ge, 2018; Shrivastava et al., 2016; Kalantidis et al., 2020; Zhu et al., 2021), modifying the objective function has attracted less attention. Given that learning representations with triplets very often yields better results than pairs using the same network (Hoffer & Ailon, 2015; Balntas et al., 2016), our work focuses on improving triplet-based DML through a simple yet effective modification of the traditional objective. Modifying DML loss functions often requires mining additional samples, identifying new quantities (e.g. identifying class centers iteratively throughout training (He et al., 2018)), or computing quantities with costly overheads (Balntas et al., 2016), which may limit their applicability. In this work, we aim to provide an easy and intuitive modification of the traditional triplet loss that is motivated by metric learning on more complex datasets, and by the notion of density and uniformity of each class.
Our proposed variation of the triplet loss leverages all pairwise distances between the existing pairs in traditional triplets (positive, negative, and anchor) to encourage denser clusters and better separability between classes. This allows for improving already existing triplet-based DML architectures using implementations in standard deep learning (DL) libraries (e.g. TensorFlow), enabling a wider usage of the methods and improvements presented in this work. Many ML algorithms are developed for and tested on datasets such as MNIST (LeCun, 1998) or ImageNet (Deng et al., 2009), which often lack the intricacies and nuances of data in other fields, such as health-related domains (Lee & Yoon, 2017). Unfortunately, this can have direct consequences when we try to understand how ML can help improve care for patients (e.g. diagnosis or prognosis). In this work, we demonstrate that DML algorithms can be effective in learning embeddings from complex healthcare datasets. We provide a novel DML objective function and show that our model's learned embeddings improve downstream tasks, such as classifying subjects and predicting future health risk using a single time point. More specifically, we build upon the DML-learned embeddings to formulate a new mathematical definition of patient health risk using a single time point which, to the best of our knowledge, does not currently exist. To show the effectiveness of our model and health risk definition, we evaluate our methodology on a large-scale complex public dataset, the UK Biobank (UKB) (Bycroft et al., 2018), demonstrating the implications of our work for both the healthcare and the ML communities. In summary, our most important contributions can be described as follows. 1) We present a novel triplet objective function that improves model learning without any additional sample mining or overhead computational costs. 2) We demonstrate the effectiveness of our approach on a large-scale complex public dataset (UK Biobank) and on conventional benchmarking datasets (MNIST, Fashion MNIST (Xiao et al., 2017) and CIFAR10 (Krizhevsky, 2010)). This demonstrates the potential of DML in other domains which traditionally may have been less considered. 3) We provide a novel definition of patient health risk from a single time point, demonstrating the real-world impact of our approach by predicting currently healthy subjects' future risks using only a single lab visit, a challenging but crucial task in healthcare. 2 BACKGROUND AND RELATED WORK Contrastive learning aims to minimize the distance between two samples if they belong to the same class (are similar). As a result, contrastive models require two samples to be inputted before calculating the loss and updating their parameters. This can be thought of as passing two samples to two parallel models with tied weights, hence the name Siamese or Twin networks (Bromley et al., 1993). Triplet networks (Hoffer & Ailon, 2015) build upon this idea to rank positive and negative samples based on an anchor value, thus requiring the model to produce mappings for all three before the optimization step (hence being called triplets). Modification of Triplet Loss: Due to their success and importance, triplet networks have attracted increasing attention in recent years.
Though the majority of proposed improvements focus on the sampling and selection of the triplets, some studies (Balntas et al., 2016; Zhao et al., 2019; Kim & Park, 2021; Nguyen et al., 2022) have proposed modifications of the traditional triplet loss introduced in Hoffer & Ailon (2015). Similar to our work, Multi-level Distance Regularization (MDR) (Kim & Park, 2021) seeks to regularize the DML loss function. MDR regularizes the pairwise distances between embedding vectors into multiple levels based on their similarity. The goal of MDR is to disturb the optimization of the pairwise distances among examples and to discourage positive pairs from getting too close and negative pairs from being too distant. A drawback of regularization methods is the choice of the hyperparameter that balances the regularization term, though adaptive balancing methods could be used (Chen et al., 2018; Heydari et al., 2019). Most related to our work, Balntas et al. (2016) modified the traditional objective by explicitly accounting for the distance between the positive and negative pairs (which the traditional triplet function does not consider), and applied their model to learn local feature descriptors using shallow convolutional neural networks. They introduce the idea of the "in-triplet hard negative", referring to the swap of the anchor and positive sample if the positive sample is closer to the negative sample than the anchor, thus improving on the performance of traditional triplet networks (we refer to this approach as Distance Swap). Though this method uses the distance between the positive and negative samples to choose the anchor, it does not explicitly enforce the model to regularize the distance between the two, which was the main issue with the original formulation. Our work addresses this pitfall by using the notion of local density and uniformity (defined later in §3) to explicitly enforce the regularization of the distance between the positive and negative pairs using the distance between the anchors and the negatives. As a result, our approach ensures better inter-class separability while encouraging denser intra-class embeddings. In addition to MDR and Distance Swap, we benchmark our approach against three related and widely-used metric learning algorithms, namely LiftedStruct (Song et al., 2015), N-Pair Loss (Sohn, 2016), and InfoNCE (Oord et al., 2018a). Due to space constraints, and given the popularity of these methods, we provide an overview of these algorithms in Appendix E. Deep Learned Embeddings for Healthcare: Recent years have seen an increase in the number of DL models for Electronic Health Records (EHR), with several methods aiming to produce rich embeddings to better represent patients (Rajkomar et al., 2018; Choi et al., 2016b; Tran et al., 2015; Nguyen et al., 2017; Choi et al., 2016a; Pham et al., 2017). Though most studies in this area consider temporal components, DeepPatient (Miotto et al., 2016) does not explicitly account for time, making it an appropriate model for comparison with our representation learning approach, given our goal of predicting patients' health risks using a single snapshot. DeepPatient is an unsupervised DL model that seeks to learn general deep representations by employing three stacks of denoising autoencoders that learn hierarchical regularities and dependencies through reconstructing a masked input of EHR features.
We hypothesize that learning patient reconstructions alone (even with masked features) does not suffice to discriminate between patients based on their similarities. We aim to address this by employing a deep metric learning approach that learns similarity-based embeddings. Predicting Patients' Future Health Risks: Assessing patients' health risk using EHR remains a crucial, yet challenging task of epidemiology and public health (Li et al., 2015). An example of such challenges is clinically-silent conditions, where patients fall within "normal" or "borderline" ranges for specific known blood work markers, while being at risk of developing chronic conditions and co-morbidities that will reduce quality of life and cause mortality later on (Li et al., 2015). Therefore, early and accurate assessment of health risk can tremendously improve patient care, especially for those who may appear "healthy" and do not show severe symptoms. Current approaches for assessing future health complications tie the definition of health risks to multiple time points (Hirooka et al., 2021; Chowdhury & Tomal, 2022; Razavian et al., 2016; Kamal et al., 2020; Cohen et al., 2021; Che et al., 2017). Despite the obvious appeal of such approaches, the use of many visits for modeling and defining risk simply ignores a large portion of patients who do not return for subsequent check-ups, especially those with lower incomes and those without adequate access to healthcare (Kullgren et al., 2010; Taani et al., 2020; Nishi et al., 2019). Given the importance of addressing these issues, we propose a mathematical definition (built upon DML) based on a single time point, which can be used to predict patient health risk from a single lab visit. 3 METHODS Main Idea of No Pairs Left Behind (NPLB): The main idea behind our approach is to ensure that, during optimization, the distance between positive samples $p_i$ and negative samples $n_i$ is considered and regularized with respect to the anchors $a_i$ (i.e., explicitly introducing a distance term $d(p_i,n_i)$ which depends on $d(a_i,n_i)$). We visualize this idea in Fig. 1. The mathematical intuition behind our approach can be described by considering in-class local density and uniformity, as introduced in Rojas-Thomas & Santos (2021) as an unsupervised clustering evaluation metric. Given a metric learning model $\phi$, let the local density of a sample $p_i \in c_k$ be defined as $\mathrm{LD}(p_i) = \min\{d(\phi(p_i),\phi(p_j))\}$, $\forall p_j \in c_k$ with $i \neq j$, and let $\mathrm{AD}(c_k)$ be the average local density over all points in class $c_k$. An ideal operator $\phi$ would produce embeddings that are compact and well separated from other classes, i.e., the in-class embeddings are uniform. This notion of uniformity is proportional to the difference between the local and average density of each class:

$$\mathrm{Unif}(c_k) = \begin{cases}\displaystyle\sum_{i=1}^{|c_k|}\frac{|\mathrm{LD}(p_i)-\mathrm{AD}(c_k)|}{\mathrm{AD}(c_k)+\xi} & \text{if } |c_k| > 1\\ 0 & \text{otherwise}\end{cases}\qquad\text{for } 0 < \xi \ll 1.$$

However, computing the density and uniformity of classes is only possible post hoc, once all labels are present, and is not feasible during training if the triplets are mined in a self-supervised manner. To reduce the complexity and allow for general use, we utilize proxies for the mentioned quantities to regularize the triplet objective using the notion of uniformity. We take the distance between positive and negative pairs as inversely proportional to the local density of a class.
Similarly, the distance between anchors and negative pairs is closely related to the average density, given that a triplet model maps positive pairs inside an $m$-ball of the anchor ($m$ being the margin). In this sense, the uniformity of a class is inversely proportional to $|d(\phi(p_i),\phi(n_i)) - d(\phi(a_i),\phi(n_i))|$. NPLB Objective: Let $\phi(\cdot)$ denote an operator and $T$ be the set of triplets of the form $(p_i, a_i, n_i)$ (positive, anchor and negative tensors) sampled from a mini-batch $B$ of size $N$. For ease of notation, we write $\phi(q_i)$ as $\phi_q$. Given a margin $m$ (a hyperparameter), the traditional objective function for a triplet network is shown in Eq. (1):

$$\mathcal{L}_{\mathrm{Triplet}} = \frac{1}{N}\sum_{(p_i,a_i,n_i)\in T}\big[d(\phi_a,\phi_p) - d(\phi_a,\phi_n) + m\big]_+ \tag{1}$$

with $[\cdot]_+ = \max\{\cdot, 0\}$ and $d(\cdot)$ being the Euclidean distance. Minimizing Eq. (1) only ensures that the negative samples fall outside of an $m$-ball around the anchor $a_i$, while bringing the positive sample $p_i$ inside of this ball (illustrated in Fig. 1), satisfying $d(\phi_a,\phi_n) > d(\phi_a,\phi_p) + m$. However, this objective does not explicitly account for the distance between positive and negative samples, which can impede performance, especially when there exists high in-class variability. Motivated by our main idea of having denser and more uniform in-class embeddings, we add a simple regularization term to address the issues described above, as shown in Eq. (2):

$$\mathcal{L}_{\mathrm{NPLB}} = \frac{1}{N}\sum_{(p_i,a_i,n_i)\in T}\big[d(\phi_a,\phi_p) - d(\phi_a,\phi_n) + m\big]_+ + \big[d(\phi_p,\phi_n) - d(\phi_a,\phi_n)\big]^p \tag{2}$$

where $p \in \mathbb{N}$ and NPLB refers to "No Pairs Left Behind." The regularization term in Eq. (2) enforces positive and negative samples to be roughly the same distance away as all other negative pairings, while still minimizing their distance to the anchor values. However, if not careful, this approach could result in the model learning to map $n_i$ such that $d(\phi_a,\phi_p) > \max\{m, d(\phi_p,\phi_n)\}$, which would ignore the triplet term, resulting in a minimization problem with no lower bound.¹ To avert such issues, we restrict $p = 2$ (or generally, $p \equiv 0 \pmod 2$), as in Eq. (3):

$$\mathcal{L}_{\mathrm{NPLB}} = \frac{1}{N}\sum_{(p_i,a_i,n_i)\in T}\big[d(\phi_a,\phi_p) - d(\phi_a,\phi_n) + m\big]_+ + \big[d(\phi_p,\phi_n) - d(\phi_a,\phi_n)\big]^2 \tag{3}$$

Note that this formulation does not require mining of any additional samples nor complex computations, since it just uses the existing samples in order to regularize the embedded space. Moreover, $\mathcal{L}_{\mathrm{NPLB}} = 0 \implies -\big[d(\phi_p,\phi_n)-d(\phi_a,\phi_n)\big]^2 = \big[d(\phi_a,\phi_p)-d(\phi_a,\phi_n)+m\big]_+$, which, considering only the real domain, is possible if and only if $d(\phi_p,\phi_n) = d(\phi_a,\phi_n)$ and $d(\phi_a,\phi_n) \geq d(\phi_a,\phi_p) + m$, explicitly enforcing separation between negative and positive pairs.

¹ The mentioned pitfall can be realized by taking $p = 1$, i.e. $\mathcal{L}(p_i,a_i,n_i) = \frac{1}{N}\sum_{(p_i,a_i,n_i)\in T}\big[d(\phi_a,\phi_p)-d(\phi_a,\phi_n)+m\big]_+ + \big[d(\phi_p,\phi_n)-d(\phi_a,\phi_n)\big]$. In this case, the model can learn to map $n_i$ and $a_i$ such that $d(\phi_a,\phi_n) > C$ where $C = \max\{d(\phi_p,\phi_n), d(\phi_a,\phi_p)+m\}$, resulting in $\mathcal{L} < 0$.

4 VALIDATION OF NPLB ON STANDARD DATASETS

Prior to testing our methodology on healthcare data, we validate our derivations and intuition on common benchmark datasets, namely MNIST, Fashion MNIST and CIFAR10. To assess the improvement gains from the proposed objective, we refrained from using more advanced triplet construction techniques and followed the most common approach of constructing triplets using the labels offline. We utilized the same architecture and training settings for all experiments, with the only difference being the objective functions (see Appendix K for details on each architecture).
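For illustration, Eq. (3) can be sketched in a few lines of PyTorch. This is a minimal reference snippet rather than our optimized implementation; the batch size, embedding dimension, and the mean reduction are placeholder choices:

```python
import torch
import torch.nn.functional as F

def nplb_loss(emb_a, emb_p, emb_n, margin=1.0):
    """Sketch of the NPLB objective, Eq. (3): the standard triplet term plus
    the squared (d(p, n) - d(a, n)) regularizer. Inputs are (batch, dim) embeddings."""
    d_ap = F.pairwise_distance(emb_a, emb_p)   # d(phi_a, phi_p)
    d_an = F.pairwise_distance(emb_a, emb_n)   # d(phi_a, phi_n)
    d_pn = F.pairwise_distance(emb_p, emb_n)   # d(phi_p, phi_n)
    triplet = F.relu(d_ap - d_an + margin)     # [.]_+ hinge term
    regularizer = (d_pn - d_an) ** 2           # p = 2 regularization term
    return (triplet + regularizer).mean()

# usage: embeddings coming from any encoder phi (random placeholders here)
phi_a, phi_p, phi_n = (torch.randn(32, 64, requires_grad=True) for _ in range(3))
loss = nplb_loss(phi_a, phi_p, phi_n, margin=1.0)
loss.backward()
```

Because the regularizer reuses distances that a triplet pipeline already computes, dropping it into an existing training loop adds no extra sample mining, as claimed above.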
After training, we evaluated our approach quantitatively by assessing the classification accuracy of embeddings produced by optimizing the traditional triplet, Distance Swap, and our proposed NPLB objectives. The results for each dataset are presented in Table 1, showing that our approach improves classification. We also assessed the embeddings qualitatively: given the simplicity of MNIST, we designed our model to produce two-dimensional embeddings which we directly visualized. For Fashion MNIST and CIFAR, we generated embeddings in $\mathbb{R}^{64}$ and $\mathbb{R}^{128}$, respectively, and used Uniform Manifold Approximation and Projection (UMAP) (McInnes et al., 2018) to reduce the dimensions for visualization, as shown in Fig. 2. Our results show that networks trained with the NPLB objective produce embeddings that are denser and well separated in space, as desired. 5 IMPROVING PATIENT REPRESENTATION LEARNING In this section, we aim to demonstrate the potential and implications of our approach on a more complex dataset in three steps: First, we show that deep metric learning improves upon current state-of-the-art patient embedding models (§5.1). Next, we provide a comparison between NPLB, Distance Swap and the traditional triplet loss formulations (§5.2). Lastly, we apply our methodology to predict the health risks of currently healthy subjects from a single time point (§5.3). We focus on presenting results for the female subjects due to space limitations. We note that the results on male subjects are very similar to the female population, as presented in Appendix I. 5.1 DEEP METRIC LEARNING FOR BETTER PATIENT EMBEDDINGS Healthcare datasets are considerably different from those in other domains. Given the restrictions on sharing health-related data (as stipulated by laws such as those defined under the Health Insurance Portability and Accountability Act - HIPAA), most DL-based models are developed and tested on proprietary in-house datasets, making comparisons and benchmarking a major hurdle (Evans, 2016). This is in contrast to other areas of ML which have established standard datasets (e.g. ImageNet or GLUE (Wang et al., 2018a)). To show the feasibility of our approach, we present the effectiveness of our methodology on the United Kingdom Biobank (UKB) (Bycroft et al., 2018): a large-scale (∼500K subjects) complex public dataset, showing the potential of UKB as an additional benchmark that can be used for developing and testing future DL models in the healthcare domain. UKB contains deep genetic and phenotypic data from approximately 500,000 individuals aged between 39 and 69 across the United Kingdom, collected over many years. We considered patients' lab tests and their approximated activity levels (e.g. moderate or vigorous activity per week) as predictors (the complete list of features used is shown in Appendix M), and their doctor-confirmed conditions and medication history for determining labels. Specifically, we labeled a patient as "unhealthy" if they have confirmed conditions or take medication for treating a condition, and otherwise labeled them as "apparently-healthy". We provide a step-by-step description of our data processing in Appendix F. A close analysis of the UKB data revealed large in-class variability of test ranges, even among those with no current or prior confirmed conditions (the "apparently-healthy" subjects). Moreover, the overall distributions of key metrics are very similar between the unhealthy and apparently-healthy patients (visualized in Appendix G).
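The visualization procedure used for Fig. 2 (and later Fig. A1) can be sketched with the umap-learn package; the file paths below are placeholders for the trained encoder's outputs:

```python
import numpy as np
import umap
import matplotlib.pyplot as plt

embeddings = np.load("embeddings.npy")   # (n_samples, d) learned embeddings (placeholder path)
labels = np.load("labels.npy")           # class labels for coloring (placeholder path)

coords = umap.UMAP(n_components=2, random_state=0).fit_transform(embeddings)
plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=2, cmap="tab10")
plt.savefig("umap_embeddings.png", dpi=200)
```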
A close analysis of the UKB data revealed large in-class variability of test ranges, even among those with no current or prior confirmed conditions (the "apparently-healthy" subjects). Moreover, the overall distributions of key metrics are very similar between the unhealthy and apparently-healthy patients (visualized in Appendix G). As a result, we hypothesized that there exists a continuum among patients' health states, leading to our idea that a similarity-based learned embedding can represent subjects better than other representations for downstream tasks. This idea, in tandem with our assumption of intricate nonlinear relationships among features, naturally motivated our deep metric learning approach: our goal is to train a model that learns a metric for separating patients in space, based on their similarities and current confirmed conditions (labels). Due to our assumptions, we used the apparently-healthy patients initially as anchor points between the two ends of the continuum (the very unhealthy and the healthy). However, this formulation necessitates identifying a more "reliable" healthy group, often referred to as the bona fide healthy (BFH) group (Cohen et al., 2021)². To find the BFH population, we considered all patients whose key lab tests for common conditions fall within the clinically-normal values. These markers are: Total Cholesterol, HDL Cholesterol, LDL Cholesterol, Triglycerides, Fasting Glucose, HbA1c, and C-Reactive Protein; we refer to this set of metrics as the P0 metrics and provide the traditional "normal" clinical ranges in Appendix H. It is important to note that the BFH population is much smaller than the apparently-healthy group (~6% and ~5% of the female and male populations, respectively).

² Although it is possible to further divide each group (e.g. based on conditions), we chose to keep the patients in three very general groups to show the feasibility of our approach in various health-related domains.

To address this issue, and to keep DML as the main focus, we implemented a simple yet intuitive rejection-based sampling scheme to generate synthetic BFH patients, though more sophisticated methods could be employed in future work. As in any other rejection-based sampling, and given that lab results often follow a Gaussian distribution (Whyte & Kelly, 2018), we assumed that each feature follows a distribution $\mathcal{N}(\mu_x, \sigma_x)$, where $\mu_x$ and $\sigma_x$ denote the empirical mean and standard deviation of feature $x$ over all patients. Since BFH patients are selected if their P0 biosignals fall within the clinically-normal lab ranges, we used the bounds of the clinically-normal range as the accept/reject criteria. Our simple rejection-based sampling scheme is presented in Appendix L.
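For illustration, the BFH selection rule described above can be sketched as follows; the column names are our own placeholders, and the bounds follow Appendix H:

# Hypothetical clinically-normal bounds (see Appendix H); units omitted here.
P0_BOUNDS = {
    "total_cholesterol": (None, 5.18),
    "hdl": (1.0, None),              # the female threshold is 1.3 in Appendix H
    "ldl": (None, 3.3),
    "triglycerides": (None, 1.7),
    "fasting_glucose": (70.0, 100.0),
    "hba1c": (None, 42.0),
    "c_reactive_protein": (None, 10.0),
}

def is_bona_fide_healthy(df):
    """Boolean mask for apparently-healthy subjects whose P0 metrics all
    fall within the clinically-normal ranges (a sketch, not the exact
    UKB field names)."""
    mask = df["label"] == "apparently-healthy"
    for col, (lo, hi) in P0_BOUNDS.items():
        if lo is not None:
            mask &= df[col] >= lo
        if hi is not None:
            mask &= df[col] <= hi
    return mask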
Training Procedure and Model Architecture: Before training, we split the data 70%:30% for training and testing. In the training partition, we augment the bona fide population 3-fold and then generate 100K triplets of the form $(a_i, p_i, n_i)$ randomly in an offline manner. We chose to generate only 100K triplets in order to reduce training time and to demonstrate the capabilities of our approach for smaller datasets. Note that in an unsupervised setting, these triplets would need to be generated online via negative sample mining, but this is out of scope for this work given that we have labels a priori. Our model consists of three hidden layers with two probabilistic dropout layers in between and Parametric Rectified Linear Unit (PReLU) (He et al., 2015) nonlinear activations. We present a visual representation of our architecture and its dimensions in Fig. A5. We optimize the weights to minimize our proposed NPLB objective, Eq. (3), using Adam (Kingma & Ba, 2014) for 1000 epochs with lr = 0.001, employ an exponential learning rate decay ($\gamma = 0.95$) to decrease the learning rate after every 50 epochs, and set the triplet margin to $\alpha = 1$. For simplicity, we will refer to our model as SPHR (Similarity-based Patient Risk modeling). In order to test the true capability of our model, all evaluations are performed on the non-augmented data.

Results: Similar to §4, we evaluated our deep-learned embeddings on their improvements for binary (unhealthy or apparently-healthy) and multi-class (unhealthy, apparently-healthy or bona fide healthy) classification tasks. The idea here is that if SPHR has learned to separate patients based on their conditions and similarities, then training classifiers on SPHR-produced embeddings should show improvements compared to raw data. We trained four classifiers (k-nearest neighbors (KNN), Linear Discriminant Analysis (LDA), a neural network (NN) for EHR (Chen et al., 2020a), and XGBoost (Chen & Guestrin, 2016)) on raw data (not transformed) and on other common transformations. These transformations include a linear transformation (Principal Component Analysis [PCA]), a non-linear transformation (Diffusion Maps (Coifman & Lafon, 2006) [DiffMap]), and the current state-of-the-art nonlinear transformation, DeepPatient. DeepPatient, PCA, DiffMap, and SPHR are mappings $\mathbb{R}^n \to \mathbb{R}^d$, where $n, d \in \mathbb{N}$ denote the initial number of features and the (reduced) embedding dimension, respectively, with $d = 32$ (in order to have the same dimensionality as SPHR), though various choices of $d$ yielded similar results. We present these results in Table 2, comparing the classification weighted F1 scores for models trained on raw EHR, linear, and nonlinear transformations. We also evaluate the separability qualitatively using UMAP, as shown in Fig. A1. In all tested cases, our model significantly outperforms all other transformations, demonstrating the effectiveness of DML in better representing patients from EHR.

5.2 NPLB SIGNIFICANTLY IMPROVES LEARNING ON COMPLEX DATA

One of our main motivations for modifying the triplet objective was to improve model performance on more complex datasets with larger in-class variability. To evaluate the improvements provided by our NPLB objective, we perform the same analysis as in §4, but this time on the UKB data. Table 3 demonstrates the significant improvement made by our simple modification to the traditional triplet loss, further validating our approach and formulation experimentally.
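As a minimal sketch of the training configuration described in §5.1 (the SPHR class and NPLBLoss are as in Appendices K.1 and D; the triplet data loader is an assumption, and we express the stated "decay by γ = 0.95 every 50 epochs" schedule with PyTorch's StepLR):

import torch

model = SPHR(input_dim=64, output_dim=32)   # architecture from Appendix K.1
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# Multiply the learning rate by gamma = 0.95 after every 50 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.95)
criterion = NPLBLoss(torch.nn.TripletMarginLoss(margin=1.0))  # see Listing 1

for epoch in range(1000):
    for positive, anchor, negative in triplet_loader:  # assumed DataLoader of triplets
        optimizer.zero_grad()
        phi_p, phi_a, phi_n = model(positive, anchor, negative)
        loss = criterion(phi_a, phi_p, phi_n)
        loss.backward()
        optimizer.step()
    scheduler.step()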
5.3 PREDICTING HEALTH RISKS FROM A SINGLE LAB VISIT

Definition of Single-Time Health Risk: Predicting patients' future health risks is a challenging task, especially when using only a single lab visit. As described in §2, all current models use multiple assessments for predicting the health risks of a patient; however, these approaches ignore a large portion of the population who do not return for additional check-ups. Motivated by the definition of risk in other fields (e.g. the risk of re-identification of anonymized data (Ito & Kikuchi, 2018)), we provide a simple and intuitive distance-based definition of health risk that addresses the mentioned issues and is well suited for DML embeddings. Given the simplicity of our definition and due to space constraints, we describe the definition below and outline the mathematical framework in Appendix B.

We define the health distance as the Euclidean distance between a subject and the reference bona fide healthy (BFH) subject. Many studies have shown large discrepancies in lab metrics among different age groups and genders (Cohen et al., 2021). To account for these known differences, we identify a reference vector, which is the median BFH subject of each age group per gender. Moreover, for simplicity and interpretability, we define health risk as discrete groups using the known BFH population: for each stratification $g$ (age and sex), we identify the two BFH subjects who are at the 2.5th and 97.5th percentiles (giving us the inner 95% of the distribution), and calculate their distances to the corresponding reference vector. This gives us a distance interval $[t^g_{2.5}, t^g_{97.5}]$. In a group $g$, any new patient whose distance to the reference vector falls inside the corresponding $[t^g_{2.5}, t^g_{97.5}]$ is considered "Normal". Similarly, we identify the $[t^g_{1}, t^g_{99}]$ intervals (corresponding to the inner 98% of the BFH group); any new patient whose health distance is within $[t^g_{1}, t^g_{99}]$ but not in $[t^g_{2.5}, t^g_{97.5}]$ is considered to be in the "Lower Risk" (LR) group. Lastly, any patient with a health distance outside of these intervals is considered to have "Higher Risk" (HR).

The UKB data is a good candidate for predicting potential health risks, given that it includes subsequent follow-ups in which a subset of patients is invited for a repeat assessment. The first follow-up was done between 2012 and 2013 and included approximately 20,000 individuals (Littlejohns et al., 2019) (a 25× reduction, with many measurements missing). Based on our goal of predicting risk from a single visit, we only include the patients' first visits for modeling, and use the 2012-2013 follow-ups for evaluating the predictions.

Results: We utilized the single-time health risk definition to predict a patient's future potential for health complications. To demonstrate the versatility of our approach, we predicted general health risks that were used for all available health conditions (namely Cancer, Diabetes and Other Serious Conditions); however, we hypothesize that constraining health risks based on specific conditions would improve risk predictions. We considered five methods for assigning a risk group to each patient: (i) Euclidean distance on raw data (preprocessed but not transformed); (ii) Mahalanobis distance on pre-processed data; (iii) Euclidean distance on the key metrics (P0) for the available conditions (described in §5.1), thereby hand-crafting features and reducing dimensionality in order to achieve an upper-bound performance for most traditional methods (though this will not be possible for all diseases). The last two methods we consider are deep representation learning methods: (iv) DeepPatient and (v) SPHR embeddings (our proposed model). We use the Euclidean metric for calculating the distance between the deep-learned representations of (iv) and (v). For all approaches, we assigned every patient to one of the three risk groups using biosignals from a single visit, and calculated the percentage of patients who developed a condition by the immediate next visit. Intuitively, patients who fall under the "Normal" group should have fewer confirmed cases compared to subjects in the "Lower Risk" or "Higher Risk" groups.
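The interval construction and group assignment can be sketched in a few lines of numpy; note that, for brevity, this sketch takes percentiles of the distances directly rather than first locating the percentile subjects as described above, and all names are ours:

import numpy as np

def risk_group(d_new, d_bfh):
    """Assign 'Normal' / 'Lower Risk' / 'Higher Risk' from health distances.

    d_bfh: distances of the BFH subjects in one age/sex group to that
    group's reference vector; d_new: a new patient's distance.
    """
    t2_5, t97_5 = np.percentile(d_bfh, [2.5, 97.5])  # inner 95%
    t1, t99 = np.percentile(d_bfh, [1, 99])          # inner 98%
    if t2_5 <= d_new <= t97_5:
        return "Normal"
    if t1 <= d_new <= t99:
        return "Lower Risk"
    return "Higher Risk"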
Table 4 shows the results for the top three methods, with our approach consistently matching the intuitive criteria: among the five methods, SPHR-predicted patients in the Normal risk group have the fewest instances of developing future conditions, while those predicted as Higher Risk have the highest instances (Table 4).

6 CONCLUSION AND DISCUSSION

We present a simple and intuitive variation of the traditional triplet loss (NPLB) which regularizes the distance between positive and negative samples based on the distance between anchor and negative pairs. To show the general applicability of our methods, and as an initial validation step, we tested our model on three standard benchmarking datasets (MNIST, Fashion MNIST and CIFAR-10) and found that our NPLB-trained model produced better embeddings. To demonstrate the real-world impact of DML frameworks such as our proposed SPHR, we applied our methodology to the UKB to classify patients and to predict future health risk for currently healthy patients, using only a single time point. Motivated by risk prediction in other domains, we provide a distance-based definition of health risk. Utilizing this definition, we partitioned patients into three health risk groups (Normal, Lower Risk and Higher Risk). Among all methods, SPHR-predicted Higher Risk healthy patients had the highest percentage of actually developing conditions by the next visit, while SPHR-predicted Normal patients had the lowest instances, which is desired. Although the main point of our work focused on modifying the objective function, a limitation of our work is the simple triplet sampling that we employed, particularly when applied to healthcare. We anticipate additional improvement gains from employing online triplet sampling or from extending our work to be self-supervised (Chen et al., 2020b; Oord et al., 2018b; Wang et al., 2021).

The implications of our work are threefold: (1) Our proposed objective has the potential to improve existing triplet-based models without requiring additional sample mining or computationally intensive operations. We anticipate that combining our work with existing triplet sampling schemes can further improve model learning and results. (2) Models for predicting patients' health risks are nascent and often require time-series data. Our experiments demonstrated the potential improvements gained by developing DML-based models for learning patient embeddings, which in turn can improve patient care. Our results show that more general representation learning models are valuable for pre-processing EHR data and producing deep-learned embeddings that can be used (or fine-tuned) for more specific downstream analyses. We believe additional analysis of the learned embedding space can prove useful for various tasks. For example, we show that there exists a relationship between distances in the embedded space and the time to develop a condition, which we present in Appendix C. The rapid growth of healthcare data, such as EHR, necessitates the use of large-scale algorithms that can analyze such data in a scalable manner (Evans, 2016). Currently, most applications of ML in healthcare are formulated for small-scale studies with proprietary data, or use the publicly-available MIMIC dataset (Miotto et al., 2016; Johnson et al., 2016), which is not as large-scale and complex as the UKB. As in our work, we believe that future DL models can benefit from using the UKB for development and benchmarking.
(3) Evaluating health risk based on a single lab visit can enable clinicians to flag high-risk patients early on, potentially reducing the number (and the scope) of costly tests and significantly improving care for the most vulnerable individuals in a population.

REPRODUCIBILITY

Our code package and tutorial notebooks are all publicly available on the authors' GitHub at: <revealed after the double blind reviews>, and we will actively monitor the repository for any issues or improvements suggested by users. Moreover, we have designed our Appendix to be a comprehensive guide for reproducing our results and experiments as well. Our Appendix includes all used features from the UK Biobank (for male and female patients), a complete list of model parameters (for classification models), a detailed definition of single-time health risk, as well as pseudocode and descriptions of the architectures we developed for our experiments.

ACKNOWLEDGMENTS

Will be added after double blind reviews.

Appendix

APPENDIX A RUNTIME AND SCALING ANALYSIS OF SPHR

To measure the scalability of SPHR, we generated a random dataset consisting of 100,000 samples with 64 linearly-independent (also known as "informative") features and 10 classes, resulting in a matrix $X \in \mathbb{R}^{100000 \times 64}$. From this dataset, we then randomly generated varying numbers of triplets following the strategy described in the main manuscript and in Hoffer & Ailon (2015), representing a computationally intensive case (as opposed to more intricate and faster mining schemes). The average of five mining runs is presented in the "Avg. Mining Time" column of Table A1. Next, we trained SPHR on the varying numbers of triplets five times using (1) a Google Compute Engine instance with 48 logical cores (CPU) and (2) a Google virtual machine equipped with one NVIDIA V100 GPU (referred to as GPU). The average training times for CPUs and GPUs are shown in Table A1.

Table A1: Average training time of the proposed deep learning model (SPHR). All experiments were done under the same computational settings. The times shown below are the average of 5 runs with identical settings.

Number of Triplets | Avg. CPU Training Time | Avg. GPU Training Time | Avg. Mining Time
1,000 | 8.28 Mins | 3.95 Mins | <0.01 Mins
5,000 | 15.07 Mins | 5.96 Mins | 0.01 Mins
10,000 | 31.02 Mins | 7.60 Mins | 0.12 Mins
50,000 | 76.24 Mins | 16.79 Mins | 0.65 Mins
100,000 | 133.26 Mins | 27.15 Mins | 1.27 Mins
500,000 | 512.75 Mins | 94.79 Mins | 5.58 Mins

APPENDIX B MATHEMATICAL DEFINITION OF SINGLE-TIME HEALTH RISK

In this section, we aim to provide a mathematical definition of health risk that can measure the similarity between a new cohort and an existing bona fide healthy population without requiring temporal data. This definition is inspired by the formulation of risk in other domains, such as defining the risk of re-identification of anonymized data (Ito & Kikuchi, 2018). We first define the notion of a "health distance", and use it to formulate threshold intervals which enable us to define health risk as a mapping between continuous values and discrete risk groups.

Definition Appendix B.1. Given a space $X$, a bona fide healthy population distribution $B$ and a new patient $p$ (all in $X$), the health score $s_q$ of $p$ is:

$$s_q(p) = d(P_q(B), p)$$

where $P_q(B)$ denotes the $q$th percentile of $B$, and $d(\cdot)$ refers to a metric defined on $X$ (potentially a pseudometric).

Definition Appendix B.1 provides a measure of distance between a new patient and an existing reference population, which can be used to define similarity.
For example, if $d(\cdot)$ is the Euclidean distance, then the similarity between patients $x$ and $y$ is $\mathrm{sim}(x, y) = \frac{1}{1 + d(x, y)}$. Next, using this notion of distance (and similarity), we define risk thresholds that allow for the grouping of patients.

Definition Appendix B.2. A threshold interval $I_q = [t^l_q, t^u_q]$ is defined by the distances between the vectors delimiting the inner $q$ percent of a distribution $H$ and the median value. Let $n = (100 - q)/2$; then we have $I_q$ as:

$$I_q = [t^l_q, t^u_q] = [\,d(P_n(H), P_{50}(H)),\; d(P_{q+n}(H), P_{50}(H))\,].$$

Additionally, for the sake of interpretability and convenience, we can define health risk groups using known $I_q$ values, which we define below.

Definition Appendix B.3. Let $M : \mathbb{R}^d \to V \cup \{\eta\}$, where $d$ denotes the number of features defining a patient, with $\eta$ being a discrete group and $V$ denoting pre-defined risk groups based on $k \in \mathbb{N}$ many intervals (i.e. $|V| = k$, one group per interval). Using the same notions of health distance and threshold interval $I_q$ as before, we define a patient $p$'s health risk group as:

$$M(p) = \begin{cases} V_q & \text{if } s_q(p) \in I_q \\ \eta & \text{otherwise.} \end{cases}$$

We utilize our deep metric learning model and the definitions above in tandem to predict health risks. That is, we first produce embeddings for all patients using our learned nonlinear operator $G$, and then use the distance between the bona fide healthy (BFH) population and new patients to assign them a risk group. Mathematically, given the set of BFH populations for all groups, i.e. $B_{\mathrm{All}} = \{B_{[36,45]}, B_{[46,50]}, B_{[51,55]}, B_{[56,60]}, B_{[61,65]}, B_{[66,75]}\}$, we take the reference value per age group to be $\tilde{r}_{\mathrm{age}} = G(P_{50}(B_{\mathrm{age}}))$, where $B_{\mathrm{age}}$ denotes the BFH population for the age group $\mathrm{age}$. Then, using Definition Appendix B.2, we define:

$$N_{\mathrm{age}} = I^{\mathrm{age}}_{95} = [\,d(P_{2.5}(B_{\mathrm{age}}), P_{50}(B_{\mathrm{age}})),\; d(P_{97.5}(B_{\mathrm{age}}), P_{50}(B_{\mathrm{age}}))\,] \quad \text{(A.4a)}$$
$$LR_{\mathrm{age}} = I^{\mathrm{age}}_{98} = [\,d(P_{1}(B_{\mathrm{age}}), P_{50}(B_{\mathrm{age}})),\; d(P_{99}(B_{\mathrm{age}}), P_{50}(B_{\mathrm{age}}))\,]. \quad \text{(A.4b)}$$

Note that $N_{\mathrm{age}} \subset LR_{\mathrm{age}}$. Lastly, using the intervals defined in Eq. (A.4), we define the mapping $M$ as shown in Eq. (A.5):

$$M_{\mathrm{age}}(p) = \begin{cases} \text{Normal} & \text{if } s(G(p)) = d(G(p), \tilde{r}_{\mathrm{age}}) \in N_{\mathrm{age}} \\ \text{Lower Risk} & \text{if } s(G(p)) = d(G(p), \tilde{r}_{\mathrm{age}}) \in LR_{\mathrm{age}} \setminus N_{\mathrm{age}} \\ \text{Higher Risk} & \text{otherwise.} \end{cases} \quad \text{(A.5)}$$

Figure A1: Qualitative assessment of our approach on UKB female patients. The NPLB-trained network better separates subjects, resulting in significant improvements in the classification of patients, as shown in Table 3. Note the continuum of patients for both metric learning techniques, especially among the apparently-healthy patients (in yellow).

APPENDIX C PREDICTING PATIENT'S HEALTH RISK IN TIME

Given the performance of our DML model in classifying subjects and in health risk assessment, we hypothesized that we can retrieve a relationship between spatial distance (in the embedded space) and a patient's time to develop a condition using only a single lab visit. Let us assume that subjects start from a "healthy" point and move along a trajectory (among many trajectories) to ultimately become "unhealthy" (similar to the principle of entropy). In this setting, we hypothesized that our model maps patients in space based on their potential for moving along the trajectory of becoming unhealthy. To test our hypothesis, we designed an experiment that uses our distance-based definition of health risk and the NPLB embeddings to further stratify patients based on the immediacy of their health risk (time). More specifically, we investigated the correlation between the spatial locations of the currently-healthy patients in the embedded space and the time at which they develop a condition.
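The statistical test at the core of this appendix amounts to a Pearson correlation between health distances and time-to-diagnosis; a minimal sketch (scipy assumed; the input arrays are hypothetical placeholders, not UKB values):

import numpy as np
from scipy.stats import pearsonr

# Hypothetical inputs aligned by patient: health distances d(G(p), r_age)
# and elapsed time (in months) until a condition was confirmed.
health_scores = np.array([1.2, 0.8, 2.5, 3.1, 1.9])
time_to_diagnosis = np.array([30.0, 36.0, 14.0, 9.0, 22.0])

r, p = pearsonr(health_scores, time_to_diagnosis)
print(f"Pearson r = {r:.2f}, p = {p:.4g}")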
Similar to the health risk prediction experiment in the main manuscript, we computed the distance between each patient's embedding and the corresponding reference value in the bona fide healthy population (refer to the main manuscript for details on this procedure). We then extracted all healthy patients at the time of the first visit who returned in 2012-2013 or in 2014 (and after) for reassessment or imaging visits. It is important to again note that there is a significant drop in the number of returning patients for subsequent visits. Among the retrieved patients, we calculated the number of individuals who developed Cancer, Diabetes or Other Serious Conditions. We found strong negative correlations between the calculated health scores (distances) and the time of diagnosis for Other Serious Conditions (r = −0.72, p = 0.00042) and Diabetes (r = −0.64, p = 0.0071), with Cancer having the weakest correlation (r = −0.20, p = 0.025). Note that the lab tests used as predictors are associated with diagnosing metabolic health conditions and are less associated with diagnosing cancer, which could explain the low correlation between health score and the time of developing cancer. These results indicate that the metric learned by our model accounts for the immediacy of health risk, mapping patients who are at a higher risk of developing health conditions farther from those who are at a lower risk (hence the negative correlation).

APPENDIX D: NPLB CONDITION IN MORE DETAIL

In this section, we aim to take a closer look at the minimizer of our proposed objective. Using the same notation as in the main manuscript, we define the following variables for convenience:

$$\delta_+ \triangleq d(\phi_a, \phi_p), \quad \delta_- \triangleq d(\phi_a, \phi_n), \quad \rho \triangleq d(\phi_p, \phi_n).$$

With this notation, we can rewrite our proposed No Pairs Left Behind objective as:

$$\mathcal{L}_{\text{NPLB}} = \frac{1}{N}\sum_{(p_i,a_i,n_i)\in T} [\delta_+ - \delta_- + \alpha]_+ + (\rho - \delta_-)^2. \quad \text{(A.7)}$$

Note that since $[\delta_+ - \delta_- + \alpha]_+ \geq 0$ and $(\rho - \delta_-)^2 \geq 0$, $\mathcal{L}_{\text{NPLB}} = 0$ if and only if each term of the summation is identically zero. This yields the following relation:

$$-(\rho - \delta_-)^2 = [\delta_+ - \delta_- + \alpha]_+ \quad \text{(A.8)}$$

which, considering the real solutions, is only valid if $\rho = \delta_-$ and $\delta_- \geq \delta_+ + \alpha$, and therefore $\rho \geq \delta_+ + \alpha$. As a result, the regularization term enforces the distance between the positive and the negative samples to be at least $\delta_+ + \alpha$, leading to denser clusters that are better separated from other classes in space.

NPLB can be very easily implemented using existing implementations in standard libraries. As an example, we provide Pytorch-like pseudo code showing the implementation of our approach:

from typing import Callable, Optional
import torch

class NPLBLoss(torch.nn.Module):

    def __init__(self,
                 triplet_criterion: torch.nn.TripletMarginLoss,
                 metric: Optional[Callable[
                     [torch.Tensor, torch.Tensor],
                     torch.Tensor]] = torch.nn.functional.pairwise_distance):
        """Initializes the instance with backbone triplet and distance metric."""
        super().__init__()
        self.triplet = triplet_criterion
        self.metric = metric

    def forward(self, anchor: torch.Tensor, positive: torch.Tensor,
                negative: torch.Tensor) -> torch.Tensor:
        """Forward method of NPLB loss."""
        # Traditional triplet as the first component of the loss function.
        triplet_loss = self.triplet(anchor, positive, negative)
        # The pairwise distances entering the regularization term of Eq. (3).
        positive_to_negative = self.metric(positive, negative, keepdim=True)
        anchor_to_negative = self.metric(anchor, negative, keepdim=True)
        # Here we use 'mean' as the reduction, but it can be any kind
        # that the DL library supports.
        return triplet_loss + torch.mean(
            torch.pow((positive_to_negative - anchor_to_negative), 2))

Listing 1: Pytorch implementation of NPLB.

APPENDIX E LIFTEDSTRUCT, N-PAIRS LOSS, AND INFONCE

In this section, we provide a brief description of three popular deep metric learning models that are related to our work. We also describe the implementations used for these models, and present a complete comparison of all methods on all datasets in this work. In LiftedStruct (Song et al., 2015), the authors propose to take advantage of the full batch for comparing pairs, as opposed to traditional approaches where positive and negative pairs are pre-defined for an anchor. The authors describe their approach as "lifting" the vector of pairwise distances for each batch to the matrix of pairwise distances. N-Pair Loss (Sohn, 2016) is a generalization of the traditional triplet loss which aims to address the "slow" convergence of traditional triplet models by considering N − 1 negative examples instead of the one negative pair considered in the traditional approach. InfoNCE (Oord et al., 2018a) is a generalization of N-Pair Loss that is also known as the normalized temperature-scaled cross-entropy loss (NT-Xent). This loss aims to maximize the agreement between positive samples. Both the N-Pair and InfoNCE losses relate to our work due to their formulations of the metric learning objective, which are closely related to the triplet loss. To compare our approach against these algorithms, we leveraged the widely used Pytorch Metric Learning (PML) package (Musgrave et al., 2020). The complete results of our comparisons on all tested datasets are shown below in Table A2.

Table A2: Comparison of state-of-the-art (SOTA) triplet losses with our proposed objective function (complete version of Tables 1 and 3 of the main manuscript). The classifications were done on the embeddings using XGBoost for five different train-test splits, with the average weighted F1 score reported below. We note that the improved performance of the NPLB-trained model was consistent across different classifiers. The UKB results are for the multi-class classification.

Method | MNIST | FashionMNIST | CIFAR10 | UKB (Females) | UKB (Males)
Trad. Triplet Loss | 0.9859 ± 0.0009 | 0.9394 ± 0.001 | 0.8036 ± 0.028 | 0.5874 ± 0.001 | 0.6861 ± 0.003
N-Pair | 0.9863 ± 0.0003 | 0.9586 ± 0.003 | 0.7936 ± 0.034 | 0.6064 ± 0.004 | 0.6961 ± 0.002
LiftedStruct | 0.9853 ± 0.0007 | 0.9495 ± 0.002 | 0.7946 ± 0.041 | 0.5994 ± 0.003 | 0.6989 ± 0.004
MDR | 0.9886 ± 0.0003 | 0.9557 ± 0.003 | 0.8152 ± 0.027 | 0.6047 ± 0.005 | 0.6964 ± 0.002
InfoNCE | 0.9858 ± 0.0002 | 0.9581 ± 0.004 | 0.8039 ± 0.026 | 0.6103 ± 0.002 | 0.6816 ± 0.003
Distance Swap | 0.9891 ± 0.0003 | 0.9536 ± 0.001 | 0.8285 ± 0.022 | 0.5416 ± 0.004 | 0.6628 ± 0.002
NPLB (Ours) | 0.9954 ± 0.0003 | 0.9664 ± 0.001 | 0.8475 ± 0.025 | 0.6642 ± 0.002 | 0.7845 ± 0.003

Figure A2: Visualization of the data processing scheme described in Appendix F.

APPENDIX F UK BIOBANK DATA PROCESSING

Given the richness and complexity of the UKB and the scope of this work, we subset the data to include patients' age and gender (demographics), numerous lab metrics (objective features), Metabolic Equivalent Task (MET) scores for vigorous/moderate activity and self-reported hours of sleep (lifestyle) (complete list of features in Appendix M). Additionally, we leverage doctor-confirmed conditions as well as current medication to assess subjects' health (assigning labels; not used as predictors). After selecting these features, we use the following scheme to partition the subjects (illustrated in Fig. A2):
1. Ensure all features are at least 75% complete (i.e. at least 75% of patients have a non-null value for that feature).
2. Exclude subjects with any null values.
3. Split the resulting data according to biological sex (male or female), and perform quantile normalization (as in Cohen et al. (2021)).
4. For each sex, partition patients into an "unhealthy" population (those who have at least one doctor-confirmed health condition or take medication for treating a serious condition) and an "apparently healthy" population (those who do not have any serious health conditions and do not take medications for treating such illnesses). This data is used for training our neural network.
5. Split patients into six different age groups: each age group is constructed so that the number of patients in each group is on the same order, while the bias in the data is preserved (age groups are shown in Fig. A2). These age groups are used to determine age-specific references at the time of risk prediction.

APPENDIX G: SIMILARITY OF DISTRIBUTIONS FOR KEY METRICS AMONG PATIENTS

Although lab metric ranges seem very different at first glance, a look at the age-stratified test ranges shows similarity between the apparently-healthy and unhealthy patients. Additionally, if we further stratify the data based on lifestyle, the similarities between the two health groups become even more evident. The additional filtering is as follows: we identify the median sleep hours per group, as well as "active" and "less active" individuals, where we define active as someone who is moderately active for 150 minutes or vigorously active for 75 minutes per week; a code sketch of this filtering follows the Figure A3 caption below. We use this additional filtering to further stratify patients in each age group. Below, in Figs. A3 and A4, we show examples of these similarities for two age groups chosen at random. These results motivated our approach of identifying the bona fide healthy population to be used as reference points.

Figure A3: Distribution similarity of key lab metrics between apparently-healthy and unhealthy female patients. We present the violin plots for Total Cholesterol (left) and LDL Cholesterol (right) for patients between the ages of 36-45 (chosen at random). This figure aims to illustrate the similarity between these distributions based on lifestyle and age. That is, by stratifying the patients based on their sleep and activity, we can see that health status alone cannot separate the patients well, given the similarity in the signals.

APPENDIX H NORMAL RANGES FOR KEY METRICS

Below we provide a list of the current "normal" lab ranges for the key metrics that determined the bona fide healthy population:

Key Biomarker | Gender Specific? | Range for Males | Range for Females | Reference
Total Cholesterol | No | ≤ 5.18 mmol/L | ≤ 5.18 mmol/L | Link to Reference 1, Link to Reference 2
HDL | Yes | ≥ 1 mmol/L | ≥ 1.3 mmol/L | Link to Reference 1, Link to Reference 2
LDL | No | ≤ 3.3 mmol/L | ≤ 3.3 mmol/L | Link to Reference 1, Link to Reference 2
Triglycerides | No | ≤ 1.7 mmol/L | ≤ 1.7 mmol/L | Link to Reference 1, Link to Reference 2
Fasting Glucose | No | ∈ [70, 100] mg/dL | ∈ [70, 100] mg/dL | Link to Reference 3
HbA1c | No | < 42 mmol/mol | < 42 mmol/mol | Link to Reference 4
C-Reactive Protein | No | < 10 mg/L | < 10 mg/L | Link to Reference 5

APPENDIX I: PERFORMANCE OF SPHR ON MALE SUBJECTS

In this section, we present the results of the experiments in the main manuscript (which were done for female patients) for the male patients. The classification results are presented in Tables A3, A4, A5, and A6, and the health risk predictions are shown in Table A7.
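As referenced in Appendix G, a minimal pandas sketch of the lifestyle stratification (column names such as moderate_minutes are illustrative placeholders, not UKB field names):

def add_lifestyle_strata(df):
    """Flag 'active' subjects and above/below-median sleepers per age group.

    'Active' follows the rule above: 150 minutes of moderate or 75
    minutes of vigorous activity per week. Column names are hypothetical.
    """
    df = df.copy()
    df["active"] = (df["moderate_minutes"] >= 150) | (df["vigorous_minutes"] >= 75)
    median_sleep = df.groupby("age_group")["sleep_hours"].transform("median")
    df["above_median_sleep"] = df["sleep_hours"] >= median_sleep
    return df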
Figure A4: Distribution similarity of key lab metrics between apparently-healthy and unhealthy female patients. We present the violin plots for HDL Cholesterol (left) and Triglycerides (right) for patients between the ages of 55-60 (age group chosen at random). This figure aims to illustrate the similarity between these distributions based on lifestyle and age. That is, by stratifying the patients based on their sleep and activity, we can see that health status alone cannot separate the patients well, given the similarity in the signals.

Table A3: Comparison of binary classification performance (weighted F1 score) with various representations on the male patients. In this case, we consider the bona fide healthy patients as healthy patients and train each model to predict binary labels. We keep the same random seeds across different classifiers, and for the supervised methods, we randomly split the data into train and test (80-20) five times, and calculate the mean and standard deviation of the accuracies. Our model significantly improves the classification for all tested classifiers, demonstrating better separability in space compared to raw data and the state-of-the-art method (DeepPatient).

Model | Not-Transformed | ICA | PCA | DeepPatient | SPHR (Ours)
KNNs | 0.6200 ± 0.004 | 0.6167 ± 0.003 | 0.6077 ± 0.003 | 0.6224 ± 0.001 | 0.8163 ± 0.002
LDA | 0.6275 ± 0.005 | 0.6227 ± 0.004 | 0.6227 ± 0.003 | 0.6385 ± 0.002 | 0.8141 ± 0.002
NN for EHR | 0.5926 ± 0.014 | 0.6301 ± 0.018 | 0.6105 ± 0.021 | 0.6148 ± 0.032 | 0.8092 ± 0.011
XGBoost | 0.5975 ± 0.004 | 0.5804 ± 0.004 | 0.6157 ± 0.004 | 0.6101 ± 0.004 | 0.8160 ± 0.003

APPENDIX J EFFECTS OF AUGMENTATION ON DML

In order to evaluate the effect of augmenting the bona fide population and to determine the appropriate fold increase, we trained SPHR with different levels of augmentation and evaluated the effect of each fold increase through multi-label classification performance. More specifically, we created new augmented datasets with no augmentation, 1×, 3×, 5× and 10× increases, generated the same number of triplets (as described previously), and trained SPHR. We evaluated the multi-label classification using the same approach and classifiers as before (described in the main manuscript) and present the results of the XGBoost classification in Table A8. Based on our findings and considerations of computational efficiency, we chose 3× augmentation as the appropriate fold increase.
Table A4: Comparison of binary classification performance (micro F1 score) with various representations on the male patients. In this case, we consider the bona fide healthy patients as healthy patients and train each model to predict binary labels. We keep the same random seeds across different classifiers, and for the supervised methods, we randomly split the data into train and test (80-20) five times, and calculate the mean and standard deviation of the accuracies. Our model significantly improves the classification for all tested classifiers, demonstrating better separability in space compared to raw data and the state-of-the-art method (DeepPatient).

Model | Not-Transformed | ICA | PCA | DeepPatient | SPHR (Ours)
KNNs | 0.6490 ± 0.004 | 0.6480 ± 0.004 | 0.6469 ± 0.003 | 0.6639 ± 0.001 | 0.8185 ± 0.002
LDA | 0.6529 ± 0.004 | 0.6509 ± 0.002 | 0.6509 ± 0.002 | 0.6664 ± 0.003 | 0.8138 ± 0.002
NN for EHR | 0.6345 ± 0.003 | 0.6419 ± 0.003 | 0.6380 ± 0.003 | 0.6527 ± 0.005 | 0.8176 ± 0.004
XGBoost | 0.6573 ± 0.002 | 0.6488 ± 0.003 | 0.6469 ± 0.003 | 0.6701 ± 0.003 | 0.8180 ± 0.003

Table A5: Comparison of multi-label classification accuracy (weighted F1 score) with various representations on the male patients. We keep the same random seeds across different classifiers, and for the supervised methods, we randomly split the data into train and test (80-20) five times, and calculate the mean and standard deviation of the accuracies. Our model significantly improves the classification for all tested classifiers, demonstrating better separability in space compared to raw data and the state-of-the-art method (DeepPatient).

Model | Not-Transformed | PCA | ICA | DeepPatient | SPHR (Ours)
KNNs | 0.5852 ± 0.005 | 0.5820 ± 0.005 | 0.5734 ± 0.003 | 0.5834 ± 0.002 | 0.7819 ± 0.001
LDA | 0.6011 ± 0.004 | 0.5953 ± 0.003 | 0.5952 ± 0.002 | 0.6080 ± 0.003 | 0.7865 ± 0.002
NN for EHR | 0.5926 ± 0.004 | 0.5925 ± 0.004 | 0.5838 ± 0.003 | 0.5918 ± 0.001 | 0.7884 ± 0.005
XGBoost | 0.5439 ± 0.005 | 0.5583 ± 0.004 | 0.5587 ± 0.005 | 0.5896 ± 0.003 | 0.7845 ± 0.003

Table A6: Comparison of multi-label classification accuracy (micro F1 score) with various representations on the male patients. We keep the same random seeds across different classifiers, and for the supervised methods, we randomly split the data into train and test (80-20) five times, and calculate the mean and standard deviation of the accuracies. Our model significantly improves the classification for all tested classifiers, demonstrating better separability in space compared to raw data and the state-of-the-art method (DeepPatient).

Model | Not-Transformed | PCA | ICA | DeepPatient | SPHR (Ours)
KNNs | 0.6364 ± 0.004 | 0.6355 ± 0.004 | 0.6358 ± 0.004 | 0.6358 ± 0.001 | 0.7921 ± 0.001
LDA | 0.6405 ± 0.003 | 0.6393 ± 0.002 | 0.6392 ± 0.003 | 0.6383 ± 0.003 | 0.7811 ± 0.002
NN for EHR | 0.6345 ± 0.003 | 0.6342 ± 0.003 | 0.6342 ± 0.004 | 0.6438 ± 0.001 | 0.7930 ± 0.003
XGBoost | 0.6409 ± 0.003 | 0.6380 ± 0.003 | 0.6384 ± 0.003 | 0.6403 ± 0.002 | 0.7940 ± 0.003

Table A7: The percentage of apparently-healthy male patients who develop conditions by the next immediate visit within each predicted risk group. Among all methods (top three shown), SPHR-predicted Normal and Higher Risk patients developed the fewest and most conditions, respectively, as expected.

Future Diagnosis | P0 (Not-Transformed): Normal / LR / HR | DeepPatient: Normal / LR / HR | SPHR (Ours): Normal / LR / HR
Cancer | 2.76% / 0.62% / 2.75% | 2.86% / 0.33% / <0.1% | 1.52% / 2.96% / 4.14%
Diabetes | 1.88% / 1.94% / 1.63% | 0.85% / 0.73% / 1.62% | 0.73% / 0.55% / 5.29%
Other Serious Cond. | 9.57% / 6.44% / 9.47% | 9.33% / 4.41% / 7.28% | 2.73% / 8.60% / 12.45%

APPENDIX K ADDITIONAL DETAILS ON MODEL ARCHITECTURES

K.1 SPHR'S NEURAL NETWORK

Figure A5: Architecture of SPHR. Our neural network is composed of three hidden layers, with probabilistic dropouts (p = 0.1) and nonlinear activations (PReLU) in between. In the figure above, b and n denote the number of patients and features, respectively, with d being the output dimension (in our case d = 32).

For readability and reproducibility purposes, we also include a Pytorch snippet of the network used for learning representations from the UK Biobank:

import torch.nn as nn

class SPHR(nn.Module):
    def __init__(self, input_dim: int = 64, output_dim: int = 32):
        super().__init__()
        self.inp_dim = input_dim
        self.out_dim = output_dim
        self.nonlinear_net = nn.Sequential(
            nn.Linear(self.inp_dim, 512),
            nn.Dropout(p=0.1),
            nn.PReLU(),
            nn.Linear(512, 256),
            nn.Dropout(p=0.1),
            nn.PReLU(),
            nn.Linear(256, self.out_dim),
            nn.PReLU()
        )

    def forward_oneSample(self, input_tensor):
        # Useful for the forward method call and for inference.
        return self.nonlinear_net(input_tensor)

    def forward(self, positive, anchor, negative):
        # Forward method for training: embed all three triplet members.
        return (self.forward_oneSample(positive),
                self.forward_oneSample(anchor),
                self.forward_oneSample(negative))

Listing 2: SPHR's network architecture.

We train SPHR by minimizing our proposed NPLB objective, Eq. (3), using the Adam optimizer for 1000 epochs with lr = 0.001, and employ an exponential learning rate decay (γ = 0.95) to decrease the learning rate after every 50 epochs. We set the margin hyperparameter to α = 1. In all experiments, the triplet selection was done in an offline manner using the most common triplet selection scheme (e.g. see https://www.kaggle.com/code/hirotaka0122/triplet-loss-with-pytorch?scriptVersionId=26699660&cellId=6).

K.2 CIFAR-10 EMBEDDING NETWORK AND EXPERIMENTAL SETUP

To further demonstrate the improvements of NPLB on representation learning, we benchmarked various triplet losses on CIFAR10 as well. For this experiment, we trained a randomly-initialized VGG13 (Simonyan & Zisserman, 2015) model (not pre-trained) on CIFAR10 to produce embeddings $m \in \mathbb{R}^{128}$ for 200 epochs, using the Adam optimizer with lr = 0.001 and a decaying schedule (similar to SPHR's optimization setting, as described in the main manuscript). We note that the architecture used in this experiment is identical to a traditional classification VGG model, with the difference being in the "classification" layer, which is re-purposed for producing 128-dimensional embeddings. CIFAR-10 images were normalized using the standard CIFAR-10 transformation and were not augmented during training (i.e. we did not use any augmentations in training the model).
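Under our reading of this setup, the re-purposed classification layer can be sketched with torchvision's stock VGG13 (the helper name is ours; replacing the final classifier layer is an assumption consistent with the description above):

import torch.nn as nn
from torchvision import models

def build_vgg13_embedder(embedding_dim: int = 128) -> nn.Module:
    """Randomly-initialized VGG13 whose last 'classification' layer is
    re-purposed to emit 128-dimensional embeddings (a sketch)."""
    net = models.vgg13(weights=None)  # not pre-trained
    net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, embedding_dim)
    return net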
Table A8: Studying the effects of different augmentation levels on classification as a proxy for all downstream tasks. We followed the same procedure as in all other classification experiments (including model parameters). The results for 1× augmentation are omitted since they are very similar to no augmentation.

| No Augmentation | 3× | 5× | 10×
Females: Multi-Label | 0.5730 ± 0.002 | 0.6642 ± 0.002 | 0.6619 ± 0.003 | 0.6584 ± 0.003
Males: Multi-Label | 0.6247 ± 0.005 | 0.7845 ± 0.003 | 0.7852 ± 0.004 | 0.7685 ± 0.002

K.3 MNIST EMBEDDING NETWORK

For ease of readability and reproducibility, we provide the architecture used for MNIST as a Pytorch snippet:

import torch.nn as nn

class MNIST_Network(nn.Module):
    def __init__(self, embedding_dimension=2):
        super().__init__()
        self.conv_net = nn.Sequential(
            nn.Conv2d(1, 32, 5),
            nn.PReLU(),
            nn.MaxPool2d(2, stride=2),
            nn.Dropout(0.3),
            nn.Conv2d(32, 64, 5),
            nn.PReLU(),
            nn.MaxPool2d(2, stride=2),
            nn.Dropout(0.3)
        )
        self.feedForward_net = nn.Sequential(
            nn.Linear(64 * 4 * 4, 512),
            nn.PReLU(),
            nn.Linear(512, embedding_dimension)
        )

    def forward(self, input_tensor):
        conv_output = self.conv_net(input_tensor)
        # Flatten the 64x4x4 convolutional feature maps.
        conv_output = conv_output.view(-1, 64 * 4 * 4)
        return self.feedForward_net(conv_output)

Listing 3: Network architecture used for validation on MNIST.

We train the network for 50 epochs using the Adam optimizer with lr = 0.001. We set the margin hyperparameter to α = 1.

K.4 FASHION MNIST EMBEDDING NETWORK

For readability and reproducibility purposes, we provide the architecture used for Fashion MNIST as a Pytorch snippet:

import torch.nn as nn

class FMNIST_Network(nn.Module):
    def __init__(self, embedding_dimension=128):
        super().__init__()
        self.conv_net = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3),
            nn.PReLU(),
            nn.MaxPool2d(2, stride=2),
            nn.Dropout(0.1),
            nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5),
            nn.PReLU(),
            nn.MaxPool2d(2, stride=1),
            nn.Dropout(0.2),
            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5),
            nn.AvgPool2d(kernel_size=1),
            nn.PReLU()
        )
        self.feedForward_net = nn.Sequential(
            nn.Linear(64 * 4 * 4, 512),
            nn.PReLU(),
            nn.Linear(512, embedding_dimension)
        )

    def forward(self, input_tensor):
        conv_output = self.conv_net(input_tensor)
        # Flatten the 64x4x4 convolutional feature maps.
        conv_output = conv_output.view(-1, 64 * 4 * 4)
        return self.feedForward_net(conv_output)

Listing 4: Network architecture used for validation on Fashion MNIST.

We train the network for 50 epochs using the Adam optimizer with lr = 0.001. We set the margin hyperparameter to α = 1.

K.5 CLASSIFICATION MODELS

The parameters for NN for EHR were chosen based on Chen et al. (2020a). The "main" parameters for KNN and XGBoost were chosen through a randomized grid search, while the parameters for LDA were unchanged. We specify the parameters identified through grid search in the KNN and XGBoost sections.

K.5.1 NN FOR EHR

We follow the work of Chen et al. (2020a) and construct a feed-forward neural network with an additive attention mechanism in the first layer. As in Chen et al., we choose the learning rate to be lr = 0.001 with an L2 penalty coefficient λ = 0.001, and train the model for 100 epochs.
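Our reading of this baseline can be sketched as follows; only the additive attention in the first layer, the learning rate, the L2 penalty and the epoch count come from the description above, while the layer sizes and helper names are our own assumptions:

import torch
import torch.nn as nn

class AttentionEHRNet(nn.Module):
    """A feed-forward net whose first layer computes additive attention
    weights over the input features (a sketch, not the reference code)."""

    def __init__(self, n_features: int, n_classes: int, hidden: int = 128):
        super().__init__()
        self.attention = nn.Linear(n_features, n_features)  # attention scores
        self.classifier = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes)
        )

    def forward(self, x):
        weights = torch.softmax(self.attention(x), dim=-1)
        return self.classifier(weights * x)  # re-weighted features

# Training setup as described: Adam with lr = 0.001 and an L2 penalty of 0.001,
# e.g. torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.001).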
K.5.2 KNNS

We utilized the Scikit-Learn implementation of K-Nearest Neighbors. The optimal number of neighbors was found via grid search from 10 to 100 neighbors (increasing by 10). For the sake of reproducibility, we provide the parameters using scikit-learn terminology. For more information about the meaning of each parameter (and value), we refer the reviewers to the online documentation: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html.

• Algorithm: Auto
• Leaf Size: 30
• Metric: Minkowski
• Metric Params: None
• n Jobs: -1
• n Neighbors: 50
• p: 2
• Weights: Uniform

K.5.3 LDA

We employed the Scikit-Learn implementation of Linear Discriminant Analysis (LDA) with the default parameters.

K.5.4 XGBOOST

We utilized the official implementation of XGBoost, located at: https://xgboost.readthedocs.io/en/stable/. We optimized model performance through grid search over the learning rate (0.01 to 0.2, increasing by 0.01), max depth (from 1 to 10, increasing by 1) and number of estimators (from 10 to 200, increasing by 10). For the sake of reproducibility, we provide the parameters using the nomenclature of the online documentation.

• Objective: Binary-Logistic
• Use Label Encoder: False
• Base Score: 0.5
• Booster: gbtree
• Callbacks: None
• colsample_bylevel: 1
• colsample_bynode: 1
• colsample_bytree: 1
• Early Stopping Rounds: None
• Enable Categorical: False
• Evaluation Metric: None
• γ (gamma): 0
• GPU ID: -1
• Grow Policy: depthwise
• Importance Type: None
• Interaction Constraints: " "
• Learning Rate: 0.05
• Max Bin: 256
• Max Categorical to Onehot: 4
• Max Delta Step: 0
• Max Depth: 4
• Max Leaves: 0
• Minimum Child Weight: 1
• Missing: NaN
• Monotone Constraints: '()'
• n Estimators: 50
• n Jobs: -1
• Number of Parallel Trees: 1
• Predictor: Auto
• Random State: 0
• reg_alpha: 0
• reg_lambda: 1
• Sampling Method: Uniform
• scale_pos_weight: 1
• Subsample: 1
• Tree Method: Exact
• Validate Parameters: 1
• Verbosity: None
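For convenience, the configuration above corresponds roughly to the following instantiation; parameter names follow the xgboost Python API, and values not listed are left at their defaults (a sketch, not the released code):

from xgboost import XGBClassifier

# Grid-searched values were learning_rate, max_depth and n_estimators.
clf = XGBClassifier(
    objective="binary:logistic",
    learning_rate=0.05,
    max_depth=4,
    n_estimators=50,
    gamma=0,
    min_child_weight=1,
    subsample=1,
    reg_alpha=0,
    reg_lambda=1,
    tree_method="exact",
    n_jobs=-1,
    random_state=0,
)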
APPENDIX L DATA AUGMENTATION SCHEME

Algorithm 1: Proposed Augmentation of Electronic Health Records Data. The proposed strategy ensures that each augmented feature falls between pre-determined ranges for the appropriate gender and age group, which are crucial in diagnosing conditions.

Require: X_dict: a mapping from gender/age condition groups to raw bloodwork and lifestyle matrices
Require: cond_list: a list of all present conditions  # e.g. bona fide healthy, diabetic, etc.
Require: U: a matrix storing the upper bound for feature_j given condition_i
Require: L: a matrix storing the lower bound for feature_j given condition_i
1:  X̃_dict ← Zeros(X_dict)
2:  for condition_i in cond_list do
3:    for feature_j in X_dict[condition_i] do
4:      µ ← Mean(feature_j)
5:      σ ← STD(feature_j)  # standard deviation
6:      z ← −10^16  # initialize
7:      while z ∉ [L_ij, U_ij] do
8:        z ← sample from N(µ, σ)  # draw from the Gaussian distribution
9:      end while
10:     X̃_dict[condition_i][feature_j] ← z  # augmented feature
11:   end for
12: end for

APPENDIX M: COMPLETE LIST OF FEATURES

M.1 UKB FID TO NAME MAPPINGS FOR FEMALE PATIENTS

Lab Metrics:
21003: Age
30160: Basophill count
30220: Basophill percentage
30150: Eosinophill count
30210: Eosinophill percentage
30030: Haematocrit percentage
30020: Haemoglobin concentration
30300: High light scatter reticulocyte count
30290: High light scatter reticulocyte percentage
30280: Immature reticulocyte fraction
30120: Lymphocyte count
30180: Lymphocyte percentage
30050: Mean corpuscular haemoglobin
30060: Mean corpuscular haemoglobin concentration
30040: Mean corpuscular volume
30100: Mean platelet (thrombocyte) volume
30260: Mean reticulocyte volume
30270: Mean sphered cell volume
30080: Platelet count
30110: Platelet distribution width
30010: Red blood cell (erythrocyte) count
30070: Red blood cell (erythrocyte) distribution width
30250: Reticulocyte count
30240: Reticulocyte percentage
30000: White blood cell (leukocyte) count
30620: Alanine aminotransferase
30600: Albumin
30610: Alkaline phosphatase
30630: Apolipoprotein A
30640: Apolipoprotein B
30650: Aspartate aminotransferase
307
1. What is the focus and contribution of the paper on triplet objective function formulation?
2. What are the strengths and weaknesses of the proposed approach, particularly in its empirical analysis?
3. How stable is the model, and how does it scale?
4. How does the proposed approach work for EHR data in an unsupervised setting?
5. What are some concerns regarding the choice of baselines and the methodology used in the evaluation?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The authors propose a novel formulation of the triplet objective function by explicitly regularizing the distance between the positive and negative samples in a triplet. They evaluate their approach on (Fashion) MNIST and EHR data.

Strengths And Weaknesses
While the approach is interesting, I have major concerns regarding the empirical analysis.

Clarity, Quality, Novelty And Reproducibility
How stable is the model? In Table 5, it would be more meaningful to also re-train the embeddings. It is unclear whether the very small improvements on a toy-like dataset such as MNIST are meaningful. It would be more insightful to see results on a slightly more complex dataset such as CIFAR-10 or CIFAR-100. How does the proposed approach scale? How does the proposed approach work for EHR data in an unsupervised setting (online generation of triplets via negative sampling)? I don't really see the practical relevance of the supervised setting here, and importantly, comparisons to the unsupervised methods PCA and ICA are not meaningful. PCA and ICA are very poor baselines as dimensionality reduction methods. It would be more meaningful to see results based on e.g. MDS, a VAE, kernel PCA/GP-LVM, diffusion maps, ... In Fig. 3, how was the UMAP for non-transformed data computed? Based on the PCA representation?
ICLR
Title
No Pairs Left Behind: Improving Metric Learning with Regularized Triplet Objective

Abstract
We propose a novel formulation of the triplet objective function that improves metric learning without additional sample mining or overhead costs. Our approach aims to explicitly regularize the distance between the positive and negative samples in a triplet with respect to the anchor-negative distance. As an initial validation, we show that our method (called No Pairs Left Behind [NPLB]) improves upon the traditional and current state-of-the-art triplet objective formulations on standard benchmark datasets. To show the effectiveness and potential of NPLB on real-world complex data, we evaluate our approach on a large-scale healthcare dataset (UK Biobank), demonstrating that the embeddings learned by our model significantly outperform all other current representations on the tested downstream tasks. Additionally, we provide a new model-agnostic single-time health risk definition that, when used in tandem with the learned representations, achieves the most accurate prediction of subjects' future health complications. Our results indicate that NPLB is a simple, yet effective framework for improving existing deep metric learning models, showcasing the potential implications of metric learning in more complex applications, especially in the biological and healthcare domains. Our code package and tutorial notebooks are available on our public repository: <revealed after the double blind reviews>.

1 INTRODUCTION

Metric learning is the task of encoding similarity-based embeddings, where similar samples are mapped closer in space and dissimilar ones afar (Xing et al., 2002; Wang et al., 2019; Roth et al., 2020). Deep metric learning (DML) has shown success in many domains, including computer vision (Hermans et al., 2017; Vinyals et al., 2016; Wang et al., 2018b) and natural language processing (Reimers & Gurevych, 2019; Mueller & Thyagarajan, 2016; Benajiba et al., 2019). Many DML models utilize paired samples to learn useful embeddings based on distance comparisons. The most common architectures among these techniques are the Siamese (Bromley et al., 1993) and triplet networks (Hoffer & Ailon, 2015). The main components of these models are: (1) strategies for constructing training tuples and (2) objectives that the model must minimize. Though many studies have focused on improving sampling strategies (Wu et al., 2017; Ge, 2018; Shrivastava et al., 2016; Kalantidis et al., 2020; Zhu et al., 2021), modifying the objective function has attracted less attention. Given that learning representations with triplets very often yields better results than with pairs using the same network (Hoffer & Ailon, 2015; Balntas et al., 2016), our work focuses on improving triplet-based DML through a simple yet effective modification of the traditional objective. Modifying DML loss functions often requires mining additional samples, identifying new quantities (e.g. identifying class centers iteratively throughout training (He et al., 2018)), or computing quantities with costly overheads (Balntas et al., 2016), which may limit their applications. In this work, we aim to provide an easy and intuitive modification of the traditional triplet loss that is motivated by metric learning on more complex datasets and by the notion of the density and uniformity of each class.
Our proposed variation of the triplet loss leverages all pairwise distances between the existing pairs in traditional triplets (positive, negative, and anchor) to encourage denser clusters and better separability between classes. This allows for improving already existing triplet-based DML architectures using implementations in standard deep learning (DL) libraries (e.g. TensorFlow), enabling a wider usage of the methods and improvements presented in this work. Many ML algorithms are developed for and tested on datasets such as MNIST (LeCun, 1998) or ImageNet (Deng et al., 2009), which often lack the intricacies and nuances of data in other fields, such as health-related domains (Lee & Yoon, 2017). Unfortunately, this can have direct consequences when we try to understand how ML can help improve care for patients (e.g. diagnosis or prognosis). In this work, we demonstrate that DML algorithms can be effective in learning embeddings from complex healthcare datasets. We provide a novel DML objective function and show that our model's learned embeddings improve downstream tasks, such as classifying subjects and predicting future health risk from a single time point. More specifically, we build upon the DML-learned embeddings to formulate a new mathematical definition of patient health risk using a single time point, which, to the best of our knowledge, does not currently exist. To show the effectiveness of our model and health risk definition, we evaluate our methodology on a large-scale complex public dataset, the UK Biobank (UKB) (Bycroft et al., 2018), demonstrating the implications of our work for both healthcare and the ML community.

In summary, our most important contributions can be described as follows. 1) We present a novel triplet objective function that improves model learning without any additional sample mining or overhead computational costs. 2) We demonstrate the effectiveness of our approach on a large-scale complex public dataset (UK Biobank) and on conventional benchmarking datasets (MNIST, Fashion MNIST (Xiao et al., 2017) and CIFAR10 (Krizhevsky, 2010)). This demonstrates the potential of DML in domains which traditionally may have been less considered. 3) We provide a novel definition of patient health risk from a single time point, demonstrating the real-world impact of our approach by predicting currently healthy subjects' future risks using only a single lab visit, a challenging but crucial task in healthcare.

2 BACKGROUND AND RELATED WORK

Contrastive learning aims to minimize the distance between two samples if they belong to the same class (are similar). As a result, contrastive models require two samples to be inputted before calculating the loss and updating their parameters. This can be thought of as passing two samples to two parallel models with tied weights, hence the names Siamese or Twin networks (Bromley et al., 1993). Triplet networks (Hoffer & Ailon, 2015) build upon this idea to rank positive and negative samples based on an anchor value, thus requiring the model to produce mappings for all three before the optimization step (hence the name triplets).

Modification of Triplet Loss: Due to their success and importance, triplet networks have attracted increasing attention in recent years.
Though the majority of proposed improvements focus on the sampling and selection of the triplets, some studies (Balntas et al., 2016; Zhao et al., 2019; Kim & Park, 2021; Nguyen et al., 2022) have proposed modifications of the traditional triplet loss of Hoffer & Ailon (2015). Similar to our work, Multi-level Distance Regularization (MDR) (Kim & Park, 2021) seeks to regularize the DML loss function. MDR regularizes the pairwise distances between embedding vectors into multiple levels based on their similarity. The goal of MDR is to disturb the optimization of the pairwise distances among examples and to discourage positive pairs from getting too close and negative pairs from getting too distant. A drawback of regularization methods is the choice of the hyperparameter that balances the regularization term, though adaptive balancing methods could be used (Chen et al., 2018; Heydari et al., 2019). Most related to our work, Balntas et al. (2016) modified the traditional objective by explicitly accounting for the distance between the positive and negative pairs (which the traditional triplet function does not consider), and applied their model to learn local feature descriptors using shallow convolutional neural networks. They introduce the idea of the "in-triplet hard negative", referring to the swap of the anchor and positive sample if the positive sample is closer to the negative sample than the anchor, thus improving on the performance of traditional triplet networks (we refer to this approach as Distance Swap). Though this method uses the distance between the positive and negative samples to choose the anchor, it does not explicitly enforce the model to regularize the distance between the two, which was the main issue with the original formulation. Our work addresses this pitfall by using the notion of local density and uniformity (defined later in §3) to explicitly enforce the regularization of the distance between the positive and negative pairs using the distance between the anchors and the negatives. As a result, our approach ensures better inter-class separability while encouraging denser intra-class embeddings. In addition to MDR and Distance Swap, we benchmark our approach against three related and widely-used metric learning algorithms, namely LiftedStruct (Song et al., 2015), N-Pair Loss (Sohn, 2016), and InfoNCE (Oord et al., 2018a). Due to space constraints, and given the popularity of these methods, we provide an overview of these algorithms in Appendix E.

Deep Learned Embeddings for Healthcare: Recent years have seen an increase in the number of DL models for Electronic Health Records (EHR), with several methods aiming to produce rich embeddings to better represent patients (Rajkomar et al., 2018; Choi et al., 2016b; Tran et al., 2015; Nguyen et al., 2017; Choi et al., 2016a; Pham et al., 2017). Though most studies in this area consider temporal components, DeepPatient (Miotto et al., 2016) does not explicitly account for time, making it an appropriate model for comparison with our representation learning approach, given our goal of predicting patients' health risks using a single snapshot. DeepPatient is an unsupervised DL model that seeks to learn general deep representations by employing three stacks of denoising autoencoders that learn hierarchical regularities and dependencies through reconstructing a masked input of EHR features.
We hypothesize that learning patient reconstructions alone (even with masked features) does not help to discriminate between patients based on their similarities. We aim to address this by employing a deep metric learning approach that learns similarity-based embeddings.

Predicting Patients' Future Health Risks: Assessing patients' health risk using EHR remains a crucial, yet challenging task of epidemiology and public health (Li et al., 2015). An example of such challenges is clinically-silent conditions, where patients fall within "normal" or "borderline" ranges for specific known blood work markers while being at risk of developing chronic conditions and co-morbidities that will reduce quality of life and cause mortality later on (Li et al., 2015). Therefore, early and accurate assessment of health risk can tremendously improve patient care, especially for those who may appear "healthy" and do not show severe symptoms. Current approaches for assessing future health complications tie the definition of health risks to multiple time points (Hirooka et al., 2021; Chowdhury & Tomal, 2022; Razavian et al., 2016; Kamal et al., 2020; Cohen et al., 2021; Che et al., 2017). Despite the obvious appeal of such approaches, the use of many visits for modeling and defining risk simply ignores a large portion of patients who do not return for subsequent check-ups, especially those with lower incomes and those without adequate access to healthcare (Kullgren et al., 2010; Taani et al., 2020; Nishi et al., 2019). Given the importance of addressing these issues, we propose a mathematical definition (built upon DML) based on a single time point, which can be used to predict patient health risk from a single lab visit.

3 METHODS

Main Idea of No Pairs Left Behind (NPLB): The main idea behind our approach is to ensure that, during optimization, the distance between positive samples $p_i$ and negative samples $n_i$ is considered and regularized with respect to the anchors $a_i$ (i.e. explicitly introducing a constraint on $d(p_i, n_i)$ that depends on $d(a_i, n_i)$). We visualize this idea in Fig. 1. The mathematical intuition behind our approach can be described by considering in-class local density and uniformity, introduced in Rojas-Thomas & Santos (2021) as an unsupervised clustering evaluation metric. Given a metric learning model $\phi$, let the local density of a sample $p_i \in c_k$ be defined as

$$LD(p_i) = \min_{p_j \in c_k,\, j \neq i} d(\phi(p_i), \phi(p_j)),$$

and let $AD(c_k)$ be the average local density of all points in class $c_k$. An ideal operator $\phi$ would produce embeddings that are compact while well separated from other classes, i.e. the in-class embeddings are uniform. This notion of uniformity is proportional to the difference between the local and average density of each class:

$$\mathrm{Unif}(c_k) = \begin{cases} \sum_{i=1}^{|c_k|} \frac{|LD(p_i) - AD(c_k)|}{AD(c_k) + \xi} & \text{if } |c_k| > 1 \\ 0 & \text{otherwise,} \end{cases} \qquad \text{for } 0 < \xi \ll 1.$$

However, computing the density and uniformity of classes is only possible post hoc, once all labels are present, and is not feasible during training if the triplets are mined in a self-supervised manner. To reduce the complexity and allow for general use, we utilize proxies for the mentioned quantities to regularize the triplet objective using the notion of uniformity. We take the distance between positive and negative pairs as inversely proportional to the local density of a class.
Similarly, the distance between anchors and negative pairs is closely related to the average density, given that a triplet model maps positive pairs inside an $\alpha$-ball of the anchor ($\alpha$ being the margin). In this sense, the uniformity of a class is inversely proportional to $|d(\phi(p_i), \phi(n_i)) - d(\phi(a_i), \phi(n_i))|$.

NPLB Objective: Let $\phi(\cdot)$ denote an operator and $T$ be the set of triplets of the form $(p_i, a_i, n_i)$ (positive, anchor and negative tensors) sampled from a mini-batch $B$ of size $N$. For ease of notation, we write $\phi(q_i)$ as $\phi_q$. Given a margin $\alpha$ (a hyperparameter), the traditional objective function for a triplet network is shown in Eq. (1):

$$\mathcal{L}_{\mathrm{Triplet}} = \frac{1}{N} \sum_{(p_i, a_i, n_i) \in T} \big[ d(\phi_a, \phi_p) - d(\phi_a, \phi_n) + \alpha \big]_+ \tag{1}$$

with $[\cdot]_+ = \max\{\cdot, 0\}$ and $d(\cdot)$ being the Euclidean distance. Minimizing Eq. (1) only ensures that the negative pairs fall outside of an $\alpha$-ball around the anchor $a_i$, while bringing the positive sample $p_i$ inside of this ball (illustrated in Fig. 1), satisfying $d(\phi_a, \phi_n) > d(\phi_a, \phi_p) + \alpha$. However, this objective does not explicitly account for the distance between positive and negative samples, which can impede performance, especially when there exists high in-class variability. Motivated by our main idea of having denser and more uniform in-class embeddings, we add a simple regularization term to address the issues described above, as shown in Eq. (2):

$$\mathcal{L}_{\mathrm{NPLB}} = \frac{1}{N} \sum_{(p_i, a_i, n_i) \in T} \big[ d(\phi_a, \phi_p) - d(\phi_a, \phi_n) + \alpha \big]_+ + \big[ d(\phi_p, \phi_n) - d(\phi_a, \phi_n) \big]^p, \tag{2}$$

where $p \in \mathbb{N}$ and NPLB refers to "No Pairs Left Behind." The regularization term in Eq. (2) enforces positive and negative samples to be roughly the same distance away as all other negative pairings, while still minimizing their distance to the anchor values. However, if not careful, this approach could result in the model learning to map $n_i$ such that $d(\phi_a, \phi_n) > \max\{d(\phi_a, \phi_p) + \alpha,\; d(\phi_p, \phi_n)\}$, which would zero out the triplet term, resulting in a minimization problem with no lower bound¹. To avert such issues, we restrict $p = 2$ (or generally, $p \equiv 0 \pmod 2$) as in Eq. (3):

$$\mathcal{L}_{\mathrm{NPLB}} = \frac{1}{N} \sum_{(p_i, a_i, n_i) \in T} \big[ d(\phi_a, \phi_p) - d(\phi_a, \phi_n) + \alpha \big]_+ + \big[ d(\phi_p, \phi_n) - d(\phi_a, \phi_n) \big]^2. \tag{3}$$

Note that this formulation does not require the mining of any additional samples nor complex computations, since it only uses the existing samples to regularize the embedded space. Moreover,

$$\mathcal{L}_{\mathrm{NPLB}} = 0 \implies -\big[ d(\phi_p, \phi_n) - d(\phi_a, \phi_n) \big]^2 = \big[ d(\phi_a, \phi_p) - d(\phi_a, \phi_n) + \alpha \big]_+,$$

which, considering only the real domain, is possible if and only if $d(\phi_p, \phi_n) = d(\phi_a, \phi_n)$ and $d(\phi_a, \phi_n) \geq d(\phi_a, \phi_p) + \alpha$, explicitly enforcing separation between negative and positive pairs.

¹The mentioned pitfall can be realized by taking $p = 1$, i.e. $\mathcal{L} = \frac{1}{N} \sum_{(p_i, a_i, n_i) \in T} [d(\phi_a, \phi_p) - d(\phi_a, \phi_n) + \alpha]_+ + [d(\phi_p, \phi_n) - d(\phi_a, \phi_n)]$. In this case, the model can learn to map $n_i$ and $a_i$ such that $d(\phi_a, \phi_n) > C$, where $C = \max\{d(\phi_p, \phi_n),\, d(\phi_a, \phi_p) + \alpha\}$, resulting in $\mathcal{L} < 0$.

4 VALIDATION OF NPLB ON STANDARD DATASETS

Prior to testing our methodology on healthcare data, we validate our derivations and intuition on common benchmark datasets, namely MNIST, Fashion MNIST and CIFAR10. To assess the improvement gains from the proposed objective, we refrained from using more advanced triplet construction techniques and followed the most common approach of constructing triplets offline using the labels. We utilized the same architecture and training settings for all experiments, with the only difference per dataset being the objective function (see Appendix K for details on each architecture).
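As a concrete reference for the offline construction just mentioned, the sketch below builds random label-based triplets. The function name and toy data are illustrative assumptions rather than the exact script used in our experiments, and it assumes at least two classes with at least two samples each.

import numpy as np

def build_triplets_offline(labels: np.ndarray, n_triplets: int,
                           rng: np.random.Generator) -> np.ndarray:
    """Randomly build (positive, anchor, negative) index triplets from labels.

    The anchor and positive share a label; the negative is drawn from a
    different class. Returns an integer array of shape (n_triplets, 3).
    """
    classes = np.unique(labels)
    by_class = {c: np.flatnonzero(labels == c) for c in classes}
    triplets = np.empty((n_triplets, 3), dtype=np.int64)
    for t in range(n_triplets):
        c_pos, c_neg = rng.choice(classes, size=2, replace=False)
        positive, anchor = rng.choice(by_class[c_pos], size=2, replace=False)
        negative = rng.choice(by_class[c_neg])
        triplets[t] = (positive, anchor, negative)
    return triplets

# Toy usage: 100 samples over 4 classes, 10 triplets.
rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=100)
print(build_triplets_offline(labels, 10, rng))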
After training, we evaluated our approach quantitatively by assessing the classification accuracy of embeddings produced by optimizing the traditional triplet, Distance Swap and our proposed NPLB objectives. The results for each dataset are presented in Table 1, showing that our approach improves classification. We also assessed the embeddings qualitatively: given the simplicity of MNIST, we designed our model to produce two-dimensional embeddings which we visualized directly. For Fashion MNIST and CIFAR, we generated embeddings in $\mathbb{R}^{64}$ and $\mathbb{R}^{128}$, respectively, and used Uniform Manifold Approximation and Projection (UMAP) (McInnes et al., 2018) to reduce the dimensions for visualization, as shown in Fig. 2. Our results show that networks trained with the NPLB objective produce embeddings that are denser and well separated in space, as desired.

5 IMPROVING PATIENT REPRESENTATION LEARNING

In this section, we aim to demonstrate the potential and implications of our approach on a more complex dataset in three steps: First, we show that deep metric learning improves upon current state-of-the-art patient embedding models (§5.1). Next, we provide a comparison between the NPLB, Distance Swap and traditional triplet loss formulations (§5.2). Lastly, we apply our methodology to predict the health risks of currently healthy subjects from a single time point (§5.3). We focus on presenting results for the female subjects due to space limitations. We note that the results on male subjects are very similar to those for the female population, as presented in Appendix I.

5.1 DEEP METRIC LEARNING FOR BETTER PATIENT EMBEDDINGS

Healthcare datasets are considerably different from those in other domains. Given the restrictions on sharing health-related data (as stipulated by laws such as those defined under the Health Insurance Portability and Accountability Act - HIPAA), most DL-based models are developed and tested on proprietary in-house datasets, making comparisons and benchmarking a major hurdle (Evans, 2016). This is in contrast to other areas of ML, which have established standard datasets (e.g. ImageNet or GLUE (Wang et al., 2018a)). To show the feasibility of our approach, we present the effectiveness of our methodology on the United Kingdom Biobank (UKB) (Bycroft et al., 2018): a large-scale (∼500K subjects) complex public dataset, showing the potential of UKB as an additional benchmark that can be used for developing and testing future DL models in the healthcare domain. UKB contains deep genetic and phenotypic data from approximately 500,000 individuals aged 39-69 across the United Kingdom, collected over many years. We considered patients' lab tests and their approximated activity levels (e.g. moderate or vigorous activity per week) as predictors (the complete list of features used is shown in Appendix M), and their doctor-confirmed conditions and medication history for determining labels. Specifically, we labeled a patient as "unhealthy" if they have confirmed conditions or take medication for treating a condition, and otherwise labeled them as "apparently-healthy". We provide a step-by-step description of our data processing in Appendix F. A close analysis of the UKB data revealed large in-class variability of test ranges, even among those with no current or prior confirmed conditions (the "apparently-healthy" subjects). Moreover, the overall distributions of key metrics are very similar between the unhealthy and apparently-healthy patients (visualized in Appendix G).
As a result, we hypothesized that there exists a continuum among patients' health states, leading to our idea that a similarity-based learned embedding can represent subjects better than other representations for downstream tasks. This idea, in tandem with our assumption of intricate nonlinear relationships among features, naturally motivated our approach of deep metric learning: our goal is to train a model that learns a metric for separating patients in space, based on their similarities and current confirmed conditions (labels). Due to our assumptions, we used the apparently-healthy patients initially as anchor points between the two ends of the continuum (the very unhealthy and the healthy). However, this formulation necessitates identifying a more "reliable" healthy group, often referred to as the bona fide healthy (BFH) group (Cohen et al., 2021)². To find the BFH population, we considered all patients whose key lab tests for common conditions fall within the clinically-normal values. These markers are: Total Cholesterol, HDL Cholesterol, LDL Cholesterol, Triglycerides, Fasting Glucose, HbA1c, and C-Reactive Protein; we refer to this set of metrics as the P0 metrics and provide the traditional "normal" clinical ranges in Appendix H. It is important to note that the BFH population is much smaller than the apparently-healthy group (∼6% and ∼5% of the female and male populations, respectively). To address this issue and to keep DML as the main focus, we implemented a simple yet intuitive rejection-based sampling to generate synthetic BFH patients, though more sophisticated methods could be employed in future work. As in any other rejection-based sampling, and given that lab results often follow a Gaussian distribution (Whyte & Kelly, 2018), we assumed that each feature follows a distribution $\mathcal{N}(\mu_x, \sigma_x)$, where $\mu_x$ and $\sigma_x$ denote the empirical mean and standard deviation of feature $x_i$ over all patients. Since BFH patients are selected if their P0 biosignals fall within the clinically-normal lab ranges, we used the bounds of the clinically-normal range as the accept/reject criteria. Our simple rejection-based sampling scheme is presented in Appendix L.

²Although it is possible to further divide each group (e.g. based on conditions), we chose to keep the patients in three very general groups to show the feasibility of our approach in various health-related domains.

Training Procedure and Model Architecture: Before training, we split the data 70%:30% for training and testing. In the training partition, we augment the bona fide healthy population 3-fold and then generate 100K triplets of the form $(p_i, a_i, n_i)$ randomly in an offline manner. We chose to generate only 100K triplets in order to reduce training time and demonstrate the capabilities of our approach for smaller datasets. Note that in an unsupervised setting, these triplets would need to be generated online via negative sample mining, but this is out of scope for this work given that we have labels a priori. Our model consists of three hidden layers with two probabilistic dropout layers in between and Parametric Rectified Linear Unit (PReLU) (He et al., 2015) nonlinear activations. We present a visual representation of our architecture and its dimensions in Fig. A5.
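To make the augmentation step concrete, below is a minimal sketch of the per-feature rejection sampler described above (Algorithm 1 in Appendix L gives the full pseudocode). The function name, bounds, and toy values are illustrative assumptions, and the loop assumes the clinically-normal range carries non-negligible probability mass under the fitted Gaussian.

import numpy as np

def rejection_sample_feature(values: np.ndarray, low: float, high: float,
                             rng: np.random.Generator) -> float:
    """Draw one synthetic value from N(mean, std) of `values`, re-sampling
    until it falls inside the clinically-normal interval [low, high]."""
    mu, sigma = values.mean(), values.std()
    z = rng.normal(mu, sigma)
    while not (low <= z <= high):
        z = rng.normal(mu, sigma)
    return z

# Toy usage: augment one hypothetical BFH lab feature three times.
rng = np.random.default_rng(0)
lab_values = rng.normal(1.6, 0.3, size=500)  # placeholder measurements
synthetic = [rejection_sample_feature(lab_values, 1.3, 3.0, rng)
             for _ in range(3)]
print(synthetic)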
We optimize the weights by minimizing our proposed NPLB objective, Eq. (3), using Adam (Kingma & Ba, 2014) for 1000 epochs with lr = 0.001, employ an exponential learning rate decay (γ = 0.95) to decrease the learning rate after every 50 epochs, and set the triplet margin to α = 1. For simplicity, we will refer to our model as SPHR (Similarity-based Patient Risk modeling). In order to test the true capability of our model, all evaluations are performed on the non-augmented data.

Results: Similar to §4, we evaluated our deep-learned embeddings on their improvements for binary (unhealthy or apparently-healthy) and multi-class (unhealthy, apparently-healthy or bona fide healthy) classification tasks. The idea here is that if SPHR has learned to separate patients based on their conditions and similarities, then training classifiers on SPHR-produced embeddings should show improvements compared to raw data. We trained four classifiers (k-nearest neighbors (KNN), Linear Discriminant Analysis (LDA), a neural network (NN) for EHR (Chen et al., 2020a), and XGBoost (Chen & Guestrin, 2016)) on raw data (not transformed) and on other common transformations. These transformations include a linear transformation (Principal Component Analysis [PCA]), a non-linear transformation (Diffusion Maps [DiffMap] (Coifman & Lafon, 2006)) and the current state-of-the-art nonlinear transformation, DeepPatient. DeepPatient, PCA, DiffMap, and SPHR are maps $\mathbb{R}^n \to \mathbb{R}^d$, where $n, d \in \mathbb{N}$ denote the initial number of features and the (reduced) embedding dimension, respectively, with $d = 32$ (in order to have the same dimensionality as SPHR), though various choices of $d$ yielded similar results. We present these results in Table 2, comparing the classification weighted F1 score for models trained on raw EHR, linear and nonlinear transformations. We also evaluate the separability qualitatively using UMAP, as shown in Fig. A1. In all tested cases, our model significantly outperforms all other transformations, demonstrating the effectiveness of DML in better representing patients from EHR.

5.2 NPLB SIGNIFICANTLY IMPROVES LEARNING ON COMPLEX DATA

One of our main motivations for modifying the triplet objective was to improve model performance on more complex datasets with larger in-class variability. To evaluate the improvements provided by our NPLB objective, we perform the same analysis as in §4, but this time on the UKB data. Table 3 demonstrates the significant improvement made by our simple modification to the traditional triplet loss, further validating our approach and formulation experimentally.

5.3 PREDICTING HEALTH RISKS FROM A SINGLE LAB VISIT

Definition of Single-Time Health Risk: Predicting patients' future health risks is a challenging task, especially when using only a single lab visit. As described in §2, all current models use multiple assessments for predicting the health risk of a patient; however, these approaches ignore a large portion of the population who do not return for additional check-ups. Motivated by the definition of risk in other fields (e.g. the risk of re-identification of anonymized data (Ito & Kikuchi, 2018)), we provide a simple and intuitive distance-based definition of health risk that addresses the mentioned issues and is well suited for DML embeddings. Given the simplicity of our definition and due to space constraints, we describe the definition below and outline the mathematical framework in Appendix B.
We define the health distance as the Euclidean distance between a subject and the reference bona fide healthy (BFH) subject. Many studies have shown a large discrepancy in lab metrics among different age groups and genders (Cohen et al., 2021). To account for these known differences, we identify a reference vector, which is the median BFH subject from each age group per gender. Moreover, for simplicity and interpretability, we define health risk as discrete groups using the known BFH population: for each stratification $g$ (age and sex), we identify two BFH subjects who are at the 2.5 and 97.5 percentiles (giving us the inner 95% of the distribution), and calculate their distance to the corresponding reference vector. This gives us a distance interval $[t^g_{2.5}, t^g_{97.5}]$. In a group $g$, any new patient whose distance to the reference vector falls inside the corresponding $[t^g_{2.5}, t^g_{97.5}]$ is considered to be "Normal". Similarly, we identify the $[t^g_{1}, t^g_{99}]$ intervals (corresponding to the inner 98% of the BFH group); any new patient whose health distance is within $[t^g_{1}, t^g_{99}]$ but not in $[t^g_{2.5}, t^g_{97.5}]$ is considered to be in the "Lower Risk" (LR) group. Lastly, any patient with a health distance outside of these intervals is considered to have "Higher Risk" (HR). The UKB data is a good candidate for predicting potential health risks, given that it includes subsequent follow-ups where a subset of patients are invited for a repeat assessment. The first follow-up was done between 2012-2013 and included approximately 20,000 individuals (Littlejohns et al., 2019) (a 25× reduction, with many measurements missing). Based on our goal of predicting risk from a single visit, we only include the patients' first visits for modeling, and use the 2012-2013 follow-ups for evaluating the predictions.

Results: We utilized the single-time health risk definition to predict a patient's future potential for health complications. To demonstrate the versatility of our approach, we predicted general health risks that were used for all available health conditions (namely Cancer, Diabetes and Other Serious Conditions); however, we hypothesize that constraining health risks based on specific conditions would improve risk predictions. We considered five methods for assigning a risk group to each patient: (i) Euclidean distance on raw data (preprocessed but not transformed), (ii) Mahalanobis distance on pre-processed data, (iii) Euclidean distance on the key metrics (P0) for the available conditions (described in §5.1), thereby hand-crafting features and reducing dimensionality in order to achieve an upper-bound performance for most traditional methods (though this will not be possible for all diseases). The last two methods we consider are deep representation learning methods: (iv) DeepPatient and (v) SPHR embeddings (our proposed model). We use the Euclidean metric for calculating the distance between the deep-learned representations of (iv) and (v). For all approaches, we assigned all patients to one of the three risk groups using biosignals from a single visit, and calculated the percentage of patients who developed a condition in the immediate next visit. Intuitively, patients who fall under the "Normal" group should have fewer confirmed cases compared to subjects in the "Lower Risk" or "Higher Risk" groups.
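For illustration, the interval construction and risk assignment above can be sketched as follows. The names are assumptions, `bfh_dist` stands in for the distances between one (age, sex) BFH group's embeddings and its reference vector, and taking percentiles of the distances directly is a simplification of identifying the percentile subjects themselves.

import numpy as np

def risk_intervals(bfh_dist: np.ndarray):
    """Build the Normal ([t_2.5, t_97.5]) and Lower-Risk ([t_1, t_99])
    distance intervals from one stratification's BFH population."""
    normal = np.percentile(bfh_dist, [2.5, 97.5])
    lower_risk = np.percentile(bfh_dist, [1.0, 99.0])
    return normal, lower_risk

def assign_risk(d: float, normal: np.ndarray, lower_risk: np.ndarray) -> str:
    """Map a new patient's health distance to one of the three risk groups."""
    if normal[0] <= d <= normal[1]:
        return "Normal"
    if lower_risk[0] <= d <= lower_risk[1]:
        return "Lower Risk"
    return "Higher Risk"

# Toy usage with placeholder BFH distances.
rng = np.random.default_rng(0)
bfh_dist = rng.gamma(2.0, 1.0, size=1000)
normal, lower_risk = risk_intervals(bfh_dist)
print(assign_risk(1.5, normal, lower_risk), assign_risk(9.0, normal, lower_risk))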
Table 4 shows the results for the top three methods, with our approach consistently matching the intuitive criteria: among the five methods, SPHR-predicted patients in the Normal risk group have the fewest instances of developing future conditions, while the ones predicted as Higher Risk have the most instances of developing future conditions (Table 4).

6 CONCLUSION AND DISCUSSION

We present a simple and intuitive variation of the traditional triplet loss (NPLB) which regularizes the distance between positive and negative samples based on the distance between anchor and negative pairs. To show the general applicability of our methods and as an initial validation step, we tested our model on three standard benchmarking datasets (MNIST, Fashion MNIST and CIFAR-10) and found that our NPLB-trained model produced better embeddings. To demonstrate the real-world impact of DML, we applied our methodology (through our proposed framework, SPHR) to the UKB to classify patients and predict future health risk for currently healthy patients, using only a single time point. Motivated by risk prediction in other domains, we provide a distance-based definition of health risk. Utilizing this definition, we partitioned patients into three health risk groups (Normal, Lower Risk and Higher Risk). Among all methods, SPHR-predicted Higher Risk healthy patients had the highest percentage of actually developing conditions in the next visit, while SPHR-predicted Normal patients had the lowest instances, which is desired. Although the main focus of our work was modifying the objective function, a limitation of our work is the simple triplet sampling that we employed, particularly when applied to healthcare. We anticipate additional improvement gains from employing online triplet sampling or extending our work to be self-supervised (Chen et al., 2020b; Oord et al., 2018b; Wang et al., 2021). The implications of our work are threefold: (1) Our proposed objective has the potential to improve existing triplet-based models without requiring additional sample mining or computationally intensive operations. We anticipate that combining our work with existing triplet sampling can further improve model learning and results. (2) Models for predicting patients' health risks are nascent and often require time-series data. Our experiments demonstrated the potential improvements gained by developing DML-based models for learning patient embeddings, which in turn can improve patient care. Our results show that more general representation learning models are valuable in pre-processing EHR data and producing deep-learned embeddings that can be used (or fine-tuned) for more specific downstream analyses. We believe additional analysis of the learned embedding space can prove useful for various tasks. For example, we show that there exists a relationship between distances in the embedded space and the time to develop a condition, which we present in Appendix C. The rapid growth of healthcare data, such as EHR, necessitates the use of large-scale algorithms that can analyze such data in a scalable manner (Evans, 2016). Currently, most applications of ML in healthcare are formulated for small-scale studies with proprietary data, or use the publicly-available MIMIC dataset (Miotto et al., 2016; Johnson et al., 2016), which is not as large-scale and complex as the UKB. As in our work, we believe that future DL models can benefit from using the UKB for development and benchmarking.
(3) Evaluating health risk based on a single lab visit can enable clinicians to flag high-risk patients early on, potentially reducing the number (and the scope) of costly tests and significantly improving care for the most vulnerable individuals in a population.

REPRODUCIBILITY

Our code package and tutorial notebooks are all publicly available on the authors' GitHub at: <revealed after the double blind reviews>, and we will actively monitor the repository for any issues or improvements suggested by users. Moreover, we have designed our Appendix to be a comprehensive guide for reproducing our results and experiments as well. Our Appendix includes all features used from the UK Biobank (for male and female patients), a complete list of model parameters (for classification models), a detailed definition of single-time health risk, as well as pseudocode and descriptions of the architectures we developed for our experiments.

ACKNOWLEDGMENTS

Will be added after double blind reviews.

Appendix

APPENDIX A: RUNTIME AND SCALING ANALYSIS OF SPHR

To measure the scalability of SPHR, we generated a random dataset consisting of 100,000 samples with 64 linearly-independent (also known as "informative") features and 10 classes, resulting in a matrix $X \in \mathbb{R}^{100000 \times 64}$. From this dataset, we then randomly generated varying numbers of triplets following the strategy described in the main manuscript and in Hoffer & Ailon (2015), representing a computationally intensive case (as opposed to more intricate and faster mining schemes). The average of five mining runs is presented in the "Avg. Mining Time" column of Table A1. Next, we trained SPHR on the varying numbers of triplets five times using (1) a Google Compute Engine instance with 48 logical cores (CPU) and (2) a Google virtual machine equipped with one NVIDIA V100 GPU (referred to as GPU). The average training times for CPUs and GPUs are shown in Table A1.

Table A1: Average training time of the proposed deep learning model (SPHR). All experiments were done under the same computational settings. The times shown below are the average of 5 runs with all identical settings.

Number of Triplets | Avg. CPU Training Time | Avg. GPU Training Time | Avg. Mining Time
1,000 | 8.28 Mins | 3.95 Mins | <0.01 Mins
5,000 | 15.07 Mins | 5.96 Mins | 0.01 Mins
10,000 | 31.02 Mins | 7.60 Mins | 0.12 Mins
50,000 | 76.24 Mins | 16.79 Mins | 0.65 Mins
100,000 | 133.26 Mins | 27.15 Mins | 1.27 Mins
500,000 | 512.75 Mins | 94.79 Mins | 5.58 Mins

APPENDIX B: MATHEMATICAL DEFINITION OF SINGLE-TIME HEALTH RISK

In this section, we aim to provide a mathematical definition of health risk that can measure similarity between a new cohort and an existing bona fide healthy population without requiring temporal data. This definition is inspired by the formulation of risk in other domains, such as defining the risk of re-identification of anonymized data (Ito & Kikuchi, 2018). We first define the notion of a "health distance", and use that to formulate threshold intervals which enable us to define health risk as a mapping between continuous values and discrete risk groups.

Definition Appendix B.1. Given a space $X$, a bona fide healthy population distribution $B$ and a new patient $p$ (all in $X$), the health score $s_q$ of $p$ is: $s_q(p) = d(P_q(B), p)$, where $P_q(B)$ denotes the $q$th percentile of $B$, and $d(\cdot)$ refers to a metric defined on $X$ (potentially a pseudo-metric).

Definition Appendix B.1 provides a measure of distance between a new patient and an existing reference population, which can be used to define similarity.
For example, if $d(\cdot)$ is the Euclidean distance, then the similarity between patients $x$ and $y$ is $\mathrm{sim}(x, y) = \frac{1}{1 + d(x, y)}$. Next, using this notion of distance (and similarity), we define risk thresholds that allow for the grouping of patients.

Definition Appendix B.2. A threshold interval $I_q = [t^l_q, t^u_q]$ is defined by the distances between the vectors bounding the inner $q$ percent of a distribution $H$ and the median value. Let $n = (100 - q)/2$; then we have $I_q$ as:

$$I_q = [t^l_q, t^u_q] = [\, d(P_n(H), P_{50}(H)),\; d(P_{q+n}(H), P_{50}(H)) \,].$$

Additionally, for the sake of interpretability and convenience, we can define health risk groups using known $I_q$ values, as below.

Definition Appendix B.3. Let $M : \mathbb{R}^d \to V \cup \{\eta\}$, where $d$ denotes the number of features defining a patient, with $\eta$ being a catch-all discrete group and $V$ denoting pre-defined risk groups based on $k \in \mathbb{N}$ many intervals (i.e. $|V| = |I_q| = k$). Using the same notions of health distance and threshold interval $I_q$ as before, we define a patient $p$'s health risk group as:

$$M(p) = \begin{cases} V_q & \text{if } s_q(p) \in I_q \\ \eta & \text{otherwise.} \end{cases}$$

Figure A1: Qualitative assessment of our approach on UKB female patients. The NPLB-trained network better separates subjects, resulting in significant improvements in classification of patients, as shown in Table 3. Note the continuum of patients for both metric learning techniques, especially among the apparently-healthy patients (in yellow).

We utilize our deep metric learning model and the definitions above in tandem to predict health risks. That is, we first produce embeddings for all patients using our learned nonlinear operator $G$, and then use the distance between the bona fide healthy (BFH) population and new patients to assign them a risk group. Mathematically, given the set of BFH populations for all groups, i.e. $B_{\mathrm{All}} = \{B_{[36,45]}, B_{[46,50]}, B_{[51,55]}, B_{[56,60]}, B_{[61,65]}, B_{[66,75]}\}$, we take the reference value per age group to be $\tilde{r}_{age} = G(P_{50}(B_{age}))$, where $B_{age}$ denotes the BFH population for the age group $age$. Then, using Definition Appendix B.2, we define:

$$N_{age} = I^{age}_{95} = [\, d(P_{2.5}(B_{age}), P_{50}(B_{age})),\; d(P_{97.5}(B_{age}), P_{50}(B_{age})) \,] \tag{A.4a}$$
$$LR_{age} = I^{age}_{98} = [\, d(P_{1}(B_{age}), P_{50}(B_{age})),\; d(P_{99}(B_{age}), P_{50}(B_{age})) \,] \tag{A.4b}$$

Note that $N_{age} \subset LR_{age}$. Lastly, using the intervals defined in Eq. (A.4), we define the mapping $M$ as shown in Eq. (A.5):

$$M_{age}(p) = \begin{cases} \text{Normal} & \text{if } s_n(G(p)) = d(G(p), \tilde{r}_{age}) \in N_{age} \\ \text{Lower Risk} & \text{if } s_n(G(p)) = d(G(p), \tilde{r}_{age}) \in LR_{age} \setminus N_{age} \\ \text{Higher Risk} & \text{otherwise.} \end{cases} \tag{A.5}$$

APPENDIX C: PREDICTING PATIENTS' HEALTH RISK IN TIME

Given the performance of our DML model in classifying subjects and in health risk assessment, we hypothesized that we can retrieve a relationship between spatial distance (in the embedded space) and a patient's time to develop a condition using only a single lab visit. Let us assume that subjects start from a "healthy" point and move along a trajectory (among many trajectories) to ultimately become "unhealthy" (similar to the principle of entropy). In this setting, we hypothesized that our model maps patients in space based on their potential of moving along the trajectory of becoming unhealthy. To test our hypothesis, we designed an experiment to use our distance-based definition of health risk and the NPLB embeddings to further stratify patients based on the immediacy of their health risk (time). More specifically, we investigated the correlation between the spatial locations of the currently-healthy patients in the embedded space and the time at which they develop a condition.
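The statistical test underlying the analysis that follows is a plain Pearson correlation between health distances and times to diagnosis; the sketch below uses hypothetical stand-in arrays rather than UKB values.

import numpy as np
from scipy import stats

# Placeholder health distances and times to diagnosis (in years).
rng = np.random.default_rng(0)
health_dist = rng.gamma(2.0, 1.0, size=18)
time_to_dx = 10.0 - 2.0 * health_dist + rng.normal(0.0, 1.0, size=18)

r, p = stats.pearsonr(health_dist, time_to_dx)
print(f"r = {r:.2f}, p = {p:.4f}")  # a negative r mirrors the reported trend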
Similar to the health risk prediction experiment in the main manuscript, we computed the distance between each patient's embedding and the corresponding reference value in the bona fide healthy population (refer to the main manuscript for details on this procedure). We then extracted all healthy patients at the time of the first visit who returned in 2012-2013 or in 2014 (and after) for reassessment or imaging visits. It is important to again note that there is a significant drop in the number of returning patients for subsequent visits. Among the retrieved patients, we calculated the number of individuals who developed Cancer, Diabetes or Other Serious Conditions. We found strong negative correlations between the calculated health score (distances) and time of diagnosis for Other Serious Conditions (r = −0.72, p = 0.00042) and Diabetes (r = −0.64, p = 0.0071), with Cancer having the least correlation (r = −0.20, p = 0.025). Note that the lab tests used as predictors are associated with diagnosing metabolic health conditions and less associated with diagnosing cancer, which could explain the low correlation between health score and time of developing cancer. These results indicate that the metric learned by our model accounts for the immediacy of health risk, mapping patients who are at a higher risk of developing health conditions farther from those who are at a lower risk (hence the negative correlation).

APPENDIX D: NPLB CONDITION IN MORE DETAIL

In this section, we aim to take a closer look at the minimizer of our proposed objective. Using the same notation as in the main manuscript, we define the following variables for convenience:

$$\delta_+ \triangleq d(\phi_a, \phi_p), \qquad \delta_- \triangleq d(\phi_a, \phi_n), \qquad \rho \triangleq d(\phi_p, \phi_n).$$

With this notation, we can rewrite our proposed No Pairs Left Behind objective as:

$$\mathcal{L}_{\mathrm{NPLB}} = \frac{1}{N} \sum_{(p_i, a_i, n_i) \in T} [\delta_+ - \delta_- + \alpha]_+ + (\rho - \delta_-)^2. \tag{A.7}$$

Note that since $[\delta_+ - \delta_- + \alpha]_+ \geq 0$, $\mathcal{L}_{\mathrm{NPLB}} = 0$ if and only if each term of the summation is identically zero. This yields the following relation:

$$-(\rho - \delta_-)^2 = [\delta_+ - \delta_- + \alpha]_+ \tag{A.8}$$

which, considering the real solutions, is only valid if $\rho = \delta_-$ and $\delta_- \geq \delta_+ + \alpha$, and therefore $\rho \geq \delta_+ + \alpha$. As a result, the regularization term enforces the distance between the positive and the negative samples to be at least $\delta_+ + \alpha$, leading to denser clusters that are better separated from other classes in space. NPLB can be very easily implemented using existing implementations in standard libraries. As an example, we provide a PyTorch implementation of our approach:

from typing import Callable, Optional
import torch

class NPLBLoss(torch.nn.Module):

    def __init__(self,
                 triplet_criterion: torch.nn.TripletMarginLoss,
                 metric: Optional[Callable[
                     [torch.Tensor, torch.Tensor],
                     torch.Tensor]] = torch.nn.functional.pairwise_distance):
        """Initializes the instance with backbone triplet and distance metric."""
        super().__init__()
        self.triplet = triplet_criterion
        self.metric = metric

    def forward(self, anchor: torch.Tensor, positive: torch.Tensor,
                negative: torch.Tensor) -> torch.Tensor:
        """Forward method of NPLB loss."""
        # Traditional triplet as the first component of the loss function.
        triplet_loss = self.triplet(anchor, positive, negative)
        # Pairwise distances used by the NPLB regularization term.
        positive_to_negative = self.metric(positive, negative, keepdim=True)
        anchor_to_negative = self.metric(anchor, negative, keepdim=True)
        # Here we use the reduction 'mean', but it can be any kind
        # that the DL library would support.
        return triplet_loss + torch.mean(
            torch.pow((positive_to_negative - anchor_to_negative), 2))

Listing 1: PyTorch implementation of NPLB.
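As a usage sketch (not part of the original implementation), the loss above drops into a standard training loop. The tiny embedding network and the random triplet tensors below are placeholders, and any of the architectures in Appendix K (or SPHR itself) can be substituted; NPLBLoss refers to the class defined in Listing 1.

import torch

# Placeholder embedding network; substitute an Appendix K architecture here.
model = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.PReLU())
criterion = NPLBLoss(torch.nn.TripletMarginLoss(margin=1.0))
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# Placeholder offline triplets: (positive, anchor, negative) feature rows.
positive = torch.randn(256, 64)
anchor = torch.randn(256, 64)
negative = torch.randn(256, 64)

for epoch in range(10):
    emb_a, emb_p, emb_n = model(anchor), model(positive), model(negative)
    loss = criterion(emb_a, emb_p, emb_n)  # forward(anchor, positive, negative)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()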
APPENDIX E: LIFTEDSTRUCT, N-PAIR LOSS, AND INFONCE

In this section, we provide a brief description of three popular deep metric learning models that are related to our work. We also describe the implementations used for these models, and present a complete comparison of all methods on all datasets in this work. In LiftedStruct (Song et al., 2015), the authors propose to take advantage of the full batch for comparing pairs, as opposed to traditional approaches where positive and negative pairs are pre-defined for an anchor. The authors describe their approach as "lifting" the vector of pairwise distances for each batch to the matrix of pairwise distances. N-Pair Loss (Sohn, 2016) is a generalization of the traditional triplet loss which aims to address the "slow" convergence of traditional triplet models by considering $N - 1$ negative examples instead of the one negative pair considered in the traditional approach. InfoNCE (Oord et al., 2018a) is a generalization of N-Pair loss that is also known as the normalized temperature-scaled cross-entropy loss (NT-Xent). This loss aims to maximize the agreement between positive samples. Both the N-Pair and InfoNCE losses relate to our work through their formulations of the metric learning objective, which are closely related to the triplet loss. To compare our approach against these algorithms, we leveraged the widely used Pytorch Metric Learning (PML) package (Musgrave et al., 2020). The complete results of our comparisons on all tested datasets are shown below in Table A2.

Table A2: Comparison of state-of-the-art (SOTA) triplet losses with our proposed objective function (complete version of Tables 1 and 3 of the main manuscript). The classifications were done on the embeddings using XGBoost for five different train-test splits, with the average weighted F1 score reported below. We note that the improved performance of the NPLB-trained model was consistent across different classifiers. The UKB results are for the multi-class classification.

Method | MNIST | FashionMNIST | CIFAR10 | UKB (Females) | UKB (Males)
Trad. Triplet Loss | 0.9859 ± 0.0009 | 0.9394 ± 0.001 | 0.8036 ± 0.028 | 0.5874 ± 0.001 | 0.6861 ± 0.003
N-Pair | 0.9863 ± 0.0003 | 0.9586 ± 0.003 | 0.7936 ± 0.034 | 0.6064 ± 0.004 | 0.6961 ± 0.002
LiftedStruct | 0.9853 ± 0.0007 | 0.9495 ± 0.002 | 0.7946 ± 0.041 | 0.5994 ± 0.003 | 0.6989 ± 0.004
MDR | 0.9886 ± 0.0003 | 0.9557 ± 0.003 | 0.8152 ± 0.027 | 0.6047 ± 0.005 | 0.6964 ± 0.002
InfoNCE | 0.9858 ± 0.0002 | 0.9581 ± 0.004 | 0.8039 ± 0.026 | 0.6103 ± 0.002 | 0.6816 ± 0.003
Distance Swap | 0.9891 ± 0.0003 | 0.9536 ± 0.001 | 0.8285 ± 0.022 | 0.5416 ± 0.004 | 0.6628 ± 0.002
NPLB (Ours) | 0.9954 ± 0.0003 | 0.9664 ± 0.001 | 0.8475 ± 0.025 | 0.6642 ± 0.002 | 0.7845 ± 0.003

Figure A2: Visualization of the data processing scheme described in Appendix F.

APPENDIX F: UK BIOBANK DATA PROCESSING

Given the richness and complexity of the UKB and the scope of this work, we subset the data to include patients' age and gender (demographics), numerous lab metrics (objective features), Metabolic Equivalent Task (MET) scores for vigorous/moderate activity and self-reported hours of sleep (lifestyle) (complete list of features in Appendix M). Additionally, we leverage doctor-confirmed conditions as well as current medication to assess subjects' health (assigning labels; not used as predictors). After selecting these features, we use the following scheme to partition the subjects (illustrated in Fig. A2):
1. Ensure all features are at least 75% complete (i.e. at least 75% of patients have a non-null value for that feature).
2. Exclude subjects with any null values.
3. Split the resulting data according to biological sex (male or female), and perform quantile normalization (as in Cohen et al. (2021)).
4. For each sex, partition patients into an "unhealthy" population (those who have at least one doctor-confirmed health condition or take medication for treating a serious condition) and an "apparently healthy" population (those who do not have any serious health conditions and do not take medications for treating such illnesses). This data is used for training our neural network.
5. Split patients into six different age groups: each age group is constructed so that the number of patients in each group is of the same order, while the bias in the data is preserved (age groups are shown in Fig. A2). These age groups are used to determine age-specific references at the time of risk prediction.

APPENDIX G: SIMILARITY OF DISTRIBUTIONS FOR KEY METRICS AMONG PATIENTS

Although lab metric ranges seem very different at first glance, a look at the age-stratified ranges of tests shows similarity between the apparently-healthy and unhealthy patients. Additionally, if we further stratify the data based on lifestyle, the similarities between the two health groups become even more evident. The additional filtering is as follows: we identify the median sleep hours per group as well as "active" and "less active" individuals. We define active as someone who is moderately active for 150 minutes or vigorously active for 75 minutes per week. We use this additional filtering to further stratify patients in each age group. Below, in Fig. A3 and A4, we show examples of these similarities for two age groups chosen at random. These results motivated our approach of identifying the bona fide healthy population to be used as reference points.

Figure A3: Distribution similarity of key lab metrics between apparently-healthy and unhealthy female patients. We present the violin plots for Total Cholesterol (left) and LDL Cholesterol (right) for patients between the ages of 36-45 (chosen at random). This figure aims to illustrate the similarity between these distributions based on lifestyle and age. That is, by stratifying the patients based on their sleep and activity, we can see that health status alone cannot separate the patients well, given the similarity in the signals.

APPENDIX H: NORMAL RANGES FOR KEY METRICS

Below we provide a list of the current "normal" lab ranges for the key metrics that determined the bona fide healthy population:

Key Biomarker | Gender Specific? | Range for Males | Range for Females | Reference
Total Cholesterol | No | ≤ 5.18 mmol/L | ≤ 5.18 mmol/L | Link to Reference 1, Link to Reference 2
HDL | Yes | ≥ 1 mmol/L | ≥ 1.3 mmol/L | Link to Reference 1, Link to Reference 2
LDL | No | ≤ 3.3 mmol/L | ≤ 3.3 mmol/L | Link to Reference 1, Link to Reference 2
Triglycerides | No | ≤ 1.7 mmol/L | ≤ 1.7 mmol/L | Link to Reference 1, Link to Reference 2
Fasting Glucose | No | ∈ [70, 100] mg/dL | ∈ [70, 100] mg/dL | Link to Reference 3
HbA1c | No | < 42 mmol/mol | < 42 mmol/mol | Link to Reference 4
C-Reactive Protein | No | < 10 mg/L | < 10 mg/L | Link to Reference 5

APPENDIX I: PERFORMANCE OF SPHR ON MALE SUBJECTS

In this section, we present the results of the experiments in the main manuscript (which were done for female patients) for the male patients. The classification results are presented in Tables A3, A4, A5, and A6, and the health risk predictions are shown in Table A7.
Figure A4: Distribution similarity of key lab metrics between apparently-healthy and unhealthy female patients. We present the violin plots for HDL Cholesterol (left) and Triglycerides (right) for patients between the ages of 55-60 (age group chosen at random). This figure aims to illustrate the similarity between these distributions based on lifestyle and age. That is, by stratifying the patients based on their sleep and activity, we can see that health status alone cannot separate the patients well, given the similarity in the signals.

Table A3: Comparison of binary classification performance (weighted F1 score) with various representations on the male patients. In this case, we consider the bona fide healthy patients as healthy patients and train each model to predict binary labels. We keep the same random seeds across different classifiers, and for the supervised methods, we randomly split the data into train and test (80-20) five times, and calculate the mean and standard deviation of the accuracies. Our model significantly improves the classification for all tested classifiers, demonstrating better separability in space compared to raw data and the state-of-the-art method (DeepPatient).

Model | Not-Transformed | ICA | PCA | DeepPatient | SPHR (Ours)
KNNs | 0.6200 ± 0.004 | 0.6167 ± 0.003 | 0.6077 ± 0.003 | 0.6224 ± 0.001 | 0.8163 ± 0.002
LDA | 0.6275 ± 0.005 | 0.6227 ± 0.004 | 0.6227 ± 0.003 | 0.6385 ± 0.002 | 0.8141 ± 0.002
NN for EHR | 0.5926 ± 0.014 | 0.6301 ± 0.018 | 0.6105 ± 0.021 | 0.6148 ± 0.032 | 0.8092 ± 0.011
XGBoost | 0.5975 ± 0.004 | 0.5804 ± 0.004 | 0.6157 ± 0.004 | 0.6101 ± 0.004 | 0.8160 ± 0.003

APPENDIX J: EFFECTS OF AUGMENTATION ON DML

In order to evaluate the effect of augmenting the bona fide population and to determine the appropriate fold increase, we trained SPHR with different levels of augmentation and evaluated the effect of each fold increase through multi-label classification performance. More specifically, we created new augmented datasets with no augmentation and with 1×, 3×, 5× and 10× augmentation, generated the same number of triplets (as described previously), and trained SPHR. We evaluated the multi-label classification using the same approach and classifiers as before (described in the main manuscript) and present the results of the XGBoost classification in Table A8. Based on our findings and considerations of computational efficiency, we chose 3× augmentation as the appropriate fold increase.

Table A4: Comparison of binary classification performance (micro F1 score) with various representations on the male patients. In this case, we consider the bona fide healthy patients as healthy patients and train each model to predict binary labels. We keep the same random seeds across different classifiers, and for the supervised methods, we randomly split the data into train and test (80-20) five times, and calculate the mean and standard deviation of the accuracies. Our model significantly improves the classification for all tested classifiers, demonstrating better separability in space compared to raw data and the state-of-the-art method (DeepPatient).
Model | Not-Transformed | ICA | PCA | DeepPatient | SPHR (Ours)
KNNs | 0.6490 ± 0.004 | 0.6480 ± 0.004 | 0.6469 ± 0.003 | 0.6639 ± 0.001 | 0.8185 ± 0.002
LDA | 0.6529 ± 0.004 | 0.6509 ± 0.002 | 0.6509 ± 0.002 | 0.6664 ± 0.003 | 0.8138 ± 0.002
NN for EHR | 0.6345 ± 0.003 | 0.6419 ± 0.003 | 0.6380 ± 0.003 | 0.6527 ± 0.005 | 0.8176 ± 0.004
XGBoost | 0.6573 ± 0.002 | 0.6488 ± 0.003 | 0.6469 ± 0.003 | 0.6701 ± 0.003 | 0.8180 ± 0.003

Table A5: Comparison of multi-label classification accuracy (weighted F1 score) with various representations on the male patients. We keep the same random seeds across different classifiers, and for the supervised methods, we randomly split the data into train and test (80-20) five times, and calculate the mean and standard deviation of the accuracies. Our model significantly improves the classification for all tested classifiers, demonstrating better separability in space compared to raw data and the state-of-the-art method (DeepPatient).

Model | Not-Transformed | PCA | ICA | DeepPatient | SPHR (Ours)
KNNs | 0.5852 ± 0.005 | 0.5820 ± 0.005 | 0.5734 ± 0.003 | 0.5834 ± 0.002 | 0.7819 ± 0.001
LDA | 0.6011 ± 0.004 | 0.5953 ± 0.003 | 0.5952 ± 0.002 | 0.6080 ± 0.003 | 0.7865 ± 0.002
NN for EHR | 0.5926 ± 0.004 | 0.5925 ± 0.004 | 0.5838 ± 0.003 | 0.5918 ± 0.001 | 0.7884 ± 0.005
XGBoost | 0.5439 ± 0.005 | 0.5583 ± 0.004 | 0.5587 ± 0.005 | 0.5896 ± 0.003 | 0.7845 ± 0.003

APPENDIX K: ADDITIONAL DETAILS ON MODEL ARCHITECTURES

K.1 SPHR'S NEURAL NETWORK

Figure A5: Architecture of SPHR. Our neural network is composed of three hidden layers, with probabilistic dropouts (p = 0.1) and nonlinear activations (PReLU) in between. In the figure above, b and n denote the number of patients and features, respectively, with d being the output dimension (in our case, d = 32).

Table A6: Comparison of multi-label classification accuracy (micro F1 score) with various representations on the male patients. We keep the same random seeds across different classifiers, and for the supervised methods, we randomly split the data into train and test (80-20) five times, and calculate the mean and standard deviation of the accuracies. Our model significantly improves the classification for all tested classifiers, demonstrating better separability in space compared to raw data and the state-of-the-art method (DeepPatient).

Model | Not-Transformed | PCA | ICA | DeepPatient | SPHR (Ours)
KNNs | 0.6364 ± 0.004 | 0.6355 ± 0.004 | 0.6358 ± 0.004 | 0.6358 ± 0.001 | 0.7921 ± 0.001
LDA | 0.6405 ± 0.003 | 0.6393 ± 0.002 | 0.6392 ± 0.003 | 0.6383 ± 0.003 | 0.7811 ± 0.002
NN for EHR | 0.6345 ± 0.003 | 0.6342 ± 0.003 | 0.6342 ± 0.004 | 0.6438 ± 0.001 | 0.7930 ± 0.003
XGBoost | 0.6409 ± 0.003 | 0.6380 ± 0.003 | 0.6384 ± 0.003 | 0.6403 ± 0.002 | 0.7940 ± 0.003

Table A7: The percentage of apparently-healthy male patients who develop conditions in the next immediate visit within each predicted risk group. Among all methods (top three shown), SPHR-predicted Normal and Higher Risk patients developed the fewest and most conditions, respectively, as expected.
Future Diagnosis | P0 (Not-Transformed): Normal / LR / HR | DeepPatient: Normal / LR / HR | SPHR (Ours): Normal / LR / HR
Cancer | 2.76% / 0.62% / 2.75% | 2.86% / 0.33% / <0.1% | 1.52% / 2.96% / 4.14%
Diabetes | 1.88% / 1.94% / 1.63% | 0.85% / 0.73% / 1.62% | 0.73% / 0.55% / 5.29%
Other Serious Cond | 9.57% / 6.44% / 9.47% | 9.33% / 4.41% / 7.28% | 2.73% / 8.60% / 12.45%

For readability and reproducibility purposes, we also include a PyTorch snippet of the network used for learning representations from the UK Biobank:

import torch.nn as nn

class SPHR(nn.Module):
    def __init__(self, input_dim: int = 64, output_dim: int = 32):
        super().__init__()
        self.inp_dim = input_dim
        self.out_dim = output_dim
        self.nonlinear_net = nn.Sequential(
            nn.Linear(self.inp_dim, 512),
            nn.Dropout(p=0.1),
            nn.PReLU(),
            nn.Linear(512, 256),
            nn.Dropout(p=0.1),
            nn.PReLU(),
            nn.Linear(256, self.out_dim),
            nn.PReLU()
        )

    def forward_oneSample(self, input_tensor):
        # Useful for the forward method call and for inference.
        return self.nonlinear_net(input_tensor)

    def forward(self, positive, anchor, negative):
        # Forward method for training: embed all three triplet members.
        return (self.forward_oneSample(positive),
                self.forward_oneSample(anchor),
                self.forward_oneSample(negative))

Listing 2: SPHR's network architecture.

We train SPHR by minimizing our proposed NPLB objective, Eq. (3), using the Adam optimizer for 1000 epochs with lr = 0.001, and employ an exponential learning rate decay (γ = 0.95) to decrease the learning rate after every 50 epochs. We set the margin hyperparameter to α = 1. In all experiments, the triplet selection was done in an offline manner using the most common triplet selection scheme (e.g. see https://www.kaggle.com/code/hirotaka0122/tripletloss-with-pytorch?scriptVersionId=26699660&cellId=6).

K.2 CIFAR-10 EMBEDDING NETWORK AND EXPERIMENTAL SETUP

To further demonstrate the improvements of NPLB on representation learning, we benchmarked various triplet losses on CIFAR10 as well. For this experiment, we trained a randomly-initialized VGG13 (Simonyan & Zisserman, 2015) model (not pre-trained) on CIFAR10 to produce embeddings $m \in \mathbb{R}^{128}$ for 200 epochs, using the Adam optimizer with lr = 0.001 and a decaying schedule (similar to SPHR's optimization setting, as described in the main manuscript). We note that the architecture used in this experiment is identical to a traditional classification VGG model, with the difference being in the "classification" layer, which is re-purposed for producing 128-dimensional embeddings. CIFAR-10 images were normalized using the standard CIFAR-10 transformation and were not augmented during training (i.e. we did not use any augmentations in training the model).

Table A8: Studying the effects of different augmentation levels on classification as a proxy for all downstream tasks. We followed the same procedure as in all other classification experiments (including model parameters). The results for 1× augmentation are omitted since they are very similar to no augmentation.
| No Augmentation | 3× | 5× | 10×
Females: Multi-Label | 0.5730 ± 0.002 | 0.6642 ± 0.002 | 0.6619 ± 0.003 | 0.6584 ± 0.003
Males: Multi-Label | 0.6247 ± 0.005 | 0.7845 ± 0.003 | 0.7852 ± 0.004 | 0.7685 ± 0.002

K.3 MNIST EMBEDDING NETWORK

For ease of readability and reproducibility, we provide the architecture used for MNIST as a PyTorch snippet:

import torch.nn as nn

class MNIST_Network(nn.Module):
    def __init__(self, embedding_dimension=2):
        super().__init__()
        self.conv_net = nn.Sequential(
            nn.Conv2d(1, 32, 5),
            nn.PReLU(),
            nn.MaxPool2d(2, stride=2),
            nn.Dropout(0.3),
            nn.Conv2d(32, 64, 5),
            nn.PReLU(),
            nn.MaxPool2d(2, stride=2),
            nn.Dropout(0.3)
        )

        self.feedForward_net = nn.Sequential(
            nn.Linear(64 * 4 * 4, 512),
            nn.PReLU(),
            nn.Linear(512, embedding_dimension)
        )

    def forward(self, input_tensor):
        conv_output = self.conv_net(input_tensor)
        conv_output = conv_output.view(-1, 64 * 4 * 4)
        return self.feedForward_net(conv_output)

Listing 3: Network architecture used for validation on MNIST.

We train the network for 50 epochs using the Adam optimizer with lr = 0.001. We set the margin hyperparameter to α = 1.

K.4 FASHION MNIST EMBEDDING NETWORK

For readability and reproducibility purposes, we provide the architecture used for Fashion MNIST as a PyTorch snippet:

import torch.nn as nn

class FMNIST_Network(nn.Module):
    def __init__(self, embedding_dimension=128):
        super().__init__()
        self.conv_net = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3),
            nn.PReLU(),
            nn.MaxPool2d(2, stride=2),
            nn.Dropout(0.1),
            nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5),
            nn.PReLU(),
            nn.MaxPool2d(2, stride=1),
            nn.Dropout(0.2),
            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5),
            nn.AvgPool2d(kernel_size=1),
            nn.PReLU()
        )

        self.feedForward_net = nn.Sequential(
            nn.Linear(64 * 4 * 4, 512),
            nn.PReLU(),
            nn.Linear(512, embedding_dimension)
        )

    def forward(self, input_tensor):
        conv_output = self.conv_net(input_tensor)
        conv_output = conv_output.view(-1, 64 * 4 * 4)
        return self.feedForward_net(conv_output)

Listing 4: Network architecture used for validation on Fashion MNIST.

We train the network for 50 epochs using the Adam optimizer with lr = 0.001. We set the margin hyperparameter to α = 1.

K.5 CLASSIFICATION MODELS

The parameters for NN for EHR were chosen based on Chen et al. (2020a). The "main" parameters for KNN and XGBoost were chosen through a randomized grid search, while the parameters for LDA were unchanged. We specify the parameters that were identified through grid search in the KNN and XGBoost sections.

K.5.1 NN FOR EHR

We follow the work of Chen et al. (2020a) and construct a feed-forward neural network with an additive attention mechanism in the first layer. As in Chen et al., we choose the learning rate to be lr = 0.001 with an L2 penalty coefficient λ = 0.001, and train the model for 100 epochs.

K.5.2 KNNS

We utilized the Scikit-Learn implementation of K-Nearest Neighbors. The optimal number of neighbors was found with a grid search from 10 to 100 neighbors (increasing by 10). For the sake of reproducibility, we provide the parameters with scikit-learn terminology. For more information about the meaning of each parameter (and value), we refer the reviewers to the online documentation: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html.
• Algorithm: Auto
• Leaf Size: 30
• Metric: Minkowski
• Metric Params: None
• n Jobs: -1
• n Neighbors: 50
• p: 2
• Weights: Uniform

K.5.3 LDA

We employed the Scikit-Learn implementation of Linear Discriminant Analysis (LDA) with the default parameters.

K.5.4 XGBOOST

We utilized the official implementation of XGBoost, located at: https://xgboost.readthedocs.io/en/stable/. We optimized model performance through grid search for the learning rate (0.01 to 0.2, increasing by 0.01), max depth (from 1 to 10, increasing by 1) and number of estimators (from 10 to 200, increasing by 10). For the sake of reproducibility, we provide the parameters using the nomenclature of the online documentation.

• Objective: Binary-Logistic
• Use Label Encoder: False
• Base Score: 0.5
• Booster: gbtree
• Callbacks: None
• colsample_by_level: 1
• colsample_by_node: 1
• colsample_by_tree: 1
• Early Stopping Rounds: None
• Enable Categorical: False
• Evaluation Metric: None
• γ (gamma): 0
• GPU ID: -1
• Grow Policy: depthwise
• Importance Type: None
• Interaction Constraints: " "
• Learning Rate: 0.05
• Max Bin: 256
• Max Categorical to Onehot: 4
• Max Delta Step: 0
• Max Depth: 4
• Max Leaves: 0
• Minimum Child Weight: 1
• Missing: NaN
• Monotone Constraints: '()'
• n Estimators: 50
• n Jobs: -1
• Number of Parallel Trees: 1
• Predictor: Auto
• Random State: 0
• reg_alpha: 0
• reg_lambda: 1
• Sampling Method: Uniform
• scale_pos_weight: 1
• Subsample: 1
• Tree Method: Exact
• Validate Parameters: 1
• Verbosity: None

APPENDIX L: DATA AUGMENTATION SCHEME

Algorithm 1: Proposed augmentation of Electronic Health Records data. The proposed strategy ensures that each augmented feature falls between pre-determined ranges for the appropriate gender and age group, which are crucial in diagnosing conditions.

Require: X_dict: A mapping between gender/age condition groups and raw bloodwork and lifestyle matrices
Require: cond_list: A list of all present conditions  # e.g. bona fide healthy, diabetic, etc.
Require: U: A matrix storing the upper bounds for feature_j given condition_i
Require: L: A matrix storing the lower bounds for feature_j given condition_i
1: X̃_dict ← Zeros(X_dict)
2: for condition_i in cond_list do
3:     for feature_j in X_dict[condition_i] do
4:         µ ← Mean(feature_j)
5:         σ ← STD(feature_j)  # standard deviation
6:         z ← −10^16  # initialize outside all valid ranges
7:         while z ∉ [L_ij, U_ij] do
8:             z ← Sample(N(µ, σ))  # sample a value from the Gaussian distribution
9:         end while
10:        X̃_dict[condition_i][feature_j] ← z  # augmented feature
11:    end for
12: end for

APPENDIX M: COMPLETE LIST OF FEATURES

M.1 UKB FID TO NAME MAPPINGS FOR FEMALE PATIENTS

Lab Metrics
21003: Age
30160: Basophill count
30220: Basophill percentage
30150: Eosinophill count
30210: Eosinophill percentage
30030: Haematocrit percentage
30020: Haemoglobin concentration
30300: High light scatter reticulocyte count
30290: High light scatter reticulocyte percentage
30280: Immature reticulocyte fraction
30120: Lymphocyte count
30180: Lymphocyte percentage
30050: Mean corpuscular haemoglobin
30060: Mean corpuscular haemoglobin concentration
30040: Mean corpuscular volume
30100: Mean platelet (thrombocyte) volume
30260: Mean reticulocyte volume
30270: Mean sphered cell volume
30080: Platelet count
30110: Platelet distribution width
30010: Red blood cell (erythrocyte) count
30070: Red blood cell (erythrocyte) distribution width
30250: Reticulocyte count
30240: Reticulocyte percentage
30000: White blood cell (leukocyte) count
30620: Alanine aminotransferase
30600: Albumin
30610: Alkaline phosphatase
30630: Apolipoprotein A
30640: Apolipoprotein B
30650: Aspartate aminotransferase
307
1. What is the focus and contribution of the paper on deep metric learning? 2. What are the strengths of the proposed approach, particularly in its motivation and derivation? 3. What are the weaknesses of the paper regarding its clarity, hyperparameters, and empirical evaluations? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
A variation of a triplet loss for deep metric learning, dubbed NPLB, is proposed. It is inspired by the condition that the distance between positive and negative examples should be bigger than the distance between anchor and positive. It is illustrated that optimizing such an objective leads to more compact clusters. Empirical results on benchmark datasets (MNIST and FashionMNIST) and the Biobank dataset show increased prediction accuracy (weighted F1 score) when using embeddings from NPLB.

Strengths And Weaknesses
Strengths
Motivation for, and derivation of, the new triplet objective is clear.
Empirical results for cluster density and separability, as well as the embeddings' utility in downstream prediction tasks, look compelling.

Weaknesses
At moments it is not easy to follow the flow of the paper. The last paragraph of the Introduction appears to have redundancy in summarizing the main 3 aspects of the contribution: novel variation of the triplet objective, embeddings' utility for classification tasks, and distance-based risk indicators.
The claim that this approach does not require additional hyperparameters, as it is based on distances, might be misleading. In Equation 3, the added "regularization term", i.e. the square of the distance, could have a hyperparameter multiplier to tune its impact on the overall loss; in this case it is simply set to 1 (similarly to how it is set to 0.1 in the MDR case).

Minor suggestions
Equation 1 has an extra right bracket.

Clarity, Quality, Novelty And Reproducibility
The paper seems to contribute incrementally to the spectrum of triplet loss objectives. The empirical evaluation setting could have been explained a bit more clearly. For example, it is not clear whether all the test subjects are covered by the classes "Normal", "Low Risk" and "High Risk" in Section 5.3. Why is the dataset split based on gender and presented separately for females (and males in the appendix)? Also, why is the DeepPatient transformation R^n -> R^n and not R^n -> R^d like the others? Given that the repository will be shared, the results seem reproducible.
ICLR
Title
No Pairs Left Behind: Improving Metric Learning with Regularized Triplet Objective

Abstract
We propose a novel formulation of the triplet objective function that improves metric learning without additional sample mining or overhead costs. Our approach aims to explicitly regularize the distance between the positive and negative samples in a triplet with respect to the anchor-negative distance. As an initial validation, we show that our method (called No Pairs Left Behind [NPLB]) improves upon the traditional and current state-of-the-art triplet objective formulations on standard benchmark datasets. To show the effectiveness and potential of NPLB on real-world complex data, we evaluate our approach on a large-scale healthcare dataset (UK Biobank), demonstrating that the embeddings learned by our model significantly outperform all other current representations on tested downstream tasks. Additionally, we provide a new model-agnostic single-time health risk definition that, when used in tandem with the learned representations, achieves the most accurate prediction of subjects’ future health complications. Our results indicate that NPLB is a simple, yet effective framework for improving existing deep metric learning models, showcasing the potential implications of metric learning in more complex applications, especially in the biological and healthcare domains. Our code package as well as tutorial notebooks are available on our public repository: <revealed after the double blind reviews>.

1 INTRODUCTION
Metric learning is the task of encoding similarity-based embeddings where similar samples are mapped closer in space and dissimilar ones afar (Xing et al., 2002; Wang et al., 2019; Roth et al., 2020). Deep metric learning (DML) has shown success in many domains, including computer vision (Hermans et al., 2017; Vinyals et al., 2016; Wang et al., 2018b) and natural language processing (Reimers & Gurevych, 2019; Mueller & Thyagarajan, 2016; Benajiba et al., 2019). Many DML models utilize paired samples to learn useful embeddings based on distance comparisons. The most common architectures among these techniques are the Siamese (Bromley et al., 1993) and triplet networks (Hoffer & Ailon, 2015). The main components of these models are: (1) the strategy for constructing training tuples, and (2) the objective that the model must minimize. Though many studies have focused on improving sampling strategies (Wu et al., 2017; Ge, 2018; Shrivastava et al., 2016; Kalantidis et al., 2020; Zhu et al., 2021), modifying the objective function has attracted less attention. Given that learning representations with triplets very often yields better results than learning with pairs using the same network (Hoffer & Ailon, 2015; Balntas et al., 2016), our work focuses on improving triplet-based DML through a simple yet effective modification of the traditional objective. Modifying DML loss functions often requires mining additional samples, identifying new quantities (e.g. identifying class centers iteratively throughout training (He et al., 2018)), or computing quantities with costly overheads (Balntas et al., 2016), which may limit their applications. In this work, we aim to provide an easy and intuitive modification of the traditional triplet loss that is motivated by metric learning on more complex datasets and the notion of density and uniformity of each class.
Our proposed variation of the triplet loss leverages all pairwise distances between existing pairs in traditional triplets (positive, negative, and anchor) to encourage denser clusters and better separability between classes. This allows for improving already existing triplet-based DML architectures using implementations in standard deep learning (DL) libraries (e.g. TensorFlow), enabling a wider usage of the methods and improvements presented in this work. Many ML algorithms are developed for and tested on datasets such as MNIST (LeCun, 1998) or ImageNet (Deng et al., 2009), which often lack the intricacies and nuances of data in other fields, such as health-related domains (Lee & Yoon, 2017). Unfortunately, this can have direct consequences when we try to understand how ML can help improve care for patients (e.g. diagnosis or prognosis). In this work, we demonstrate that DML algorithms can be effective in learning embeddings from complex healthcare datasets. We provide a novel DML objective function and show that our model’s learned embeddings improve downstream tasks, such as classifying subjects and predicting future health risk using a single time point. More specifically, we build upon the DML-learned embeddings to formulate a new mathematical definition of patient health risk using a single time point which, to the best of our knowledge, does not currently exist. To show the effectiveness of our model and health risk definition, we evaluate our methodology on a large-scale complex public dataset, the UK Biobank (UKB) (Bycroft et al., 2018), demonstrating the implications of our work for both healthcare and the ML community. In summary, our most important contributions can be described as follows. 1) We present a novel triplet objective function that improves model learning without any additional sample mining or overhead computational costs. 2) We demonstrate the effectiveness of our approach on a large-scale complex public dataset (UK Biobank) and on conventional benchmarking datasets (MNIST, Fashion MNIST (Xiao et al., 2017) and CIFAR10 (Krizhevsky, 2010)). This demonstrates the potential of DML in domains which traditionally may have been less considered. 3) We provide a novel definition of patient health risk from a single time point, demonstrating the real-world impact of our approach by predicting currently healthy subjects’ future risks using only a single lab visit, a challenging but crucial task in healthcare.

2 BACKGROUND AND RELATED WORK
Contrastive learning aims to minimize the distance between two samples if they belong to the same class (are similar). As a result, contrastive models require two samples to be inputted before calculating the loss and updating their parameters. This can be thought of as passing two samples to two parallel models with tied weights, hence the name Siamese or Twin networks (Bromley et al., 1993). Triplet networks (Hoffer & Ailon, 2015) build upon this idea to rank positive and negative samples based on an anchor value, thus requiring the model to produce mappings for all three before the optimization step (hence the name triplets). Modification of Triplet Loss: Due to their success and importance, triplet networks have attracted increasing attention in recent years.
Though the majority of proposed improvements focus on the sampling and selection of the triplets, some studies (Balntas et al., 2016; Zhao et al., 2019; Kim & Park, 2021; Nguyen et al., 2022) have proposed modifications of the traditional triplet loss proposed in Hoffer & Ailon (2015). Similar to our work, Multi-level Distance Regularization (MDR) (Kim & Park, 2021) seeks to regularize the DML loss function. MDR regularizes the pairwise distances between embedding vectors into multiple levels based on their similarity. The goal of MDR is to disturb the optimization of the pairwise distances among examples and to discourage positive pairs from getting too close and negative pairs from being too distant. A drawback of regularization methods is the choice of the hyperparameter that balances the regularization term, though adaptive balancing methods could be used (Chen et al., 2018; Heydari et al., 2019). Most related to our work, Balntas et al. (2016) modified the traditional objective by explicitly accounting for the distance between the positive and negative pairs (which the traditional triplet function does not consider), and applied their model to learn local feature descriptors using shallow convolutional neural networks. They introduce the idea of an "in-triplet hard negative", referring to the swap of the anchor and positive sample if the positive sample is closer to the negative sample than the anchor is, thus improving on the performance of traditional triplet networks (we refer to this approach as Distance Swap). Though this method uses the distance between the positive and negative samples to choose the anchor, it does not explicitly enforce the model to regularize the distance between the two, which was the main issue with the original formulation. Our work addresses this pitfall by using the notion of local density and uniformity (defined later in §3) to explicitly enforce the regularization of the distance between the positive and negative pairs using the distance between the anchors and the negatives. As a result, our approach ensures better inter-class separability while encouraging denser intra-class embeddings. In addition to MDR and Distance Swap, we benchmark our approach against three related and widely-used metric learning algorithms, namely LiftedStruct (Song et al., 2015), N-Pair Loss (Sohn, 2016), and InfoNCE (Oord et al., 2018a). Due to space constraints, and given the popularity of these methods, we provide an overview of these algorithms in Appendix E.

Deep Learned Embeddings for Healthcare: Recent years have seen an increase in the number of DL models for Electronic Health Records (EHR), with several methods aiming to produce rich embeddings to better represent patients (Rajkomar et al., 2018; Choi et al., 2016b; Tran et al., 2015; Nguyen et al., 2017; Choi et al., 2016a; Pham et al., 2017). Though most studies in this area consider temporal components, DeepPatient (Miotto et al., 2016) does not explicitly account for time, making it an appropriate model for comparison with our representation learning approach given our goal of predicting patients’ health risks using a single snapshot. DeepPatient is an unsupervised DL model that seeks to learn general deep representations by employing three stacks of denoising autoencoders that learn hierarchical regularities and dependencies through reconstructing a masked input of EHR features.
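For intuition, here is a minimal, generic sketch of one denoising-autoencoder layer of the kind DeepPatient stacks; this is our illustration, not the authors' implementation, and the masking probability, activations, and layer sizes are assumptions:

    import torch
    import torch.nn as nn

    class DenoisingAELayer(nn.Module):
        """One denoising autoencoder: corrupt the input by random masking,
        then reconstruct the uncorrupted input from the hidden code."""
        def __init__(self, in_dim, hidden_dim, mask_prob=0.05):
            super().__init__()
            self.mask_prob = mask_prob  # assumed corruption level
            self.encode = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
            self.decode = nn.Sequential(nn.Linear(hidden_dim, in_dim), nn.Sigmoid())

        def forward(self, x):
            mask = (torch.rand_like(x) > self.mask_prob).float()
            hidden = self.encode(x * mask)        # encode the corrupted input
            return hidden, self.decode(hidden)    # code and reconstruction

    # Stacking: train layer 1 on the raw features, layer 2 on layer 1's hidden
    # codes, and so on; the final hidden codes serve as the representation.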
We hypothesize that learning patient reconstructions alone (even with masked features) does not help discriminate between patients based on their similarities. We aim to address this by employing a deep metric learning approach that learns similarity-based embeddings.

Predicting Patients’ Future Health Risks: Assessing patients’ health risk using EHR remains a crucial, yet challenging task of epidemiology and public health (Li et al., 2015). An example of such challenges is clinically-silent conditions, where patients fall within "normal" or "borderline" ranges for specific known blood work markers while being at risk of developing chronic conditions and co-morbidities that will reduce quality of life and cause mortality later on (Li et al., 2015). Therefore, early and accurate assessment of health risk can tremendously improve patient care, especially for those who may appear "healthy" and do not show severe symptoms. Current approaches for assessing future health complications tie the definition of health risks to multiple time points (Hirooka et al., 2021; Chowdhury & Tomal, 2022; Razavian et al., 2016; Kamal et al., 2020; Cohen et al., 2021; Che et al., 2017). Despite the obvious appeal of such approaches, the use of many visits for modeling and defining risk simply ignores a large portion of patients who do not return for subsequent check-ups, especially those with lower incomes and those without adequate access to healthcare (Kullgren et al., 2010; Taani et al., 2020; Nishi et al., 2019). Given the importance of addressing these issues, we propose a mathematical definition (built upon DML) based on a single time point, which can be used to predict patient health risk from a single lab visit.

3 METHODS
Main Idea of No Pairs Left Behind (NPLB): The main idea behind our approach is to ensure that, during optimization, the distance between positive samples $p_i$ and negative samples $n_i$ is considered and regularized with respect to the anchors $a_i$ (i.e. explicitly making the distance $d(p_i, n_i)$ depend on $d(a_i, n_i)$). We visualize this idea in Fig. 1. The mathematical intuition behind our approach can be described by considering in-class local density and uniformity, as introduced in Rojas-Thomas & Santos (2021) as an unsupervised clustering evaluation metric. Given a metric learning model $\phi$, let the local density of a sample $p_i \in c_k$ be defined as
$$\mathrm{LD}(p_i) = \min_{p_j \in c_k,\, j \neq i} d(\phi(p_i), \phi(p_j)),$$
and let $\mathrm{AD}(c_k)$ be the average local density of all points in class $c_k$. An ideal operator $\phi$ would produce embeddings that are compact while well separated from other classes, i.e. the in-class embeddings are uniform. This notion of uniformity is proportional to the difference between the local and average density of each class:
$$\mathrm{Unif}(c_k) = \begin{cases} \sum_{i=1}^{|c_k|} \frac{|\mathrm{LD}(p_i) - \mathrm{AD}(c_k)|}{\mathrm{AD}(c_k) + \xi} & \text{if } |c_k| > 1 \\ 0 & \text{otherwise,} \end{cases}$$
for $0 < \xi \ll 1$. However, computing the density and uniformity of classes is only possible post hoc, once all labels are present, and is not feasible during training if the triplets are mined in a self-supervised manner. To reduce the complexity and allow for general use, we utilize proxies for the mentioned quantities to regularize the triplet objective using the notion of uniformity. We take the distance between positive and negative pairs as inversely proportional to the local density of a class.
Similarly, the distance between anchors and negative pairs is closely related to the average density, given that a triplet model maps positive pairs inside an $m$-ball around the anchor ($m$ being the margin). In this sense, the uniformity of a class is inversely proportional to $|d(\phi(p_i), \phi(n_i)) - d(\phi(a_i), \phi(n_i))|$.

NPLB Objective: Let $\phi(\cdot)$ denote an operator and $T$ be the set of triplets of the form $(p_i, a_i, n_i)$ (positive, anchor and negative tensors) sampled from a mini-batch $B$ of size $N$. For ease of notation, we write $\phi(q_i)$ as $\phi_q$. Given a margin $m$ (a hyperparameter), the traditional objective function for a triplet network is shown in Eq. (1):
$$\mathcal{L}_{\mathrm{Triplet}} = \frac{1}{N} \sum_{(p_i, a_i, n_i) \in T} \left[ d(\phi_a, \phi_p) - d(\phi_a, \phi_n) + m \right]_+ \quad (1)$$
with $[\cdot]_+ = \max\{\cdot, 0\}$ and $d(\cdot)$ being the Euclidean distance. Minimizing Eq. (1) only ensures that the negative samples fall outside of an $m$-ball around $a_i$, while bringing the positive sample $p_i$ inside this ball (illustrated in Fig. 1), satisfying $d(\phi_a, \phi_n) > d(\phi_a, \phi_p) + m$. However, this objective does not explicitly account for the distance between positive and negative samples, which can impede performance, especially when there exists high in-class variability. Motivated by our main idea of having denser and more uniform in-class embeddings, we add a simple regularization term to address the issues described above, as shown in Eq. (2):
$$\mathcal{L}_{\mathrm{NPLB}} = \frac{1}{N} \sum_{(p_i, a_i, n_i) \in T} \left[ d(\phi_a, \phi_p) - d(\phi_a, \phi_n) + m \right]_+ + \left[ d(\phi_p, \phi_n) - d(\phi_a, \phi_n) \right]^p, \quad (2)$$
where $p \in \mathbb{N}$ and NPLB refers to "No Pairs Left Behind." The regularization term in Eq. (2) enforces positive and negative samples to be roughly the same distance away as all other negative pairings, while still minimizing their distance to the anchor values. However, if not careful, this approach could result in the model learning to map $n_i$ such that $d(\phi_a, \phi_n) > \max\{d(\phi_a, \phi_p) + m,\ d(\phi_p, \phi_n)\}$, which would ignore the triplet term, resulting in a minimization problem with no lower bound¹. To avert such issues, we restrict $p = 2$ (or generally, $p \equiv 0 \pmod 2$), as in Eq. (3):
$$\mathcal{L}_{\mathrm{NPLB}} = \frac{1}{N} \sum_{(p_i, a_i, n_i) \in T} \left[ d(\phi_a, \phi_p) - d(\phi_a, \phi_n) + m \right]_+ + \left[ d(\phi_p, \phi_n) - d(\phi_a, \phi_n) \right]^2. \quad (3)$$
Note that this formulation does not require mining any additional samples or complex computations, since it uses only the existing samples to regularize the embedded space. Moreover,
$$\mathcal{L}_{\mathrm{NPLB}} = 0 \implies -\left[ d(\phi_p, \phi_n) - d(\phi_a, \phi_n) \right]^2 = \left[ d(\phi_a, \phi_p) - d(\phi_a, \phi_n) + m \right]_+$$
which, considering only the real domain, is possible if and only if $d(\phi_p, \phi_n) = d(\phi_a, \phi_n)$ and $d(\phi_a, \phi_n) \geq d(\phi_a, \phi_p) + m$, explicitly enforcing separation between negative and positive pairs.

¹The mentioned pitfall can be realized by taking $p = 1$, i.e. $\mathcal{L} = \frac{1}{N} \sum_{(p_i, a_i, n_i) \in T} \left[ d(\phi_a, \phi_p) - d(\phi_a, \phi_n) + m \right]_+ + \left[ d(\phi_p, \phi_n) - d(\phi_a, \phi_n) \right]$. In this case, the model can learn to map $n_i$ and $a_i$ such that $d(\phi_a, \phi_n) > C$, where $C = \max\{d(\phi_p, \phi_n),\ d(\phi_a, \phi_p) + m\}$, resulting in $\mathcal{L} < 0$.

4 VALIDATION OF NPLB ON STANDARD DATASETS
Prior to testing our methodology on healthcare data, we validate our derivations and intuition on common benchmark datasets, namely MNIST, Fashion MNIST and CIFAR10. To assess the improvement gains from the proposed objective, we refrained from using more advanced triplet construction techniques and followed the most common approach of constructing triplets offline using the labels. We utilized the same architecture and training settings for all experiments, with the only difference per dataset being the objective function (see Appendix K for details on each architecture).
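As an illustration of this offline, label-based triplet construction, a minimal sketch (function and variable names are ours; it assumes every class has at least two samples):

    import numpy as np

    def make_triplets(y, n_triplets, seed=0):
        """Randomly build (anchor, positive, negative) index triplets from labels y."""
        rng = np.random.default_rng(seed)
        by_class = {c: np.flatnonzero(y == c) for c in np.unique(y)}
        classes = np.array(list(by_class))
        triplets = []
        for _ in range(n_triplets):
            c_pos, c_neg = rng.choice(classes, size=2, replace=False)
            a, p = rng.choice(by_class[c_pos], size=2, replace=False)  # same class
            n = rng.choice(by_class[c_neg])                            # different class
            triplets.append((a, p, n))
        return np.asarray(triplets)  # each row indexes (anchor, positive, negative)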
After training, we evaluated our approach quantitatively by assessing the classification accuracy of embeddings produced by optimizing the traditional triplet, Distance Swap, and our proposed NPLB objectives. The results for each dataset are presented in Table 1, showing that our approach improves classification. We also assessed the embeddings qualitatively: given the simplicity of MNIST, we designed our model to produce two-dimensional embeddings, which we visualized directly. For Fashion MNIST and CIFAR10, we generated embeddings in R^64 and R^128, respectively, and used Uniform Manifold Approximation and Projection (UMAP) (McInnes et al., 2018) to reduce the dimensions for visualization, as shown in Fig. 2. Our results show that networks trained on the NPLB objective produce embeddings that are denser and well separated in space, as desired.

5 IMPROVING PATIENT REPRESENTATION LEARNING
In this section, we aim to demonstrate the potential and implications of our approach on a more complex dataset in three steps: First, we show that deep metric learning improves upon current state-of-the-art patient embedding models (§5.1). Next, we provide a comparison between NPLB, Distance Swap and the traditional triplet loss formulations (§5.2). Lastly, we apply our methodology to predict the health risks of currently healthy subjects from a single time point (§5.3). We focus on presenting results for the female subjects due to space limitations. We note that results on male subjects are very similar to the female population, as presented in Appendix I.

5.1 DEEP METRIC LEARNING FOR BETTER PATIENT EMBEDDINGS
Healthcare datasets are considerably different from those in other domains. Given the restrictions on sharing health-related data (as stipulated by laws such as those defined under the Health Insurance Portability and Accountability Act - HIPAA), most DL-based models are developed and tested on proprietary in-house datasets, making comparisons and benchmarking a major hurdle (Evans, 2016). This is in contrast to other areas of ML, which have established standard datasets (e.g. ImageNet or GLUE (Wang et al., 2018a)). To show the feasibility of our approach, we present the effectiveness of our methodology on the United Kingdom Biobank (UKB) (Bycroft et al., 2018): a large-scale (∼500K subjects) complex public dataset, showing the potential of the UKB as an additional benchmark for developing and testing future DL models in the healthcare domain. The UKB contains deep genetic and phenotypic data from approximately 500,000 individuals aged 39-69 across the United Kingdom, collected over many years. We considered patients’ lab tests and their approximated activity levels (e.g. moderate or vigorous activity per week) as predictors (the complete list of features used is given in Appendix M), and their doctor-confirmed conditions and medication history for determining labels. Specifically, we labeled a patient as "unhealthy" if they have confirmed conditions or take medication for treating a condition, and otherwise labeled them as "apparently-healthy". We provide a step-by-step description of our data processing in Appendix F. A close analysis of the UKB data revealed large in-class variability of test ranges, even among those with no current or prior confirmed conditions (the "apparently-healthy" subjects). Moreover, the overall distributions of key metrics are very similar between the unhealthy and apparently-healthy patients (visualized in Appendix G).
As a result, we hypothesized that there exists a continuum among patients’ health states, leading to our idea that a similarity-based learned embedding can represent subjects better than other representations for downstream tasks. This idea, in tandem with our assumption of intricate nonlinear relationships among features, naturally motivated our approach of deep metric learning: our goal is to train a model that learns a metric for separating patients in space, based on their similarities and current confirmed conditions (labels). Due to our assumptions, we used the apparently-healthy patients initially as anchor points between the two ends of the continuum (the very unhealthy and the healthy). However, this formulation necessitates identifying a more "reliable" healthy group, often referred to as the bona fide healthy (BFH) group (Cohen et al., 2021).² To find the BFH population, we considered all patients whose key lab tests for common conditions fall within the clinically-normal values. These markers are: Total Cholesterol, HDL Cholesterol, LDL Cholesterol, Triglycerides, Fasting Glucose, HbA1c, and C-Reactive Protein; we refer to this set of metrics as the P0 metrics and provide the traditional "normal" clinical ranges in Appendix H. It is important to note that the BFH population is much smaller than the apparently-healthy group (∼6% and ∼5% of the female and male populations, respectively). To address this issue and to keep DML as the main focus, we implemented a simple yet intuitive rejection-based sampling to generate synthetic BFH patients, though more sophisticated methods could be employed in future work. As in any other rejection-based sampling, and given that lab results often follow a Gaussian distribution (Whyte & Kelly, 2018), we assumed that each feature follows a distribution N(µ_x, σ_x), where µ_x and σ_x denote the empirical mean and standard deviation of feature x_i for all patients. Since BFH patients are selected if their P0 biosignals fall within the clinically-normal lab ranges, we used the bounds of the clinically-normal range as the accept/reject criteria. Our simple rejection-based sampling scheme is presented in Appendix L, and a code sketch of it is shown below.

²Although it is possible to further divide each group (e.g. based on conditions), we chose to keep the patients in three very general groups to show the feasibility of our approach in various health-related domains.
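A minimal sketch of this rejection-based sampling (cf. Algorithm 1 in Appendix L), assuming per-feature arrays of observed BFH values and the clinically-normal bounds from Appendix H; all names and the example bounds are illustrative:

    import numpy as np

    def augment_bfh(features, bounds, n_new, seed=0):
        """Rejection-sample synthetic bona fide healthy (BFH) patients.

        features: dict name -> 1-D array of observed BFH values for that feature.
        bounds:   dict name -> (lower, upper) clinically-normal range.
        """
        rng = np.random.default_rng(seed)
        synthetic = {}
        for name, values in features.items():
            mu, sigma = values.mean(), values.std()
            lo, hi = bounds[name]
            samples = []
            while len(samples) < n_new:
                z = rng.normal(mu, sigma)   # draw from N(mu, sigma)
                if lo <= z <= hi:           # accept only in-range draws
                    samples.append(z)
            synthetic[name] = np.array(samples)
        return synthetic

    # Example call with an illustrative range (see Appendix H for the real ones):
    # augment_bfh({"HbA1c": hba1c_values}, {"HbA1c": (20.0, 42.0)}, n_new=100)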
Training Procedure and Model Architecture: Before training, we split the data 70% : 30% for training and testing. In the training partition, we augment the bona fide population 3-fold and then generate 100K triplets of the form (a_i, p_i, n_i) randomly in an offline manner. We chose to generate only 100K triplets in order to reduce training time and demonstrate the capabilities of our approach for smaller datasets. Note that in an unsupervised setting, these triplets would need to be generated online via negative sample mining, but this is out of scope for this work given that we have labels a priori. Our model consists of three hidden layers with two probabilistic dropout layers in between and Parametric Rectified Linear Unit (PReLU) (He et al., 2015) nonlinear activations. We present a visual representation of our architecture and its dimensions in Fig. A5. We optimize the weights by minimizing our proposed NPLB objective, Eq. (3), using Adam (Kingma & Ba, 2014) for 1000 epochs with lr = 0.001, employ an exponential learning rate decay (γ = 0.95) to decrease the learning rate after every 50 epochs, and set the triplet margin to m = 1. For simplicity, we will refer to our model as SPHR (Similarity-based Patient Risk modeling). In order to test the true capability of our model, all evaluations are performed on the non-augmented data.

Results: Similar to §4, we evaluated our deep-learned embeddings on their improvements for binary (unhealthy or apparently-healthy) and multi-class (unhealthy, apparently-healthy or bona fide healthy) classification tasks. The idea here is that if SPHR has learned to separate patients based on their conditions and similarities, then training classifiers on SPHR-produced embeddings should show improvements compared to raw data. We trained four classifiers (k-nearest neighbors (KNN), Linear Discriminant Analysis (LDA), a neural network (NN) for EHR (Chen et al., 2020a), and XGBoost (Chen & Guestrin, 2016)) on raw data (not transformed) and on other common transformations. These transformations include a linear transformation (Principal Component Analysis [PCA]), a non-linear transformation (Diffusion Maps (Coifman & Lafon, 2006) [DiffMap]), and the current state-of-the-art nonlinear transformation, DeepPatient. DeepPatient, PCA, DiffMap, and SPHR are maps R^n → R^d, where n, d ∈ N denote the initial number of features and the (reduced) embedding dimension, respectively; we set d = 32 for the baselines in order to match SPHR's dimensionality, though various choices of d yielded similar results. We present these results in Table 2, comparing the classification weighted F1 score for models trained on raw EHR and on linear and nonlinear transformations. We also evaluate the separability qualitatively using UMAP, as shown in Fig. A1. In all tested cases, our model significantly outperforms all other transformations, demonstrating the effectiveness of DML in better representing patients from EHR.

5.2 NPLB SIGNIFICANTLY IMPROVES LEARNING ON COMPLEX DATA
One of our main motivations for modifying the triplet objective was to improve model performance on more complex datasets with larger in-class variability. To evaluate the improvements provided by our NPLB objective, we perform the same analysis as in §4, but this time on the UKB data. Table 3 demonstrates the significant improvement made by our simple modification to the traditional triplet loss, further validating our approach and formulation experimentally.

5.3 PREDICTING HEALTH RISKS FROM A SINGLE LAB VISIT
Definition of Single-Time Health Risk: Predicting patients’ future health risks is a challenging task, especially when using only a single lab visit. As described in §2, all current models use multiple assessments for predicting the health risk of a patient; however, these approaches ignore a large portion of the population who do not return for additional check-ups. Motivated by the definition of risk in other fields (e.g. the risk of re-identification of anonymized data (Ito & Kikuchi, 2018)), we provide a simple and intuitive distance-based definition of health risk that addresses the mentioned issues and is well suited for DML embeddings. Given the simplicity of our definition and due to space constraints, we describe the definition below and outline the mathematical framework in Appendix B.
We define the health distance as the Euclidean distance between a subject and the reference bona fide healthy (BFH) subject. Many studies have shown the large discrepancy of lab metrics among different age groups and genders (Cohen et al., 2021). To account for these known differences, we identify a reference vector, which is the median BFH subject from each age group per gender. Moreover, for simplicity and interpretability, we define health risk as discrete groups using the known BFH population: for each stratification g (age and sex), we identify two BFH subjects who are at the 2.5th and 97.5th percentiles (giving us the inner 95% of the distribution) and calculate their distances to the corresponding reference vector. This gives us a distance interval $[t^g_{2.5}, t^g_{97.5}]$. In a group g, any new patient whose distance to the reference vector falls inside the corresponding $[t^g_{2.5}, t^g_{97.5}]$ is considered "Normal". Similarly, we identify the $[t^g_{1}, t^g_{99}]$ intervals (corresponding to the inner 98% of the BFH group); any new patient whose health distance is within $[t^g_{1}, t^g_{99}]$ but not in $[t^g_{2.5}, t^g_{97.5}]$ is considered to be in the "Lower Risk" (LR) group. Lastly, any patient with a health distance outside of these intervals is considered to have "Higher Risk" (HR). The UKB data is a good candidate for predicting potential health risks, given that it includes subsequent follow-ups where a subset of patients is invited for a repeat assessment. The first follow-up was conducted in 2012-2013 and included approximately 20,000 individuals (Littlejohns et al., 2019) (a 25× reduction, with many measurements missing). Based on our goal of predicting risk from a single visit, we only include the patients’ first visits for modeling, and use the 2012-2013 follow-ups for evaluating the predictions.

Results: We utilized the single-time health risk definition to predict a patient’s future potential for health complications. To demonstrate the versatility of our approach, we predicted general health risks that were used for all available health conditions (namely Cancer, Diabetes and Other Serious Conditions); however, we hypothesize that constraining health risks based on specific conditions would improve risk predictions. We considered five methods for assigning a risk group to each patient: (i) Euclidean distance on raw data (preprocessed but not transformed); (ii) Mahalanobis distance on preprocessed data; (iii) Euclidean distance on the key metrics (P0) for the available conditions (described in §5.1), thereby hand-crafting features and reducing dimensionality in order to achieve an upper-bound performance for most traditional methods (though this will not be possible for all diseases). The last two methods we consider are deep representation learning methods: (iv) DeepPatient and (v) SPHR embeddings (our proposed model). We use the Euclidean metric for calculating the distance between the deep-learned representations of (iv) and (v). For all approaches, we assigned every patient to one of the three risk groups using biosignals from a single visit, and calculated the percentage of patients who developed a condition in the immediate next visit. Intuitively, patients who fall under the "Normal" group should have fewer confirmed cases compared to subjects in the "Lower Risk" or "Higher Risk" groups.
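A minimal sketch of this assignment rule for one age/sex stratum (names are ours; for simplicity, the interval endpoints are taken here as percentiles of the distance distribution, a slight simplification of Definition B.2 in Appendix B):

    import numpy as np

    def risk_group(bfh_embeddings, patient_embedding):
        """Assign Normal / Lower Risk / Higher Risk from a single visit."""
        ref = np.median(bfh_embeddings, axis=0)               # reference vector
        dists = np.linalg.norm(bfh_embeddings - ref, axis=1)  # BFH health distances
        t = {q: np.percentile(dists, q) for q in (1, 2.5, 97.5, 99)}
        s = np.linalg.norm(patient_embedding - ref)           # patient health distance
        if t[2.5] <= s <= t[97.5]:    # inner 95% of the BFH group
            return "Normal"
        if t[1] <= s <= t[99]:        # inner 98%, but outside the inner 95%
            return "Lower Risk"
        return "Higher Risk"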
Table 4 shows the results for the top three methods, with our approach consistently matching the intuitive criteria: among the five methods, SPHR-predicted patients in the Normal risk group have the fewest instances of developing future conditions, while those predicted as Higher Risk have the most (Table 4).

6 CONCLUSION AND DISCUSSION
We present a simple and intuitive variation of the traditional triplet loss (NPLB), which regularizes the distance between positive and negative samples based on the distance between anchor and negative pairs. To show the general applicability of our methods and as an initial validation step, we tested our model on three standard benchmarking datasets (MNIST, Fashion MNIST and CIFAR-10) and found that our NPLB-trained model produced better embeddings. To demonstrate the real-world impact of DML frameworks such as our proposed SPHR, we applied our methodology to the UKB to classify patients and predict future health risk for currently healthy patients, using only a single time point. Motivated by risk prediction in other domains, we provide a distance-based definition of health risk. Utilizing this definition, we partitioned patients into three health risk groups (Normal, Lower Risk and Higher Risk). Among all methods, SPHR-predicted Higher Risk healthy patients had the highest percentage of actually developing conditions in the next visit, while SPHR-predicted Normal patients had the lowest, which is desired. Although the main point of our work focused on modifying the objective function, a limitation of our work is the simple triplet sampling that we employed, particularly when applied to healthcare. We anticipate additional improvement gains from employing online triplet sampling or extending our work to be self-supervised (Chen et al., 2020b; Oord et al., 2018b; Wang et al., 2021). The implications of our work are threefold: (1) Our proposed objective has the potential of improving existing triplet-based models without requiring additional sample mining or computationally intensive operations. We anticipate that combining our work with existing triplet sampling can further improve model learning and results. (2) Models for predicting patients’ health risks are nascent and often require time-series data. Our experiments demonstrated the potential improvements gained by developing DML-based models for learning patient embeddings, which in turn can improve patient care. Our results show that more general representation learning models are valuable in pre-processing EHR data and producing deep-learned embeddings that can be used (or fine-tuned) for more specific downstream analyses. We believe additional analysis of the learned embedding space can prove useful for various tasks. For example, we show that there exists a relationship between distances in the embedded space and the time to develop a condition, which we present in Appendix C. The rapid growth of healthcare data, such as EHR, necessitates the use of large-scale algorithms that can analyze such data in a scalable manner (Evans, 2016). Currently, most applications of ML in healthcare are formulated for small-scale studies with proprietary data, or use the publicly-available MIMIC dataset (Miotto et al., 2016; Johnson et al., 2016), which is not as large-scale and complex as the UKB. As in our work, we believe that future DL models can benefit from using the UKB for development and benchmarking.
(3) Evaluating health risk based on a single lab visit can enable clinicians to flag high-risk patients early on, potentially reducing the number (and the scope) of costly tests and significantly improving care for the most vulnerable individuals in a population.

REPRODUCIBILITY
Our code package and tutorial notebooks are all publicly available on the authors’ GitHub at: <revealed after the double blind reviews>, and we will actively monitor the repository for any issues or improvements suggested by the users. Moreover, we have designed our Appendix to be a comprehensive guide for reproducing our results and experiments. Our Appendix includes all used features from the UK Biobank (for male and female patients), a complete list of model parameters (for classification models), a detailed definition of single-time health risk, as well as pseudocode and descriptions of the architectures we developed for our experiments.

ACKNOWLEDGMENTS
Will be added after double blind reviews.

Appendix
APPENDIX A RUNTIME AND SCALING ANALYSIS OF SPHR
To measure the scalability of SPHR, we generated a random dataset consisting of 100,000 samples with 64 linearly-independent (also known as "informative") features and 10 classes, resulting in a matrix X ∈ R^{100000×64}. From this dataset, we then randomly generated varying numbers of triplets following the strategy described in the main manuscript and in Hoffer & Ailon (2015), representing a computationally intensive case (as opposed to more intricate and faster mining schemes). The average of five mining runs is presented in the "Avg. Mining Time" column of Table A1. Next, we trained SPHR on the varying numbers of triplets five times using (1) a Google Compute Engine instance with 48 logical cores (CPU) and (2) a Google virtual machine equipped with one NVIDIA V100 GPU (referred to as GPU). The average training times for CPUs and GPUs are shown in Table A1.

Table A1: Average training time of the proposed deep learning model (SPHR). All experiments are done under the same computational settings. The times shown below are the average of 5 runs with identical settings.
Number of Triplets | Avg. CPU Training Time | Avg. GPU Training Time | Avg. Mining Time
1,000 | 8.28 Mins | 3.95 Mins | <0.01 Mins
5,000 | 15.07 Mins | 5.96 Mins | 0.01 Mins
10,000 | 31.02 Mins | 7.60 Mins | 0.12 Mins
50,000 | 76.24 Mins | 16.79 Mins | 0.65 Mins
100,000 | 133.26 Mins | 27.15 Mins | 1.27 Mins
500,000 | 512.75 Mins | 94.79 Mins | 5.58 Mins

APPENDIX B MATHEMATICAL DEFINITION OF SINGLE-TIME HEALTH RISK
In this section, we aim to provide a mathematical definition of health risk that can measure similarity between a new cohort and an existing bona fide healthy population without requiring temporal data. This definition is inspired by the formulation of risk in other domains, such as defining the risk of re-identification of anonymized data (Ito & Kikuchi, 2018). We first define the notion of a "health distance", and use that to formulate threshold intervals which enable us to define health risk as a mapping between continuous values and discrete risk groups.
Definition Appendix B.1. Given a space X, a bona fide healthy population distribution B and a new patient p (all in X), the health score $s_q$ of p is: $s_q(p) = d(P_q(B), p)$, where $P_q(B)$ denotes the q-th percentile of B, and $d(\cdot)$ refers to a metric defined on X (potentially a pseudometric).
Definition Appendix B.1 provides a measure of distance between a new patient and an existing reference population, which can be used to define similarity.
For example, if $d(\cdot)$ is the Euclidean distance, then the similarity between patients x and y is $\mathrm{sim}(x, y) = \frac{1}{1 + d(x, y)}$. Next, using this notion of distance (and similarity), we define risk thresholds that allow for the grouping of patients.
Definition Appendix B.2. A threshold interval $I_q = [t^l_q, t^u_q]$ is defined by the distances between the vectors bounding the inner q percent of a distribution H and the median value. Let $n = (100 - q)/2$; then we have $I_q$ as:
$$I_q = [t^l_q, t^u_q] = [d(P_n(H), P_{50}(H)),\ d(P_{q+n}(H), P_{50}(H))].$$
Additionally, for the sake of interpretability and convenience, we can define health risk groups using known $I_q$ values, which we define below.
Definition Appendix B.3. Let $M : \mathbb{R}^d \to V \cup \{\eta\}$, where d denotes the number of features defining a patient, with $\eta$ a designated fallback group and V denoting pre-defined risk groups based on $k \in \mathbb{N}$ many intervals (i.e. $|V| = |I_q| = k$). Using the same notions of health distance $s_q$ and threshold interval $I_q$ as before, we define a patient p’s health risk group as:
$$M(p) = \begin{cases} V_q & \text{if } s_q(p) \in I_q \\ \eta & \text{otherwise.} \end{cases}$$

Figure A1: Qualitative assessment of our approach on UKB female patients. The NPLB-trained network better separates subjects, resulting in significant improvements in the classification of patients, as shown in Table 3. Note the continuum of patients for both metric learning techniques, especially among the apparently-healthy patients (in yellow).

We utilize our deep metric learning model and the definitions above in tandem to predict health risks. That is, we first produce embeddings for all patients using our learned nonlinear operator G, and then use the distance between the bona fide healthy (BFH) population and new patients to assign them a risk group. Mathematically, given the set of BFH populations for all groups, i.e. $B_{All} = \{B_{[36,45]}, B_{[46,50]}, B_{[51,55]}, B_{[56,60]}, B_{[61,65]}, B_{[66,75]}\}$, we take the reference value per age group to be $\tilde{r}_{age} = G(P_{50}(B_{age}))$, where $B_{age}$ denotes the BFH population for that age group. Then, using Definition Appendix B.2, we define:
$$N_{age} = I^{age}_{95} = [d(P_{2.5}(B_{age}), P_{50}(B_{age})),\ d(P_{97.5}(B_{age}), P_{50}(B_{age}))] \quad (A.4a)$$
$$LR_{age} = I^{age}_{98} = [d(P_{1}(B_{age}), P_{50}(B_{age})),\ d(P_{99}(B_{age}), P_{50}(B_{age}))]. \quad (A.4b)$$
Note that $N_{age} \subset LR_{age}$. Lastly, using the intervals defined in Eq. (A.4), we define the mapping M as shown in Eq. (A.5):
$$M_{age}(p) = \begin{cases} \text{Normal} & \text{if } s_n(G(p)) = d(G(p), \tilde{r}_{age}) \in N_{age} \\ \text{Lower Risk} & \text{if } s_n(G(p)) = d(G(p), \tilde{r}_{age}) \in LR_{age} \setminus N_{age} \\ \text{Higher Risk} & \text{otherwise.} \end{cases} \quad (A.5)$$

APPENDIX C PREDICTING PATIENT’S HEALTH RISK IN TIME
Given the performance of our DML model in classifying subjects and health risk assessment, we hypothesized that we can retrieve a relationship between spatial distance (in the embedded space) and a patient’s time to develop a condition using only a single lab visit. Let us assume that subjects start from a "healthy" point and move along a trajectory (among many trajectories) to ultimately become "unhealthy" (similar to the principle of entropy). In this setting, we hypothesized that our model maps patients in space based on their potential of moving along the trajectory of becoming unhealthy. To test our hypothesis, we designed an experiment that uses our distance-based definition of health risk and the NPLB embeddings to further stratify patients based on the immediacy of their health risk (time). More specifically, we investigated the correlation between the spatial locations of the currently-healthy patients in the embedded space and the time at which they develop a condition.
Similar to the health risk prediction experiment in the main manuscript, we computed the distance between each patient’s embedding and the corresponding reference value in the ultra-healthy population (refer to the main manuscript for details on this procedure). We then extracted all healthy patients at the time of the first visit who returned in 2012-2013 or in 2014 (and after) for reassessment or imaging visits. It is important to again note that there is a significant drop in the number of returning patients for subsequent visits. Among the retrieved patients, we calculated the number of individuals who developed Cancer, Diabetes or Other Serious Conditions. We found strong negative correlations between the calculated health score (distances) and the time of diagnosis for Other Serious Conditions (r = −0.72, p = 0.00042) and Diabetes (r = −0.64, p = 0.0071), with Cancer having the least correlation (r = −0.20, p = 0.025). Note that the lab tests used as predictors are associated with diagnosing metabolic health conditions and less associated with diagnosing cancer, which could explain the low correlation between health score and time of developing cancer. These results indicate that the metric learned by our model accounts for the immediacy of health risk, mapping patients who are at a higher risk of developing health conditions farther from those who are at a lower risk (hence the negative correlation).

APPENDIX D : NPLB CONDITION IN MORE DETAIL
In this section, we aim to take a closer look at the minimizer of our proposed objective. Using the same notation as in the main manuscript, we define the following variables for convenience:
$$\delta_+ \triangleq d(\phi_a, \phi_p), \quad \delta_- \triangleq d(\phi_a, \phi_n), \quad \rho \triangleq d(\phi_p, \phi_n).$$
With this notation, we can rewrite our proposed No Pairs Left Behind objective as:
$$\mathcal{L}_{\mathrm{NPLB}} = \frac{1}{N} \sum_{(p_i, a_i, n_i) \in T} [\delta_+ - \delta_- + m]_+ + (\rho - \delta_-)^2. \quad (A.7)$$
Note that since $[\delta_+ - \delta_- + m]_+ \geq 0$, $\mathcal{L}_{\mathrm{NPLB}} = 0$ if and only if each term in the summation is identically zero. This yields the following relation:
$$-(\rho - \delta_-)^2 = [\delta_+ - \delta_- + m]_+ \quad (A.8)$$
which, considering the real solutions, is only valid if $\rho = \delta_-$ and $\delta_- \geq \delta_+ + m$, and therefore $\rho \geq \delta_+ + m$. As a result, the regularization term enforces the distance between the positive and the negative samples to be at least $\delta_+ + m$, leading to denser clusters that are better separated from other classes in space. NPLB can be very easily implemented using existing implementations in standard libraries. As an example, we provide Pytorch-like pseudocode showing the implementation of our approach:

from typing import Callable, Optional
import torch

class NPLBLoss(torch.nn.Module):

    def __init__(self,
                 triplet_criterion: torch.nn.TripletMarginLoss,
                 metric: Optional[Callable[
                     [torch.Tensor, torch.Tensor],
                     torch.Tensor]] = torch.nn.functional.pairwise_distance):
        """Initializes the instance with a backbone triplet loss and a distance metric."""
        super().__init__()
        self.triplet = triplet_criterion
        self.metric = metric

    def forward(self, anchor: torch.Tensor, positive: torch.Tensor,
                negative: torch.Tensor) -> torch.Tensor:
        """Forward method of the NPLB loss."""
        # Traditional triplet as the first component of the loss function.
        triplet_loss = self.triplet(anchor, positive, negative)
        # Pairwise distances used by the regularization term.
        positive_to_negative = self.metric(positive, negative, keepdim=True)
        anchor_to_negative = self.metric(anchor, negative, keepdim=True)
        # Here we use 'mean' reduction, but it can be any kind
        # that the DL library supports.
        return triplet_loss + torch.mean(
            torch.pow(positive_to_negative - anchor_to_negative, 2))

Listing 1: Pytorch implementation of NPLB.

APPENDIX E LIFTEDSTRUCT, N-PAIRS LOSS, AND INFONCE
In this section, we provide a brief description of three popular deep metric learning models that are related to our work. We also describe the implementations used for these models, and present a complete comparison of all methods on all datasets in this work. In LiftedStruct (Song et al., 2015), the authors propose to take advantage of the full batch for comparing pairs, as opposed to traditional approaches where positive and negative pairs are pre-defined for an anchor. The authors describe their approach as "lifting" the vector of pairwise distances for each batch to the matrix of pairwise distances. N-Pair Loss (Sohn, 2016) is a generalization of the traditional triplet loss which aims to address the "slow" convergence of traditional triplet models by considering N − 1 negative examples instead of the single negative pair considered in the traditional approach. InfoNCE (Oord et al., 2018a) is a generalization of N-pair loss that is also known as the normalized temperature-scaled cross entropy loss (NT-Xent). This loss aims to maximize the agreement between positive samples. Both the N-Pair and InfoNCE losses relate to our work through their formulations of the metric learning objective, which are closely related to the triplet loss. To compare our approach against these algorithms, we leveraged the widely used Pytorch Metric Learning (PML) package (Musgrave et al., 2020). The complete results of our comparisons on all tested datasets are shown below in Table A2.

Table A2: Comparison of state-of-the-art (SOTA) triplet losses with our proposed objective function (complete version of Tables 1 and 3 of the main manuscript). The classifications were done on the embeddings using XGBoost for five different train-test splits, with the average weighted F1 score reported below. We note that the improved performance of the NPLB-trained model was consistent across different classifiers. The UKB results are for the multi-class classification.
Method | MNIST | FashionMNIST | CIFAR10 | UKB (Females) | UKB (Males)
Trad. Triplet Loss | 0.9859 ± 0.0009 | 0.9394 ± 0.001 | 0.8036 ± 0.028 | 0.5874 ± 0.001 | 0.6861 ± 0.003
N-Pair | 0.9863 ± 0.0003 | 0.9586 ± 0.003 | 0.7936 ± 0.034 | 0.6064 ± 0.004 | 0.6961 ± 0.002
LiftedStruct | 0.9853 ± 0.0007 | 0.9495 ± 0.002 | 0.7946 ± 0.041 | 0.5994 ± 0.003 | 0.6989 ± 0.004
MDR | 0.9886 ± 0.0003 | 0.9557 ± 0.003 | 0.8152 ± 0.027 | 0.6047 ± 0.005 | 0.6964 ± 0.002
InfoNCE | 0.9858 ± 0.0002 | 0.9581 ± 0.004 | 0.8039 ± 0.026 | 0.6103 ± 0.002 | 0.6816 ± 0.003
Distance Swap | 0.9891 ± 0.0003 | 0.9536 ± 0.001 | 0.8285 ± 0.022 | 0.5416 ± 0.004 | 0.6628 ± 0.002
NPLB (Ours) | 0.9954 ± 0.0003 | 0.9664 ± 0.001 | 0.8475 ± 0.025 | 0.6642 ± 0.002 | 0.7845 ± 0.003

Figure A2: Visualization of the data processing scheme described in Appendix F.

APPENDIX F UK BIOBANK DATA PROCESSING
Given the richness and complexity of the UKB and the scope of this work, we subset the data to include patients’ age and gender (demographics), numerous lab metrics (objective features), Metabolic Equivalent Task (MET) scores for vigorous/moderate activity, and self-reported hours of sleep (lifestyle) (complete list of features in Appendix M). Additionally, we leverage doctor-confirmed conditions as well as current medication to assess subjects’ health (used for assigning labels, not as predictors). After selecting these features, we use the following scheme to partition the subjects (illustrated in Fig. A2):
1. Ensure all features are at least 75% complete (i.e. at least 75% of patients have a non-null value for that feature).
2. Exclude subjects with any null values.
3. Split the resulting data according to biological sex (male or female), and perform quantile normalization (as in Cohen et al. (2021)).
4. For each sex, partition patients into an "unhealthy" population (those who have at least one doctor-confirmed health condition or take medication for treating a serious condition) and an "apparently healthy" population (those who do not have any serious health conditions and do not take medications for treating such illnesses). This data is used for training our neural network.
5. Split patients into six different age groups: each age group is constructed so that the number of patients in each group is on the same order, while the bias in the data is preserved (age groups are shown in Fig. A2). These age groups are used to determine age-specific references at the time of risk prediction.

APPENDIX G : SIMILARITY OF DISTRIBUTIONS FOR KEY METRICS AMONG PATIENTS
Although lab metric ranges seem very different at first glance, a look at the age-stratified test ranges shows similarity between the apparently healthy and unhealthy patients. Additionally, if we further stratify the data based on lifestyle, the similarities between the two health groups become even more evident. The additional filtering is as follows: we identify the median sleep hours per group, as well as "active" and "less active" individuals. We define "active" as someone who is moderately active for 150 minutes or vigorously active for 75 minutes per week. We use this additional filtering to further stratify patients in each age group. Below, in Fig. A3 and A4, we show examples of these similarities for two age groups chosen at random. These results motivated our approach of identifying the bona fide healthy population to be used as reference points.

Figure A3: Distribution similarity of key lab metrics between apparently-healthy and unhealthy female patients. We present the violin plots for Total Cholesterol (left) and LDL Cholesterol (right) for patients between the ages of 36-45 (chosen at random). This figure aims to illustrate the similarity between these distributions based on lifestyle and age. That is, by stratifying the patients based on their sleep and activity, we can see that health status alone cannot separate the patients well, given the similarity in the signals.

APPENDIX H NORMAL RANGES FOR KEY METRICS
Below we provide a list of the current "normal" lab ranges for the key metrics that determined the bona fide healthy population:
Key Biomarker | Gender Specific? | Range for Males | Range for Females | Reference
Total Cholesterol | No | ≤ 5.18 mmol/L | ≤ 5.18 mmol/L | Link to Reference 1, Link to Reference 2
HDL | Yes | ≥ 1 mmol/L | ≥ 1.3 mmol/L | Link to Reference 1, Link to Reference 2
LDL | No | ≤ 3.3 mmol/L | ≤ 3.3 mmol/L | Link to Reference 1, Link to Reference 2
Triglycerides | No | ≤ 1.7 mmol/L | ≤ 1.7 mmol/L | Link to Reference 1, Link to Reference 2
Fasting Glucose | No | ∈ [70, 100] mg/dL | ∈ [70, 100] mg/dL | Link to Reference 3
HbA1c | No | < 42 mmol/mol | < 42 mmol/mol | Link to Reference 4
C-Reactive Protein | No | < 10 mg/L | < 10 mg/L | Link to Reference 5

APPENDIX I : PERFORMANCE OF SPHR ON MALE SUBJECTS
In this section, we present the results of the experiments in the main manuscript (which were done for female patients) for the male patients. The classification results are presented in Tables A3, A4, A5, and A6, and the health risk predictions are shown in Table A7.
Figure A4: Distribution similarity of key lab metrics between apparently-healthy and unhealthy female patients. We present the violin plots for HDL Cholesterol (left) and Triglycerides (right) for patients between the ages of 55-60 (age group chosen at random). This figure aims to illustrate the similarity between these distributions based on lifestyle and age. That is, by stratifying the patients based on their sleep and activity, we can see that health status alone cannot separate the patients well, given the similarity in the signals.

Table A3: Comparison of binary classification performance (weighted F1 score) with various representations on the male patients. In this case, we consider the bona fide healthy patients as healthy patients and train each model to predict binary labels. We keep the same random seeds across different classifiers and, for the supervised methods, we randomly split the data into train and test (80-20) five times, and calculate the mean and standard deviation of the accuracies. Our model significantly improves the classification for all tested classifiers, demonstrating better separability in space compared to raw data and the state-of-the-art method (DeepPatient).
Model | Not-Transformed | ICA | PCA | DeepPatient | SPHR (Ours)
KNNs | 0.6200 ± 0.004 | 0.6167 ± 0.003 | 0.6077 ± 0.003 | 0.6224 ± 0.001 | 0.8163 ± 0.002
LDA | 0.6275 ± 0.005 | 0.6227 ± 0.004 | 0.6227 ± 0.003 | 0.6385 ± 0.002 | 0.8141 ± 0.002
NN for EHR | 0.5926 ± 0.014 | 0.6301 ± 0.018 | 0.6105 ± 0.021 | 0.6148 ± 0.032 | 0.8092 ± 0.011
XGBoost | 0.5975 ± 0.004 | 0.5804 ± 0.004 | 0.6157 ± 0.004 | 0.6101 ± 0.004 | 0.8160 ± 0.003

APPENDIX J EFFECTS OF AUGMENTATION ON DML
In order to evaluate the effect of augmenting the bona fide population and to determine the appropriate increase fold, we trained SPHR with different levels of augmentation, and evaluated the effect of each increase fold through multi-label classification performance. More specifically, we created new augmented datasets with no augmentation, 1×, 3×, 5× and 10× augmentation, generated the same number of triplets (as described previously), and trained SPHR. We evaluated the multi-label classification using the same approach and classifiers as before (described in the main manuscript) and present the results of XGBoost classification in Table A8. Based on our findings and considerations of computational efficiency, we chose 3× augmentation as the appropriate fold increase.

Table A4: Comparison of binary classification performance (micro F1 score) with various representations on the male patients. In this case, we consider the bona fide healthy patients as healthy patients and train each model to predict binary labels. We keep the same random seeds across different classifiers and, for the supervised methods, we randomly split the data into train and test (80-20) five times, and calculate the mean and standard deviation of the accuracies. Our model significantly improves the classification for all tested classifiers, demonstrating better separability in space compared to raw data and the state-of-the-art method (DeepPatient).
Table A4: Comparison of binary classification performance (micro F1 score) with various representations on the male patients. In this case, we consider the bona fide healthy patients as healthy patients and train each model to predict binary labels. We keep the same random seeds across different classifiers, and for the supervised methods, we randomly split the data into train and test (80-20) five times and calculate the mean and standard deviation of the accuracies. Our model significantly improves the classification for all tested classifiers, demonstrating better separability in space compared to the raw data and the state-of-the-art method (DeepPatient).

Model | Not-Transformed | ICA | PCA | DeepPatient | SPHR (Ours)
KNNs | 0.6490 ± 0.004 | 0.6480 ± 0.004 | 0.6469 ± 0.003 | 0.6639 ± 0.001 | 0.8185 ± 0.002
LDA | 0.6529 ± 0.004 | 0.6509 ± 0.002 | 0.6509 ± 0.002 | 0.6664 ± 0.003 | 0.8138 ± 0.002
NN for EHR | 0.6345 ± 0.003 | 0.6419 ± 0.003 | 0.6380 ± 0.003 | 0.6527 ± 0.005 | 0.8176 ± 0.004
XGBoost | 0.6573 ± 0.002 | 0.6488 ± 0.003 | 0.6469 ± 0.003 | 0.6701 ± 0.003 | 0.8180 ± 0.003

Table A5: Comparison of multi-label classification accuracy (weighted F1 score) with various representations on the male patients. We keep the same random seeds across different classifiers, and for the supervised methods, we randomly split the data into train and test (80-20) five times and calculate the mean and standard deviation of the accuracies. Our model significantly improves the classification for all tested classifiers, demonstrating better separability in space compared to the raw data and the state-of-the-art method (DeepPatient).

Model | Not-Transformed | PCA | ICA | DeepPatient | SPHR (Ours)
KNNs | 0.5852 ± 0.005 | 0.5820 ± 0.005 | 0.5734 ± 0.003 | 0.5834 ± 0.002 | 0.7819 ± 0.001
LDA | 0.6011 ± 0.004 | 0.5953 ± 0.003 | 0.5952 ± 0.002 | 0.6080 ± 0.003 | 0.7865 ± 0.002
NN for EHR | 0.5926 ± 0.004 | 0.5925 ± 0.004 | 0.5838 ± 0.003 | 0.5918 ± 0.001 | 0.7884 ± 0.005
XGBoost | 0.5439 ± 0.005 | 0.5583 ± 0.004 | 0.5587 ± 0.005 | 0.5896 ± 0.003 | 0.7845 ± 0.003

APPENDIX K: ADDITIONAL DETAILS ON MODEL ARCHITECTURES
K.1 SPHR'S NEURAL NETWORK
Figure A5: Architecture of SPHR. Our neural network is composed of three hidden layers, with probabilistic dropouts (p = 0.1) and nonlinear activations (PReLU) in between. In the figure above, b and n denote the number of patients and features, respectively, with d being the output dimension (in our case d = 32).

Table A6: Comparison of multi-label classification accuracy (micro F1 score) with various representations on the male patients. We keep the same random seeds across different classifiers, and for the supervised methods, we randomly split the data into train and test (80-20) five times and calculate the mean and standard deviation of the accuracies. Our model significantly improves the classification for all tested classifiers, demonstrating better separability in space compared to the raw data and the state-of-the-art method (DeepPatient).

Model | Not-Transformed | PCA | ICA | DeepPatient | SPHR (Ours)
KNNs | 0.6364 ± 0.004 | 0.6355 ± 0.004 | 0.6358 ± 0.004 | 0.6358 ± 0.001 | 0.7921 ± 0.001
LDA | 0.6405 ± 0.003 | 0.6393 ± 0.002 | 0.6392 ± 0.003 | 0.6383 ± 0.003 | 0.7811 ± 0.002
NN for EHR | 0.6345 ± 0.003 | 0.6342 ± 0.003 | 0.6342 ± 0.004 | 0.6438 ± 0.001 | 0.7930 ± 0.003
XGBoost | 0.6409 ± 0.003 | 0.6380 ± 0.003 | 0.6384 ± 0.003 | 0.6403 ± 0.002 | 0.7940 ± 0.003

Table A7: The percentage of apparently-healthy male patients who develop conditions in the next immediate visit within each predicted risk group. Among all methods (top three shown), SPHR-predicted Normal and High risk patients developed the fewest and most conditions, respectively, as expected.
Future Diagnosis | P0 (Not-Transformed): Normal / LR / HR | DeepPatient: Normal / LR / HR | SPHR (Ours): Normal / LR / HR
Cancer | 2.76% / 0.62% / 2.75% | 2.86% / 0.33% / <0.1% | 1.52% / 2.96% / 4.14%
Diabetes | 1.88% / 1.94% / 1.63% | 0.85% / 0.73% / 1.62% | 0.73% / 0.55% / 5.29%
Other Serious Cond. | 9.57% / 6.44% / 9.47% | 9.33% / 4.41% / 7.28% | 2.73% / 8.60% / 12.45%

For readability and reproducibility purposes, we also include a PyTorch snippet of the network used for learning representations from the UK Biobank:

import torch.nn as nn

class SPHR(nn.Module):
    def __init__(self, input_dim: int = 64, output_dim: int = 32):
        super().__init__()
        self.inp_dim = input_dim
        self.out_dim = output_dim
        self.nonlinear_net = nn.Sequential(
            nn.Linear(self.inp_dim, 512),
            nn.Dropout(p=0.1),
            nn.PReLU(),
            nn.Linear(512, 256),
            nn.Dropout(p=0.1),
            nn.PReLU(),
            nn.Linear(256, self.out_dim),
            nn.PReLU()
        )

    def forward_oneSample(self, input_tensor):
        # useful for the forward method call and for inference
        return self.nonlinear_net(input_tensor)

    def forward(self, positive, anchor, negative):
        # forward method for training: embed all three triplet members
        return (self.forward_oneSample(positive),
                self.forward_oneSample(anchor),
                self.forward_oneSample(negative))

Listing 2: SPHR's network architecture

We train SPHR by minimizing our proposed NPLB objective, Eq. (3), using the Adam optimizer for 1000 epochs with lr = 0.001, and employ an exponential learning rate decay (γ = 0.95) to decrease the learning rate after every 50 epochs. We set the margin hyperparameter to 1. In all experiments, the triplet selection was done in an offline manner using the most common triplet selection scheme (e.g., see https://www.kaggle.com/code/hirotaka0122/tripletloss-with-pytorch?scriptVersionId=26699660&cellId=6).
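For completeness, here is a sketch of the kind of offline, class-based random triplet selection the referenced notebook implements; the function and variable names are our own, and the exact sampling details of the original run may differ.

import numpy as np

def make_triplets(labels, n_triplets, seed=0):
    # For each triplet: pick a class, sample an anchor and a positive from
    # it, then sample a negative from any other class. All triplets are
    # generated once, before training (offline selection).
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    by_class = {c: np.flatnonzero(labels == c) for c in np.unique(labels)}
    classes = np.array(list(by_class))
    triplets = []
    for _ in range(n_triplets):
        c = rng.choice(classes)
        anchor, positive = rng.choice(by_class[c], size=2,
                                      replace=len(by_class[c]) < 2)
        negative = rng.choice(by_class[rng.choice(classes[classes != c])])
        triplets.append((anchor, positive, negative))
    return np.asarray(triplets)  # indices into the dataset, shape (n, 3)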
K.2 CIFAR-10 EMBEDDING NETWORK AND EXPERIMENTAL SETUP
To further demonstrate the improvements of NPLB on representation learning, we benchmarked various triplet losses on CIFAR-10 as well. For this experiment, we trained a randomly-initialized VGG13 (Simonyan & Zisserman, 2015) model (not pre-trained) on CIFAR-10 to produce embeddings m ∈ R^{128} for 200 epochs, using the Adam optimizer with lr = 0.001 and a decaying schedule (similar to SPHR's optimization setting, as described in the main manuscript). We note that the architecture used in this experiment is identical to a traditional classification VGG model, the only difference being that the "classification" layer is re-purposed for producing 128-dimensional embeddings. CIFAR-10 images were normalized using the standard CIFAR-10 transformation and were not augmented during training (i.e., we did not use any augmentations in training the model).

Table A8: Studying the effects of different augmentation levels on classification as a proxy for all downstream tasks. We followed the same procedure as all other classification experiments (including model parameters). The results for 1× augmentation are omitted since they are very similar to those with no augmentation.

| No Augmentation | 3× | 5× | 10×
Females: Multi-Label | 0.5730 ± 0.002 | 0.6642 ± 0.002 | 0.6619 ± 0.003 | 0.6584 ± 0.003
Males: Multi-Label | 0.6247 ± 0.005 | 0.7845 ± 0.003 | 0.7852 ± 0.004 | 0.7685 ± 0.002

K.3 MNIST EMBEDDING NETWORK
For ease of readability and reproducibility, we provide the architecture used for MNIST as a PyTorch snippet:

import torch.nn as nn

class MNIST_Network(nn.Module):
    def __init__(self, embedding_dimension=2):
        super().__init__()
        self.conv_net = nn.Sequential(
            nn.Conv2d(1, 32, 5),
            nn.PReLU(),
            nn.MaxPool2d(2, stride=2),
            nn.Dropout(0.3),
            nn.Conv2d(32, 64, 5),
            nn.PReLU(),
            nn.MaxPool2d(2, stride=2),
            nn.Dropout(0.3)
        )
        self.feedForward_net = nn.Sequential(
            nn.Linear(64 * 4 * 4, 512),
            nn.PReLU(),
            nn.Linear(512, embedding_dimension)
        )

    def forward(self, input_tensor):
        # 28x28 input -> 24 -> 12 -> 8 -> 4, hence 64*4*4 flattened features
        conv_output = self.conv_net(input_tensor)
        conv_output = conv_output.view(-1, 64 * 4 * 4)
        return self.feedForward_net(conv_output)

Listing 3: Network architecture used for validation on MNIST.

We train the network for 50 epochs using the Adam optimizer with lr = 0.001. We set the margin hyperparameter to 1.

K.4 FASHION MNIST EMBEDDING NETWORK
For readability and reproducibility purposes, we provide the architecture used for Fashion MNIST as a PyTorch snippet:

import torch.nn as nn

class FMNIST_Network(nn.Module):
    def __init__(self, embedding_dimension=128):
        super().__init__()
        self.conv_net = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3),
            nn.PReLU(),
            nn.MaxPool2d(2, stride=2),
            nn.Dropout(0.1),
            nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5),
            nn.PReLU(),
            nn.MaxPool2d(2, stride=1),
            nn.Dropout(0.2),
            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5),
            nn.AvgPool2d(kernel_size=1),
            nn.PReLU()
        )
        self.feedForward_net = nn.Sequential(
            nn.Linear(64 * 4 * 4, 512),
            nn.PReLU(),
            nn.Linear(512, embedding_dimension)
        )

    def forward(self, input_tensor):
        # 28x28 input -> 26 -> 13 -> 9 -> 8 -> 4, hence 64*4*4 features
        conv_output = self.conv_net(input_tensor)
        conv_output = conv_output.view(-1, 64 * 4 * 4)
        return self.feedForward_net(conv_output)

Listing 4: Network architecture used for validation on Fashion MNIST.

We train the network for 50 epochs using the Adam optimizer with lr = 0.001. We set the margin hyperparameter to 1.

K.5 CLASSIFICATION MODELS
The parameters for NN for EHR were chosen based on Chen et al. (2020a). The "main" parameters for KNN and XGBoost were chosen through a randomized grid search, while the parameters for LDA were unchanged. We specify the parameters identified through grid search in the KNN and XGBoost sections.

K.5.1 NN FOR EHR
We follow the work of Chen et al. (2020a) and construct a feed-forward neural network with an additive attention mechanism in the first layer. As in Chen et al., we choose the learning rate to be lr = 0.001 with an L2 penalty coefficient λ = 0.001, and train the model for 100 epochs.

K.5.2 KNNS
We utilized the Scikit-Learn implementation of K-Nearest Neighbors. The optimal number of neighbors was found with a grid search from 10 to 100 neighbors (increasing by 10). For the sake of reproducibility, we provide the parameters with scikit-learn terminology. For more information about the meaning of each parameter (and value), we refer the reader to the online documentation: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html.
• Algorithm: auto
• Leaf Size: 30
• Metric: minkowski
• Metric Params: None
• n Jobs: -1
• n Neighbors: 50
• p: 2
• Weights: uniform

K.5.3 LDA
We employed the Scikit-Learn implementation of Linear Discriminant Analysis (LDA) with the default parameters.

K.5.4 XGBOOST
We utilized the official implementation of XGBoost, located at https://xgboost.readthedocs.io/en/stable/. We optimized model performance through a grid search over the learning rate (0.01 to 0.2, increasing by 0.01), max depth (from 1 to 10, increasing by 1) and number of estimators (from 10 to 200, increasing by 10). For the sake of reproducibility, we provide the parameters using the nomenclature of the online documentation; minimal instantiations of the grid-searched classifiers are sketched after this list.

• Objective: binary:logistic
• Use Label Encoder: False
• Base Score: 0.5
• Booster: gbtree
• Callbacks: None
• colsample_by_level: 1
• colsample_by_node: 1
• colsample_by_tree: 1
• Early Stopping Rounds: None
• Enable Categorical: False
• Evaluation Metric: None
• γ (gamma): 0
• GPU ID: -1
• Grow Policy: depthwise
• Importance Type: None
• Interaction Constraints: " "
• Learning Rate: 0.05
• Max Bin: 256
• Max Categorical to Onehot: 4
• Max Delta Step: 0
• Max Depth: 4
• Max Leaves: 0
• Minimum Child Weight: 1
• Missing: NaN
• Monotone Constraints: '()'
• n Estimators: 50
• n Jobs: -1
• Number of Parallel Trees: 1
• Predictor: auto
• Random State: 0
• reg_alpha: 0
• reg_lambda: 1
• Sampling Method: uniform
• scale_pos_weight: 1
• Subsample: 1
• Tree Method: exact
• Validate Parameters: 1
• Verbosity: None
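As promised above, the grid-searched settings translate into the following instantiations; parameters left at library defaults are omitted for brevity, and the import path for XGBoost's scikit-learn wrapper is the standard one.

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier

# KNN with the grid-searched neighborhood size; remaining values are defaults.
knn = KNeighborsClassifier(n_neighbors=50, weights="uniform", p=2,
                           metric="minkowski", leaf_size=30, n_jobs=-1)

# LDA with default parameters, as stated in K.5.3.
lda = LinearDiscriminantAnalysis()

# XGBoost with the grid-searched learning rate, depth and estimator count.
xgb = XGBClassifier(objective="binary:logistic", learning_rate=0.05,
                    max_depth=4, n_estimators=50, booster="gbtree",
                    tree_method="exact", random_state=0, n_jobs=-1)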
Algorithm 1: Proposed Augmentation of Electronic Health Records Data. The proposed strategy ensures that each augmented feature falls between pre-determined ranges for the appropriate gender and age group, which are crucial in diagnosing conditions.
Require: X_dict: a mapping from gender/age condition groups to raw bloodwork and lifestyle matrices
Require: cond_list: a list of all present conditions (e.g., bona fide healthy, diabetic, etc.)
Require: U: a matrix storing the upper bound for feature_j given condition_i
Require: L: a matrix storing the lower bound for feature_j given condition_i
1: X̃_dict ← Zeros(X_dict)
2: for condition_i in cond_list do
3:     for feature_j in X_dict[condition_i] do
4:         µ ← Mean(feature_j)
5:         σ ← STD(feature_j)  # standard deviation
6:         z ← −10^16  # initialize out of range
7:         while z ∉ [L_ij, U_ij] do
8:             z ∼ N(µ, σ)  # sample from the Gaussian distribution
9:         end while
10:        X̃_dict[condition_i][feature_j] ← z  # augmented feature
11:    end for
12: end for

APPENDIX L: DATA AUGMENTATION SCHEME
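A direct Python rendering of Algorithm 1 follows. The dictionary/matrix layout mirrors the pseudocode, and each call produces one synthetic record per condition group; repeated calls yield the desired augmentation fold.

import numpy as np

def augment_once(X_dict, cond_list, L, U, rng=None):
    # For every condition i and feature j, fit a Gaussian to the observed
    # values and rejection-sample until the draw lies within the valid
    # clinical range [L[i, j], U[i, j]].
    rng = np.random.default_rng(0) if rng is None else rng
    X_aug = {}
    for i, cond in enumerate(cond_list):
        X = X_dict[cond]                      # patients x features matrix
        X_aug[cond] = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            mu, sigma = X[:, j].mean(), X[:, j].std()
            z = -1e16                         # initialize out of range
            while not (L[i, j] <= z <= U[i, j]):
                z = rng.normal(mu, sigma)
            X_aug[cond][j] = z                # one augmented value per feature
    return X_aug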
APPENDIX M: COMPLETE LIST OF FEATURES
M.1 UKB FID TO NAME MAPPINGS FOR FEMALE PATIENTS
Lab Metrics
21003: Age
30160: Basophill count
30220: Basophill percentage
30150: Eosinophill count
30210: Eosinophill percentage
30030: Haematocrit percentage
30020: Haemoglobin concentration
30300: High light scatter reticulocyte count
30290: High light scatter reticulocyte percentage
30280: Immature reticulocyte fraction
30120: Lymphocyte count
30180: Lymphocyte percentage
30050: Mean corpuscular haemoglobin
30060: Mean corpuscular haemoglobin concentration
30040: Mean corpuscular volume
30100: Mean platelet (thrombocyte) volume
30260: Mean reticulocyte volume
30270: Mean sphered cell volume
30080: Platelet count
30110: Platelet distribution width
30010: Red blood cell (erythrocyte) count
30070: Red blood cell (erythrocyte) distribution width
30250: Reticulocyte count
30240: Reticulocyte percentage
30000: White blood cell (leukocyte) count
30620: Alanine aminotransferase
30600: Albumin
30610: Alkaline phosphatase
30630: Apolipoprotein A
30640: Apolipoprotein B
30650: Aspartate aminotransferase
307

1. What is the main contribution of the paper, and how does it relate to previous works in triplet loss and contrastive learning?
2. What are the strengths and weaknesses of the proposed method, particularly in its simplicity and lack of thorough analysis and benchmarking?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, especially regarding its notation and motivation?
4. Can you provide examples of recent works in triplet/contrastive losses that the paper could have explored but did not, such as Lifted Structured Loss, Multi-Class N-pair loss, Noise Contrastive Estimation, InfoNCE, and Circle Loss?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The authors introduce a simple modification to a metric-learning triplet loss by adding a penalty term which encourages the distance between the anchor sample and the negative sample to be similar to the distance between the positive sample and the negative sample.

Strengths And Weaknesses
Strengths
The authors provide results for interesting applications of the method on healthcare data.

Weaknesses
The proposed idea is extremely simple. In essence, the submission uses the fact that a triplet contains two negative pairs: (a, n) and (p, n). In addition to the standard triplet loss, the authors add a loss that encourages the distance of the (p, n) pair (a negative pair) to be similar to the distance of the (a, n) pair (another negative pair). Thus, the paper just adds an extra loss to the typical triplet loss that encourages all negative pairs in the dataset to be roughly equidistant. Given the extreme simplicity of this idea, in my opinion the paper is lacking: (A) a thorough analysis of how it compares to other recent triplet/contrastive losses, and of the cases in which equidistant negative pairs are desirable, (B) thorough benchmarking against more related work on triplet and contrastive losses*, and (C) an evaluation on a much larger set of common (metric learning) benchmark datasets used in related contrastive/triplet methods, rather than just MNIST and FashionMNIST.

*To name just a few methods for which I think comparisons are lacking: Lifted Structured Loss [1], Multi-Class N-pair loss [2], Noise Contrastive Estimation [3], InfoNCE [4], Circle Loss [5]

[1] Oh Song, H., Xiang, Y., Jegelka, S., & Savarese, S. (2016). Deep metric learning via lifted structured feature embedding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4004-4012).
[2] Sohn, K. (2016). Improved deep metric learning with multi-class n-pair loss objective. Advances in Neural Information Processing Systems, 29.
[3] Gutmann, M., & Hyvärinen, A. (2010). Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (pp. 297-304). JMLR Workshop and Conference Proceedings.
[4] Oord, A. v. d., Li, Y., & Vinyals, O. (2018). Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
[5] Sun, Y., et al. (2020). Circle loss: A unified perspective of pair similarity optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

Finally, the motivation via the definition of uniform in-class embeddings is unclear to me, in part because the notation is not well defined at the moment. I encourage the authors to elaborate more on this point. Notation issue: Rojas-Thomas & Santos (2021) define the local density (LD) over a point x_i in c_k. The authors in this submission define LD on c_k directly, without specifying whether p_j is also in c_k or can be any data point in the dataset.

Clarity, Quality, Novelty And Reproducibility
The novelty of the method is very limited. Connections to other contrastive losses that make use of more negative pairs are not explored by the authors. The results on MNIST and FashionMNIST should be easy to reproduce.
ICLR
Title Cross-Stage Transformer for Video Learning
Abstract Transformer networks have been proven efficient at modeling long-range dependencies in video learning. However, videos contain rich contextual information in both spatial and temporal dimensions, e.g., scenes and temporal reasoning. In traditional transformer networks, stacked transformer blocks work in a sequential and independent way, which may lead to inefficient propagation of such contextual information. To address this problem, we propose a cross-stage transformer paradigm, which allows fusing self-attentions and features from different blocks. By inserting the proposed cross-stage mechanism into existing spatial and temporal transformer blocks, we build a separable transformer network for video learning based on the ViT structure, in which self-attentions and features are progressively aggregated from one block to the next. Extensive experiments show that our approach outperforms existing ViT based video transformer approaches with the same pre-training dataset on the mainstream video action recognition datasets Kinetics-400 (Top-1 accuracy 81.8%) and Kinetics-600 (Top-1 accuracy 84.0%). Due to the effectiveness of the cross-stage transformer, our proposed method achieves performance comparable to other ViT based approaches with much lower computation cost (e.g., 8.6% of ViViT's FLOPs) in the inference process. As an independent module, our proposed method can be conveniently added to other video transformer frameworks.

1 INTRODUCTION
Convolutional neural networks (CNNs) have been successfully applied to computer vision tasks such as classification (Krizhevsky et al. (2012); He et al. (2016)), detection (Girshick (2015); Ren et al. (2015)) and segmentation (He et al. (2017)). However, due to their limited receptive field, CNNs lack the ability to model long-range dependencies, which is an obstacle to capturing the spatial and temporal contexts in video learning. To overcome this weakness, the self-attention mechanism has been introduced into CNN structures and obtains excellent performance (Wang et al. (2018); Guo et al. (2021)). Recently, convolution-free transformer structures consisting of self-attention layers (Vaswani et al. (2017)) have also been investigated in the vision domain (Dosovitskiy et al. (2020); Carion et al. (2020)).
Transformers have achieved great success in natural language processing (NLP) (Vaswani et al. (2017); Devlin et al. (2018); Yang et al. (2019); Dai et al. (2019)). The inherently similar requirement between video and language learning, i.e., capturing long-range contextual information, suggests that transformers can also work for video tasks. The first attempt to apply a pure transformer network to vision is the Vision Transformer (ViT) (Dosovitskiy et al. (2020)), which targets image classification. The input images are split into several patches, which are then linearly embedded into tokens for the transformer blocks. A classification head is attached on top of these transformer blocks for the final prediction. Bertasius et al. (Bertasius et al. (2021)) and Arnab et al. (Arnab et al. (2021)) extend the scheme to video learning by adding temporal transformer blocks. Pure transformer networks show comparable performance with CNN based methods, as well as potential in the vision domain. However, there are still uncertainties in processing video data in a way analogous to language. On one hand, video patches contain rich spatial and temporal contents, so it is difficult to map them into precise semantic tokens like words.
Thus, the correlations established by transformer blocks may lead to ambiguous semantics. This drawback becomes even worse for videos with complex scenes and actions. On the other hand, the absence of convolutions in a transformer network damages local context capturing, so the features built across transformer blocks may suffer from inefficient information propagation.
To tackle the aforementioned problems, we re-design the transformer blocks. Inspired by the empirically long-standing principle in CNN based approaches, i.e., that features extracted from different stages can be fused together to improve learning (Lin et al. (2017a); He et al. (2017); Lin et al. (2017b); Redmon & Farhadi (2018)), we expect that cross-stage fusion can also help improve the performance of transformers.
Based on the above analysis, we propose a novel cross-stage transformer block which consists of cross-stage self-attention (CSSA) and a cross-stage feature aggregation module (FAM). The former aims to progressively enhance the self-attention maps by adding shortcuts between the self-attentions of two consecutive transformer blocks. The latter fuses the features from different stages to achieve better outputs. We then build up a separable spatial-temporal transformer network, in which spatial cross-stage transformer blocks and temporal cross-stage transformer blocks are sequentially stacked. Extensive experiments show that, under the same conditions, i.e., base transformer structure and pre-training dataset, our approach outperforms existing ViT based video transformers on video action recognition tasks. Due to the effectiveness of cross-stage fusion, our method can achieve performance comparable to ViViT (Arnab et al. (2021)) with much fewer FLOPs in the inference process. As a generic module, the cross-stage transformer can also be inserted into other transformer based frameworks.
The contributions of this work can be summarized as follows:
1. A novel cross-stage transformer block, consisting of a cross-stage self-attention module and a cross-stage feature aggregation module, is proposed. Meanwhile, we also establish a separable cross-stage transformer network for video learning.
2. Extensive experiments are conducted to provide sufficient information for better understanding our approach, thereby providing insight into the design of transformers for video learning.
3. Using the same pre-training dataset as existing transformer methods, our approach outperforms other ViT based video transformers and CNN methods on video action recognition datasets, including Kinetics-400 and Kinetics-600. It can also be added to other frameworks to promote their performance.

2 RELATED WORK
Video action recognition. Extensive efforts have been put into video action recognition in recent years. The mainstream approaches usually utilize 2D or 3D CNNs for video feature extraction (Carreira & Zisserman (2017); Christoph & Pinz (2016); Tran et al. (2015); Ji et al. (2012); Tran et al. (2018); Simonyan & Zisserman (2014); Wang et al. (2016)). I3D (Carreira & Zisserman (2017)) is a representative of 3D based methods, which inflates 2D convolution layers into 3D to save the huge computational cost of pre-training 3D networks. Non-Local Neural Networks (Wang et al. (2018)) introduce self-attention into CNNs, which can capture long-range dependencies and richer information from input video frames. Guo et al. (2021) propose a separable self-attention network and achieve excellent performance on video action recognition. SlowFast (Feichtenhofer et al.
(2019)) proposes a two-pathway network using slow and fast temporal rates of video frames at the same time, in which features are fused from the fast pathway into the slow one. X3D (Feichtenhofer (2020)) explores different network settings based on SlowFast and significantly boosts the performance. Recently, research efforts have been shifting to transformer based methods.
Image transformer networks. The self-attention network (Vaswani et al. (2017)), also known as the transformer, has achieved state-of-the-art performance in the NLP domain (Vaswani et al. (2017); Devlin et al. (2018); Yang et al. (2019); Dai et al. (2019)). This success has inspired more and more research efforts on applying transformers to computer vision tasks. ViT (Dosovitskiy et al. (2020)) and DeiT (Touvron et al. (2020)) successfully show that pure transformer networks can achieve state-of-the-art performance on the image classification task. In Carion et al. (2020), a transformer-based network is proposed for object detection and obtains performance comparable with Faster R-CNN (Ren et al. (2015)). SETR (Zheng et al. (2020)) proposes a segmentation transformer network, which achieves desirable performance in semantic segmentation. Wu et al. (2021); Li et al. (2021) incorporate convolutional design into transformer networks by adding local inductive biases. Swin Transformer (Liu et al. (2021)) proposes a hierarchical transformer structure to flexibly model feature representations at various scales. These works showcase the potential of transformers in the vision domain.
Video transformer networks. With the achievements of transformers in the image domain, transformer networks for video have also appeared. VTN (Neimark et al. (2021)) proposed a generic framework for video recognition, which consists of a 2D spatial backbone for feature extraction, a temporal attention-based encoder for modeling temporal dependencies of the spatial features, and an MLP head for classification. TimeSformer (Bertasius et al. (2021)) adapted the image transformer (Dosovitskiy et al. (2020)) architecture to video and proposed several different self-attention schemes for transformer network design. STAM (Sharir et al. (2021)) presented a spatial-temporal transformer network, which processes sampled frames by a spatial transformer and a temporal transformer sequentially. ViViT (Arnab et al. (2021)) also proposed a pure-transformer architecture for video classification and developed several variants, which can separate the transformer's self-attention along the spatial and temporal dimensions. There are also works on adding shortcuts between transformer blocks in the NLP and image domains to evolve the features (Wang et al. (2021); He et al. (2020)). Our work is inspired by these works but is more challenging: since video learning needs to capture more complex information from the spatial and temporal dimensions, simple shortcuts cannot work efficiently in existing video transformers.

3 PROPOSED METHOD
In Sec. 3.1, we introduce the video learning process of the cross-stage transformer network. Then in Sec. 3.2, we explain the proposed cross-stage self-attention (CSSA) in detail. Finally in Sec. 3.3, the cross-stage feature aggregation module (FAM) is described.

3.1 CROSS-STAGE TRANSFORMER NETWORK
The cross-stage transformer (CSTransformer) network is illustrated in Figure 1. We explain each component of the workflow as follows.
Input video clips. We employ ViT (Dosovitskiy et al. (2020)) as our baseline by extending transformer blocks to the temporal dimension.
Then we build up the video learning network by adding cross-stage attention and feature fusion. Let X ∈ R^{B×C×T×H×W} be the input video clip, where B denotes the batch size, C the number of input channels, and T the length of the clip. W and H denote the width and height of the input frames respectively; we use constant W and H in our experiments.
Patch embedding. In order to convert the input frames into spatial patches, we first reshape X as X ∈ R^{(B×T)×C×H×W}, then split X into P non-overlapping patches. The size of each patch is M × M, and P = (H × W)/M². A linear layer is employed to change the channels of each patch, after which the shape of the embedding is V ∈ R^{(B×T)×C'×(H/M)×(W/M)}, where C' represents the channel dimension after the linear layer. After that, we flatten the embedding V along the spatial dimensions and transpose the last two dimensions, resulting in the shape V ∈ R^{(B×T)×P×C'}.
Classification token. After converting X into the patch embedding V, we initialize a classification token Vcls ∈ R^{1×1×C'} as 0 and repeat it along the first dimension of V, i.e., Vcls ∈ R^{(B×T)×1×C'}.
Position encoding. In this step, a spatial position embedding is first added to the classification token Vcls. This operation is formulated in equation (2), where Ps ∈ R^{1×(1+P)×C'} denotes the spatial position embedding. In equation (1), Ps^cls ∈ R^{1×1×C'} and Ps^V ∈ R^{1×P×C'} are used to update the classification token Vcls and the token V respectively, and Concat denotes the concatenation operation.

Ps = Concat[Ps^cls, Ps^V]    (1)
Vcls = Vcls + Ps^cls    (2)

Through equation (2), spatial position information is combined with the classification token Vcls. Since video clips contain temporal correlations, we also introduce a temporal position embedding Pt ∈ R^{T×1×C'} together with V and Ps. Spatial and temporal position information is fused into the patch embedding V through equation (3). Finally, as shown in equation (4), the classification token Vcls is appended to the patch embedding V to form V0 ∈ R^{(B×T)×(1+P)×C'}, and V0 is fed into the cross-stage transformer as the input embedding sequence.

V = V + Ps^V + Pt    (3)
V0 = Concat[Vcls, V]    (4)
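To make the tokenization pipeline concrete, the following PyTorch sketch implements the patch embedding, classification token and position encoding of equations (1)-(4). The strided convolution is equivalent to the split-then-linearly-project description above; treating the zero-initialized token and position embeddings as learnable parameters is our assumption, as are the module and parameter names.

import torch
import torch.nn as nn

class VideoTokenizer(nn.Module):
    def __init__(self, C=3, C_prime=768, M=16, T=8, H=224, W=224):
        super().__init__()
        P = (H // M) * (W // M)
        # Non-overlapping M x M patches + linear projection to C' channels.
        self.proj = nn.Conv2d(C, C_prime, kernel_size=M, stride=M)
        self.cls = nn.Parameter(torch.zeros(1, 1, C_prime))        # Vcls
        self.pos_s = nn.Parameter(torch.zeros(1, 1 + P, C_prime))  # Ps
        self.pos_t = nn.Parameter(torch.zeros(T, 1, C_prime))      # Pt

    def forward(self, x):                     # x: (B, C, T, H, W)
        B, C, T, H, W = x.shape
        x = x.permute(0, 2, 1, 3, 4).reshape(B * T, C, H, W)
        v = self.proj(x).flatten(2).transpose(1, 2)                # (B*T, P, C')
        cls = self.cls.expand(B * T, -1, -1) + self.pos_s[:, :1]   # eq. (2)
        v = v + self.pos_s[:, 1:] + self.pos_t.repeat(B, 1, 1)     # eq. (3)
        return torch.cat([cls, v], dim=1)                          # eq. (4): V0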
Cross-stage structure. Our proposed method consists of several spatial transformer blocks (STBs) and temporal transformer blocks (TTBs). Each STB/TTB consists of layer normalisation (LN) (Ba et al. (2016)), multi-head spatial self-attention (MSSA)/multi-head temporal self-attention (MTSA), and MLP blocks. MSSA is used to compute the self-attention of spatial patches within each frame to handle the relationship between objects and scenes, similar to MSA in ViT (Dosovitskiy et al. (2020)), while MTSA mainly focuses on computing the self-attention of co-located patches along the temporal dimension, so that the temporal relationships between frames can be captured. Note that the inputs to MSSA and MTSA are reshaped as R^{(B×T)×(1+P)×C'} and R^{(B×(1+P))×T×C'} respectively. The operations of STB and TTB are formulated in equations (5) and (6), where L represents the total number of blocks of the cross-stage transformer, Vi is the output of the i-th transformer block, MSA represents the multi-head self-attention process (covering MSSA for spatial transformer blocks and MTSA for temporal transformer blocks), and MLP contains two linear layers with a GELU non-linearity.

V'_i = MSA(LN(V_{i-1})) + V_{i-1}, i = 1, ..., L    (5)
V_i = MLP(LN(V'_i)) + V'_i, i = 1, ..., L    (6)

Features from different spatial/temporal transformer blocks then go through FAM for cross-stage fusion, as described in equation (7), where Y is the aggregated feature. The details of cross-stage self-attention and feature aggregation are clarified in Sec. 3.2 and Sec. 3.3.

Y = FAM(V_i), i = 1, ..., L    (7)

Figure 3: The illustration of the cross-stage feature aggregation module (FAM).

MLPs in STB and TTB. MLPs in ViT (Dosovitskiy et al. (2020)) usually contain two fully connected (fc) layers. Let d denote the dimension of the input feature. The first fc layer expands the dimension d into 4 × d. Our STB follows this style, while TTB keeps the original dimension. We find that this design achieves a better accuracy-computation trade-off; experiments with different configurations are summarized in Table 1d. In the second fc layer, the dimension is changed back to d.
MLP head for classification. Finally, the aggregated feature Y from FAM goes through an MLP head consisting of a LN layer and a linear layer for the final video class prediction.

3.2 CROSS-STAGE SELF-ATTENTION
The proposed cross-stage self-attention (CSSA) approach is simple yet efficient. The purpose of this design is to progressively fuse the self-attention from different stages to achieve better attention maps. As shown in Figure 2, the self-attention map from each STB/TTB first undergoes an element-wise multiplication with a corresponding learnable ratio α, which can dynamically adjust the scale of the corresponding self-attention. The scaled self-attention is then added to the self-attention of the next stage. The whole process is defined in equation (8):

CrossA_i = Softmax(A_i + α_i · A_{i-1}), i = 1, ..., L    (8)

where CrossA_i and A_i represent the cross-stage self-attention and the self-attention of the i-th transformer block respectively. When i equals 1, the cross-stage self-attention is the original self-attention A_1. α_i is the learnable ratio of the i-th transformer block and (·) is the element-wise product. Note that A_i is the pairwise similarity derived by the multiplication of the query matrix and the key matrix. CrossA_i is then multiplied with the value matrix to form the output. Our experiments demonstrate the effectiveness of this module in both objective and subjective measurements.

3.3 CROSS-STAGE FEATURE AGGREGATION
The cross-stage feature aggregation module (FAM) provides a global path for the features from different stages to better capture contextual information. The details of FAM are shown in Figure 3; we use only 4 transformer blocks for illustration. Specifically, the feature from each transformer block is multiplied by a corresponding learnable parameter with an element-wise product. This parameter can scale its input feature globally. The scaled output is then fed into a norm layer; here we use a LN layer for normalization. After that, all normalized results are fused together to form the output of FAM. It is noteworthy that our proposed cross-stage transformer block can be easily implemented by introducing only a few additional parameters, whose complexity is negligible. The fusion process is as follows:

Y = Σ_{i=1}^{L-1} LN(β_i · V_i) + V_L    (9)

where β_i is the learnable ratio for the i-th transformer block and Y is the aggregated feature. A minimal sketch of both modules follows.
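Below is a minimal single-head sketch of CSSA (equation (8)) and the FAM fusion (equation (9)). The actual model uses the multi-head form given in Appendix A.1, and the module interfaces here are our own choices rather than the authors' implementation.

import torch
import torch.nn as nn

class CrossStageAttention(nn.Module):
    # Eq. (8): the previous block's raw attention map, scaled by a learnable
    # alpha, is added before the softmax; the current raw map is passed on
    # to the next block.
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.alpha = nn.Parameter(torch.ones(1))
        self.scale = dim ** -0.5

    def forward(self, x, prev_attn=None):
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = q @ k.transpose(-2, -1) * self.scale          # A_i
        fused = attn if prev_attn is None else attn + self.alpha * prev_attn
        return fused.softmax(dim=-1) @ v, attn               # CrossA_i V, A_i

class FeatureAggregation(nn.Module):
    # Eq. (9): features of blocks 1..L-1 are scaled by learnable betas,
    # layer-normalized, and summed with the last block's output.
    def __init__(self, dim, num_blocks):
        super().__init__()
        self.betas = nn.Parameter(torch.ones(num_blocks - 1))
        self.norms = nn.ModuleList(nn.LayerNorm(dim)
                                   for _ in range(num_blocks - 1))

    def forward(self, features):              # list of L block outputs
        y = features[-1]
        for v, beta, ln in zip(features[:-1], self.betas, self.norms):
            y = y + ln(beta * v)
        return y

In a full model, the blocks are chained by threading the raw attention map, i.e., out, attn = block(x, attn), and the per-block outputs are collected and passed to FeatureAggregation.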
4 EXPERIMENTS
In this section, we clarify the relevant experimental settings and evaluate our proposed approach on several datasets to validate its effectiveness. We introduce the datasets used for evaluation in Sec. 4.1. Then we show the implementation details of our approach in Sec. 4.2. Extensive ablation studies are conducted for a full understanding of the proposed approach in Sec. 4.3. In Sec. 4.4, we visualize the self-attention maps from our approach and the baseline to better understand the efficiency of the cross-stage transformer. Finally, we compare our approach with other state-of-the-art methods in Sec. 4.5.

4.1 DATASETS
We evaluate our approach on two large-scale video action recognition datasets, i.e., Kinetics-400 (Kay et al. (2017)) and Kinetics-600 (Carreira et al. (2018)). The details of the datasets are described below.
Kinetics-400 dataset. The Kinetics-400 dataset consists of training, validation and testing splits. Specifically, it contains 246,536 training videos and 19,761 validation videos covering 400 human action categories, extracted from original YouTube videos. However, due to expired YouTube links, only 234,584 videos of the training split remain. Videos in Kinetics are relatively long and complex, and are trimmed to around 10 seconds.
Kinetics-600 dataset. The Kinetics-600 dataset follows the same style as Kinetics-400, except that it extends the 400 categories to 600, and its training split consists of 366,016 videos. We again use the training and validation splits for model training and evaluation.

4.2 IMPLEMENTATION DETAILS
Network structure. For all experiments, we adopt the "Base" architecture of the ViT model (Dosovitskiy et al. (2020)) with temporal extension as our baseline, which is trained on the ImageNet dataset (Krizhevsky et al. (2012)). For fair comparison, we only include the approaches using the same pre-training dataset (i.e., ImageNet-21K (Deng et al. (2009))). The structure of the transformer layers in ViT-Base is the same as that of the STBs in the CSTransformer network. We vary the numbers of STBs and TTBs and evaluate these variants to showcase the impact of the layers on our design. The top-1 accuracies of these variants, i.e., CSTransformer-V1, CSTransformer-V2 and CSTransformer-V3, are reported in Table 1a and are explained in detail in the ablation studies.
Data processing. In our experiments, we sample 8, 16 and 32 frames with temporal strides of 32, 16 and 8 respectively as input clips. The sampled input clips are processed by color normalization, random scale jittering and uniform cropping. The scale jittering range is [256, 320], and the uniform cropping slices frames into 3 spatial crops (top left, center and bottom right) of size 224 × 224. The patch size is 16 × 16.
Training details. For all experiments, we use 8 NVIDIA V100 devices. The initial learning rate is 0.005, and the total number of epochs is 18. We use the SGD optimizer with a weight decay of 10^{-4} and a momentum of 0.9 for training. The learning rate is divided by 10 at epochs 5, 14 and 16.
Inference settings. Whereas most existing methods use 10 temporal clips with 3 spatial crops (top-left, center and bottom-right) for inference, we only use 1 temporal clip (sampled in the middle of the video) with 3 spatial crops as the default setting. The final prediction is the average of the softmax scores of all predictions. A sketch of this sampling and cropping protocol is given below.
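The inference-time protocol just described can be sketched as follows; frame tensors are assumed to be (T, C, H, W) after resizing the shorter side into the jittering range, and the helper names are our own.

import torch

def sample_clip(video, num_frames=8, stride=32):
    # One temporal clip from the middle of the video (the default inference
    # setting): num_frames frames spaced `stride` apart.
    total = video.shape[0]
    start = max((total - num_frames * stride) // 2, 0)
    idx = (torch.arange(num_frames) * stride + start).clamp(max=total - 1)
    return video[idx]

def three_crops(frames, size=224):
    # Uniform crop: top-left, center and bottom-right size x size views.
    T, C, H, W = frames.shape
    offsets = [(0, 0), ((H - size) // 2, (W - size) // 2),
               (H - size, W - size)]
    return [frames[:, :, y:y + size, x:x + size] for y, x in offsets]

# At inference, the softmax scores of all sampled views are averaged.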
4.3 ABLATION STUDIES
In this section, we conduct various ablation studies on Kinetics-400, which allow us to better understand the effects of the different components of CSTransformer.
Cross-stage transformer. In Table 2b, we report the ablation study of the main components of the cross-stage transformer, i.e., cross-stage self-attention (CSSA) and the feature aggregation module (FAM). We also show the result of the baseline, which employs the separable spatial and temporal transformer structure depicted in Figure 1 (b) without cross-stage operations. From the table, we can see that both CSSA and FAM help improve the performance. When using them together, i.e., the whole cross-stage transformer, the performance is boosted from 77.8% to 78.7%.
Model variants. We stack different numbers of STBs and TTBs to form CSTransformer, i.e., CSTransformer-V1, CSTransformer-V2 and CSTransformer-V3. The length of the input clips is 8. The detailed comparisons of the various settings are shown in Table 1a. Since the CSTransformer-V2 structure obtains the best accuracy-computation trade-off, we employ it in the other experiments.
Does positional encoding help? In order to further understand the effect of spatial and temporal position encoding on CSTransformer, we evaluate CSTransformer-V2 with an input clip length of 8. The results of using position encoding can be seen in Table 1b. We observe that adding the spatial position embedding improves the model's performance from 77.6% to 78.4%, and introducing the temporal embedding further boosts it to 78.7%.
The effect of input clip length. Different clip lengths can impact the performance of the proposed approach. We compare three clip lengths, i.e., 8, 16 and 32. The performance of CSTransformer-V2 is illustrated in Table 1c. A reasonable result can be observed: the model's performance increases as the clip length becomes larger.
MLP in TTB. As mentioned before, in TTB the first fc layer in the MLP does not expand the original dimension, which differs from the original ViT (Dosovitskiy et al. (2020)) design. We compare the performance of several expansion ratios, including 1×, 2× and 4×, in Table 1d, and find that expanding the dimension in the cross-stage transformer does not help. Therefore we keep the original dimension for TTB, as in ViT.
How do inference views influence performance? In video learning experiments, one samples x × y video clips for evaluation, where x and y denote the number of temporal clips and spatial crops. We wonder how this inference sampling strategy influences the model's performance. Therefore, we evaluate multiple inference settings, including 1×3, 4×3 and 10×3. The results for CSTransformer-V2, which is trained with an input clip length of 8, are shown in Table 2a.

4.4 VISUALIZATION
To intuitively understand the proposed method, we visualize the self-attention maps of the cross-stage transformer in this section. Figure 4 shows the visualization for the cross-stage transformer and the baseline on video clips from Kinetics-400. We can observe that the proposed approach pays more attention to areas such as the hands and the bee box, which are very important for understanding the video contents. It is also interesting to see that our approach shows much less attention in non-relevant regions such as the background. We conjecture that cross-stage self-attention and feature aggregation propagate important semantic information across different transformer blocks. As a result, the attention on important areas can be gradually evolved and highlighted.

4.5 COMPARISON WITH THE STATE-OF-THE-ART
In this section, we compare our method with several state-of-the-art approaches in terms of accuracy metrics and inference cost with the total number of spatial and temporal views. We employ an input frame length of 32 in our evaluations.
For fair comparison, we only report the results of transformer methods using the same pre-training dataset, i.e., ImageNet-21K. Moreover, since our method is implemented based on the ViT structure, we only compare with video transformers based on ViT, i.e., VTN (Neimark et al. (2021)), ViViT (Arnab et al. (2021)) and TimeSformer (Bertasius et al. (2021)), to demonstrate the effectiveness of our method more clearly. There are also other video transformers (Liu et al. (2021); Fan et al. (2021)) which differ substantially from the original ViT structure and report high performance; we will add our approach to these frameworks for comparison in the future. Note that ViT-L-ViViT with crop size 320 × 320 (total inference cost 3992 GFLOPs × 4 × 3 ≈ 47.9 TFLOPs) is compared in our experiment.
Kinetics-400 dataset. The comparison results on Kinetics-400 are shown in Table 3. In addition to accuracy metrics, we also report inference views and inference cost in terms of TFLOPs. When the inference view is 1×3, our approach achieves 81.2% top-1 accuracy and 94.8% top-5 accuracy. With 4×3 views, CSTransformer outperforms existing CNN and ViT based transformer approaches. Our approach achieves performance comparable to ViT-L-ViViT with only 8.6% of its inference cost, since we use fewer views (1×3 vs. 4×3) and layers (ViT-Base vs. ViT-Large).
Kinetics-600 dataset. We also evaluate our proposed approach on Kinetics-600. The results are shown in Table 4. The CSTransformer network achieves superior performance as well. Furthermore, our approach consumes much lower inference cost than other ViT based transformers (Bertasius et al. (2021); Arnab et al. (2021)) under the same inference views.

5 CONCLUSION
In this paper, we propose a novel cross-stage transformer network for video learning, which can effectively learn video representations. Specifically, we design a CSTransformer block which consists of a cross-stage self-attention module (CSSA) and a cross-stage feature aggregation module (FAM). We then build up a separable CSTransformer network, in which spatial CSTransformer blocks and temporal CSTransformer blocks are sequentially stacked. Extensive experiments show that our approach outperforms existing state-of-the-art CNN and ViT based transformer methods on video action recognition tasks. Due to the effectiveness of the CSTransformer block, our method can achieve performance comparable to ViViT with much fewer inputs and FLOPs in the inference process. Since our proposed CSSA and FAM act as independent modules, they can also be added to other video transformer frameworks.

A APPENDIX
A.1 CROSS-STAGE SELF-ATTENTION
In this section, we further clarify the principles of the proposed cross-stage self-attention. Mainstream multi-head self-attention was proposed in Vaswani et al. (2017) and has been adopted in transformer networks. We can formulate the process as follows:

MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O,    (10)

where head_j = Attention(Q W_j^Q, K W_j^K, V W_j^V) (1 ≤ j ≤ h). h is the total number of heads, and Q, K and V are the query, key and value matrices respectively. W^O is the linear projection for the concatenation of the heads' outputs, and W_j^Q, W_j^K, W_j^V are the linear projections of the query, key and value matrices for the j-th head. The attention function can then be written as:

Attention(Q̂, K̂, V̂) = Softmax(Q̂ K̂^T / √d_k) V̂,    (11)

in which Q̂, K̂ and V̂ are the query, key and value matrices after linear projection, and d_k denotes the dimension of the input Q̂ and K̂ matrices.
The attention weight Q̂ K̂^T / √d_k is the pairwise similarity between the query and key matrices, which is forwarded progressively in the proposed CSTransformer structure.

Cross_MultiHead(Q, K, V) = Concat(c_head_1, ..., c_head_h) W^O.    (12)

In equation (12), c_head_j = Cross_Attention(Q W_j^Q, K W_j^K, V W_j^V). The cross-stage self-attention of the i-th (1 ≤ i ≤ n) transformer block is formulated in equations (13) and (14), where n denotes the total number of transformer blocks:

Cross_Attention(Q̂_i, K̂_i, V̂_i) = Softmax(A_i + α_i ∗ A_{i-1}) V̂_i,    (13)
A_i = Q̂_i K̂_i^T / √d_k,    (14)

where Q̂_i, K̂_i, V̂_i are the linearly projected query, key and value matrices of the i-th transformer block, ∗ is the element-wise product, and α_i represents the learnable ratio of the i-th block. We adopt multi-head cross-stage self-attention, namely Cross_MultiHead(Q, K, V), for the self-attention output. Note that A_i should have the same shape as A_{i-1}; otherwise, we use MultiHead(Q, K, V) as the output. A_0 = 0.

A.2 CSTRANSFORMER STRUCTURE
To be more clear, we explain the details of the CSTransformer structure. We adopt "ViT-Base" (Dosovitskiy et al. (2020)) as our baseline. Detailed settings of "CSTransformer-V1", "CSTransformer-V2" and "CSTransformer-V3" are shown in Table 5. The embedding dimension is 768, the number of heads is 12, and the MLP sizes of STB and TTB are 3072 and 768 respectively.

A.3 MORE EXPERIMENTAL ANALYSIS
Here we provide more experimental analysis and insights. The default dataset is Kinetics-400 (Kay et al. (2017)). In order to further analyze the influence of the proposed cross-stage self-attention and features, we show the comparison results between the baseline and CSTransformer in Figure 5. "Baseline-V2" has the same structure as "CSTransformer-V2", except that it does not adopt cross-stage self-attention and features. Note that in the left figure, we report top-1 accuracy on the validation dataset during the training process, sampling only one clip for inference at different epochs. In the right figure, we test the models with the (1×3) view after training. As we can see, the CSTransformer structure consistently achieves higher performance than the baseline as the number of training epochs increases. Furthermore, even with different input clip lengths, the CSTransformer structure performs better than the baseline model.

A.4 MORE VISUALIZATION RESULTS
In this section, we provide more self-attention maps for visualization. The sampled frame clips are all from the Kinetics-400 dataset.
Visualization for comparison. The chosen models are "Baseline-V2" and "CSTransformer-V2", as shown in Figures 6 and 7. The 1st row shows the original frame clips; the 2nd and 3rd rows show the self-attention maps of "Baseline-V2" and "CSTransformer-V2". Note that brighter areas indicate where more attention is focused. We can clearly observe that the self-attention maps from the CSTransformer structure focus more on important objects and motion areas, whereas the self-attention maps from the baseline may focus on some irrelevant regions.
Self-attention maps of CSTransformer. We show original video clips and their self-attention maps from the proposed CSTransformer-V2 in Figures 8, 9 and 10. The 1st and 2nd rows of each figure are the original frame clips and the self-attention maps of "CSTransformer-V2" respectively.
1. What is the focus and contribution of the paper on video classification?
2. What are the strengths of the proposed Cross-Stage Transformer architecture, particularly in terms of cross-stage self-attention and feature aggregation?
3. What are the weaknesses of the paper, especially regarding the ablation results and the limited scope of the experiments?
4. Do you have any concerns about the generalizability of the proposed approach across different datasets and tasks?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes an architecture, called Cross-Stage Transformer, for video classification. The main technical contributions are the new design of the cross-stage self-attention (both spatial and temporal) and the cross-stage feature aggregation module. Experiments are conducted on Kinetics-400 and Kinetics-600. Although the idea of cross-stage connections (with self-attention and feature aggregation) is well-motivated, the ablation shows only small gains over the baseline. The proposed approach is comparable with current methods on Kinetics-400 and Kinetics-600. The writing is mostly clear except for a few places which may need clarification.

Review
Strengths
The paper conveys the idea of using cross-stage interactions (self-attention and feature aggregation) for improving video classification, which is interesting and well-motivated.
The paper provides various ablations which help readers understand the contribution of each design choice.

Weaknesses
The gains compared with the baseline are small (see Table 2b). More specifically, CSSA brings +0.5 and FAM brings another +0.4 improvement over the baseline. In total, the proposed architecture makes an improvement of +0.9% over the baseline, which is not significant and somewhat suggests that the cross-stage interactions are not particularly useful for video classification.
Although the paper claims to address "video learning", as in its title, in practice it tackles video classification, more specifically action recognition. Even more narrowly, the experiments are done only on Kinetics; Kinetics-400 and Kinetics-600 are two variants of the same dataset. In short, experiments are done on one dataset.
Accuracy (of the proposed method) is comparable with current methods (when compared with a similar backbone and the same training data).
ICLR
Title Cross-Stage Transformer for Video Learning Abstract Transformer network has been proved efficient in modeling long-range dependencies in video learning. However, videos contain rich contextual information in both spatial and temporal dimensions, e.g., scenes and temporal reasoning. In traditional transformer networks, stacked transformer blocks work in a sequential and independent way, which may lead to the inefficient propagation of such contextual information. To address this problem, we propose a cross-stage transformer paradigm, which allows to fuse self-attentions and features from different blocks. By inserting the proposed cross-stage mechanism in existing spatial and temporal transformer blocks, we build a separable transformer network for video learning based on ViT structure, in which self-attentions and features are progressively aggregated from one block to the next. Extensive experiments show that our approach outperforms existing ViT based video transformer approaches with the same pre-training dataset on mainstream video action recognition datasets of Kinetics-400 (Top-1 accuracy 81.8%) and Kinetics-600 (Top-1 accuracy 84.0%). Due to the effectiveness of cross-stage transformer, our proposed method achieves comparable performance with other ViT based approaches with much lower computation cost (e.g., 8.6% of ViViT’s FLOPs) in inference process. As an independent module, our proposed method can be conveniently added on other video transformer frameworks. 1 INTRODUCTION Convolution neural network (CNN) has been successfully applied on computer vision tasks, such as classification (Krizhevsky et al. (2012); He et al. (2016)), detection (Girshick (2015); Ren et al. (2015)) and segmentation (He et al. (2017)). However, due to the limited receptive field, CNN lacks the ability of modeling long-range dependencies, which is an obstacle to capture the spatial and temporal contexts in video learning. To overcome this weakness, self-attention mechanism is introduced into CNN structure and obtains excellent performance (Wang et al. (2018); Guo et al. (2021)). Recently, convolution-free transformer structure consisting of self-attention layers (Vaswani et al. (2017)) is also investigated in vision domain (Dosovitskiy et al. (2020); Carion et al. (2020)). Transformer achieved extreme success in natural language processing (NLP) (Vaswani et al. (2017); Devlin et al. (2018); Yang et al. (2019); Dai et al. (2019)). The inherent similar requirement between video and language learning, i.e., capturing the long-range contextual information, makes people believe that it can also work for video tasks. The first attempt to apply pure transformer network for vision is Vision Transformer (ViT) (Dosovitskiy et al. (2020)), which aims at image classification. The input images are split into several patches, which are then linearly embedded into tokens for the transformer blocks. A classification head is attached at the top of these transformer blocks for final prediction. Bertasius et al. (Bertasius et al. (2021)) and Arnab et al. (Arnab et al. (2021)) extend the scheme to video learning by adding temporal transformer blocks. Pure transformer network shows comparable performance with CNN based methods, as well as the potential in vision domain. However, there are still uncertainties by processing video data in the way analogous to language. On one hand, video patches contain rich spatial and temporal contents, so that it is difficult to map them into precise semantic tokens like words. 
Thus, the correlations established by transformer blocks may lead to ambiguous semantics. This drawback becomes even worse for videos with complex scenes and actions. On the other hand, the absence of convolutions in a transformer network will damage local contexts capturing, so that the features built across transformer blocks may have inefficient information propagation. To tackle the aforementioned problems, we try to re-design the transformer blocks. Inspired by the empirically long-standing principle in CNN based approaches, i.e., features extracted from different stages can be fused together to improve learning (Lin et al. (2017a); He et al. (2017); Lin et al. (2017b); Redmon & Farhadi (2018)), we expect the cross-stage fusion can also help improve the performance of transformer. Based on above analysis, we propose a novel cross-stage transformer block which consists of crossstage self-attention (CSSA) and cross-stage feature aggregation module (FAM). The former aims to progressively enhance the self-attention maps by adding shortcuts between self-attentions from two consecutive transformer blocks. The later fuses the features from different stages to achieve better outputs. We then build up a separable spatial-temporal transformer network, in which spatial crossstage transformer and temporal cross-stage transformer are sequentially stacked. Extensive experiments show that, under the same conditions, i.e., base transformer structure and pre-training dataset, our approach outperforms existing ViT based video transformers on video action recognition tasks. Due to the effectiveness of cross-stage fusion, our method can achieve comparable performance to ViViT (Arnab et al. (2021)) with much fewer FLOPs in inference process. As a generic module, the cross-stage transformer can also be inserted into other transformer based frameworks. The contributions of this work can be summarized as follows: 1. A novel cross-stage transformer block, consisting of cross-stage self-attention module and cross-stage feature aggregation module, is proposed. Meanwhile, we also establish a separable cross-stage transformer network for video learning. 2. Extensive experiments are conducted to provide sufficient information for better understanding our approach, thereby provide an insight into the design of transformer in video learning. 3. Using the same pre-training dataset as existing transformer methods, our approach outperforms other ViT based video transformers and CNN methods on video action recognition datasets, including Kinetics-400 and Kinetics-600. It can also be added in other frameworks to promote the performance. 2 RELATED WORK Video action recognition. Extensive efforts have been put on video action recognition in recent years. The mainstream approaches usually utilize 2D or 3D based CNN for video feature extraction (Carreira & Zisserman (2017); Christoph & Pinz (2016); Tran et al. (2015); Ji et al. (2012); Tran et al. (2018); Simonyan & Zisserman (2014); Wang et al. (2016)). I3D (Carreira & Zisserman (2017)) is a representative of 3D based methods, which inflates 2D convolution layers into 3D to save the huge computational cost in pre-training 3D networks. Non-Local Neural Networks (Wang et al. (2018)) introduces self-attention into CNN, which can capture long-range dependencies and richer information of input video frames. Guo et al. (2021) proposes a separable self-attention network and achieve excellent performance on video action recognition. SlowFast (Feichtenhofer et al. 
(2019)) proposes a two-pathway network that uses slow and fast temporal rates of video frames at the same time, in which features are fused from the fast pathway into the slow one. X3D (Feichtenhofer (2020)) explores different network settings based on SlowFast, and significantly boosts the performance. Recently, research efforts have been shifting to transformer based methods.

Image transformer networks. The self-attention network (Vaswani et al. (2017)), also known as the transformer, has achieved state-of-the-art performance in the NLP domain (Vaswani et al. (2017); Devlin et al. (2018); Yang et al. (2019); Dai et al. (2019)). This success has inspired more and more research efforts on applying transformers to computer vision tasks. ViT (Dosovitskiy et al. (2020)) and DeiT (Touvron et al. (2020)) successfully show that a pure transformer network can achieve state-of-the-art performance on image classification. In Carion et al. (2020), a transformer-based network is proposed for object detection, and obtains performance comparable to Faster-RCNN (Ren et al. (2015)). SETR (Zheng et al. (2020)) proposes a segmentation transformer network, which achieves desirable performance in semantic segmentation. Wu et al. (2021); Li et al. (2021) incorporate convolutional design into transformer networks by adding local inductive biases. Swin Transformer (Liu et al. (2021)) proposes a hierarchical transformer structure to flexibly model feature representations at various scales. These works showcase the potential of transformers in the vision domain.

Video transformer networks. With the achievements of transformers in the image domain, transformer networks for video have also appeared. VTN (Neimark et al. (2021)) proposed a generic framework for video recognition, which consists of a 2D spatial backbone for feature extraction, a temporal attention-based encoder for modeling temporal dependencies of the spatial features, and an MLP head for classification. TimeSformer (Bertasius et al. (2021)) adapted the image transformer (Dosovitskiy et al. (2020)) architecture to video, and proposed several different self-attention schemes for transformer network design. STAM (Sharir et al. (2021)) presented a spatial-temporal transformer network, which processes sampled frames by a spatial transformer and a temporal transformer sequentially. ViViT (Arnab et al. (2021)) also proposed a pure-transformer architecture for video classification, and developed several variants which can separate the transformer's self-attention along spatial and temporal dimensions. There are also some works on adding shortcuts between transformer blocks in the NLP and image domains to evolve the features (Wang et al. (2021); He et al. (2020)). Our work is inspired by these works but is more challenging: since video learning needs to capture more complex information from the spatial and temporal dimensions, simple shortcuts cannot work efficiently in existing video transformers.

3 PROPOSED METHOD

In Sec. 3.1, we introduce the video learning process of the cross-stage transformer network. Then in Sec. 3.2, we explain the proposed cross-stage self-attention (CSSA) in detail. Finally in Sec. 3.3, the cross-stage feature aggregation module (FAM) is described.

3.1 CROSS-STAGE TRANSFORMER NETWORK

The cross-stage transformer (CSTransformer) network is illustrated in Figure 1. We explain each component of the workflow as follows. Input video clips. We employ ViT (Dosovitskiy et al. (2020)) as our baseline by extending transformer blocks to the temporal dimension.
Then we build up the video learning network by adding cross-stage attention and feature fusion. Let $X \in \mathbb{R}^{B \times C \times T \times H \times W}$ be the input video clip, where $B$ denotes the batch size, $C$ the number of input channels, and $T$ the length of the clip. $W$ and $H$ denote the width and height of the input frames respectively; we use constant $W$ and $H$ in our experiments.

Patch embedding. In order to convert input frames into spatial patches, we first reshape $X$ as $X \in \mathbb{R}^{(B \times T) \times C \times H \times W}$, then split $X$ into $P$ non-overlapping patches. The size of each patch is $M \times M$, and $P = (H \times W)/M^2$. A linear layer is employed to change the channels of each patch, after which the shape of the embedding is $V \in \mathbb{R}^{(B \times T) \times C' \times \frac{H}{M} \times \frac{W}{M}}$, where $C'$ represents the channel dimension after the linear layer. After that, we flatten the embedding $V$ along the spatial dimensions and transpose the last two dimensions, resulting in $V \in \mathbb{R}^{(B \times T) \times P \times C'}$.

Classification token. After converting $X$ into the patch embedding $V$, we initialize a classification token $V_{cls} \in \mathbb{R}^{1 \times 1 \times C'}$ as 0, and repeat the classification token $V_{cls}$ along the first dimension of $V$, i.e., $V_{cls} \in \mathbb{R}^{(B \times T) \times 1 \times C'}$.

Position encoding. In this step, the spatial position embedding is first added to the classification token $V_{cls}$. This operation is formulated in equation (2), where $P_s \in \mathbb{R}^{1 \times (1+P) \times C'}$ denotes the spatial position embedding. In equation (1), $P_s^{cls} \in \mathbb{R}^{1 \times 1 \times C'}$ and $P_s^{V} \in \mathbb{R}^{1 \times P \times C'}$ are used to update the classification token $V_{cls}$ and the tokens $V$ respectively, and $\mathrm{Concat}$ denotes the concatenation operation.

$$P_s = \mathrm{Concat}[P_s^{cls}, P_s^{V}] \quad (1)$$

$$V_{cls} = V_{cls} + P_s^{cls} \quad (2)$$

Through equation (2), spatial position information is combined with the classification token $V_{cls}$. Since video clips contain temporal correlations, we also introduce a temporal position embedding $P_t \in \mathbb{R}^{T \times 1 \times C'}$ together with $V$ and $P_s$. Spatial and temporal position information are fused into the patch embedding $V$ through equation (3). Finally, as shown in equation (4), the classification token $V_{cls}$ is appended to the patch embedding $V$ to form $V_0 \in \mathbb{R}^{(B \times T) \times (1+P) \times C'}$, and $V_0$ is fed into the cross-stage transformer as the input embedding sequence.

$$V = V + P_s^{V} + P_t \quad (3)$$

$$V_0 = \mathrm{Concat}[V_{cls}, V] \quad (4)$$

Cross-stage structure. Our proposed method consists of several spatial transformer blocks (STBs) and temporal transformer blocks (TTBs). Each STB/TTB consists of layer normalisation (LN) (Ba et al. (2016)), multi-head spatial self-attention (MSSA)/multi-head temporal self-attention (MTSA), and MLP blocks. MSSA computes the self-attention of spatial patches within each frame to handle the relationship between objects and scenes, similar to MSA in ViT (Dosovitskiy et al. (2020)), while MTSA mainly computes the self-attention of co-located patches along the temporal dimension, so that the temporal relationships between frames can be captured. Note that the inputs to MSSA and MTSA are reshaped as $\mathbb{R}^{(B \times T) \times (1+P) \times C'}$ and $\mathbb{R}^{(B \times (1+P)) \times T \times C'}$ respectively. The operations of STB and TTB are formulated in equations (5) and (6), where $L$ represents the total number of blocks in the cross-stage transformer and $V_i$ is the output of the $i$th transformer block. MSA represents the multi-head self-attention process, which covers MSSA for spatial transformer blocks and MTSA for temporal transformer blocks, and the MLP contains two linear layers with a GELU non-linearity.

$$V_i' = \mathrm{MSA}(\mathrm{LN}(V_{i-1})) + V_{i-1}, \quad i = 1, \ldots, L \quad (5)$$

$$V_i = \mathrm{MLP}(\mathrm{LN}(V_i')) + V_i', \quad i = 1, \ldots, L \quad (6)$$

Features from the different spatial/temporal transformer blocks then go through FAM for cross-stage fusion, as described in equation (7), where $Y$ is the aggregated feature. The details of cross-stage self-attention and feature aggregation are clarified in Sec. 3.2 and Sec. 3.3.

$$Y = \mathrm{FAM}(V_i), \quad i = 1, \ldots, L \quad (7)$$

Figure 3: The illustration of the cross-stage feature aggregation module (FAM).

MLPs in STB and TTB. MLPs in ViT (Dosovitskiy et al. (2020)) usually contain two fully connected (fc) layers. Let $d$ denote the dimension of the input feature. The first fc layer expands the dimension $d$ to $4 \times d$. Our STB follows this style, while TTB keeps the original dimension; we find that this design achieves a better accuracy-computation trade-off. Experiments with different configurations are summarized in table 1d. In the second fc layer, the dimension is changed back to $d$.

MLP head for classification. Finally, the aggregated feature $Y$ from FAM goes through an MLP head consisting of an LN layer and a linear layer for the final video class prediction.
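To make the tokenization pipeline of this subsection concrete, the following is a minimal PyTorch sketch of the patch embedding, classification token, and position encoding steps (equations (1)-(4)). It is an illustrative sketch, not the authors' implementation: the module name, default sizes, and zero-initialized embeddings are our own assumptions.

```python
import torch
import torch.nn as nn

class VideoPatchEmbed(nn.Module):
    def __init__(self, in_chans=3, patch_size=16, embed_dim=768,
                 frame_size=224, num_frames=8):
        super().__init__()
        self.num_patches = (frame_size // patch_size) ** 2   # P
        # Linear projection of each M x M patch, implemented as a strided conv.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)
        # Spatial position embedding P_s covers the cls token plus P patches.
        self.pos_spatial = nn.Parameter(
            torch.zeros(1, 1 + self.num_patches, embed_dim))
        # Temporal position embedding P_t, one vector per frame.
        self.pos_temporal = nn.Parameter(
            torch.zeros(num_frames, 1, embed_dim))
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))

    def forward(self, x):
        # x: (B, C, T, H, W) -> fold time into the batch: (B*T, C, H, W)
        B, C, T, H, W = x.shape
        x = x.permute(0, 2, 1, 3, 4).reshape(B * T, C, H, W)
        v = self.proj(x)                           # (B*T, C', H/M, W/M)
        v = v.flatten(2).transpose(1, 2)           # (B*T, P, C')
        # Equation (3): add spatial and temporal position embeddings.
        v = v + self.pos_spatial[:, 1:, :]
        v = v + self.pos_temporal.repeat(B, 1, 1)  # (B*T, 1, C'), broadcasts over P
        # Equation (2): the cls token receives its own spatial position embedding.
        cls = (self.cls_token + self.pos_spatial[:, :1, :]).expand(B * T, -1, -1)
        # Equation (4): prepend the cls token -> V_0 of shape (B*T, 1+P, C')
        return torch.cat([cls, v], dim=1)
```

As a quick shape check, `VideoPatchEmbed()(torch.randn(2, 3, 8, 224, 224))` returns a tensor of shape (16, 197, 768), matching $V_0 \in \mathbb{R}^{(B \times T) \times (1+P) \times C'}$ with $B{=}2$, $T{=}8$, $P{=}196$.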
3.2 CROSS-STAGE SELF-ATTENTION

The proposed cross-stage self-attention (CSSA) approach is simple yet efficient. The purpose of this design is to progressively fuse the self-attention from different stages to achieve better attention maps. As shown in Figure 2, the self-attention map from each STB/TTB is first multiplied element-wise by a corresponding learnable ratio $\alpha$, which dynamically adjusts the scale of the corresponding self-attention. The scaled self-attention is then added to the self-attention of the next stage. The whole process is defined in equation (8):

$$\mathrm{CrossA}_i = \mathrm{Softmax}(A_i + \alpha_i \cdot A_{i-1}), \quad i = 1, \ldots, L \quad (8)$$

where $\mathrm{CrossA}_i$ and $A_i$ represent the cross-stage self-attention and the self-attention of the $i$th transformer block respectively. When $i$ equals 1, the cross-stage self-attention is the original self-attention $A_1$. $\alpha_i$ is the learnable ratio of the $i$th transformer block and $(\cdot)$ is the element-wise product. Note that $A_i$ is the pairwise similarity derived from the multiplication of the query and key matrices. $\mathrm{CrossA}_i$ is then multiplied with the value matrix to produce the output. Our experiments demonstrate the effectiveness of this module in both objective and subjective measurements.

3.3 CROSS-STAGE FEATURE AGGREGATION

The cross-stage feature aggregation module (FAM) provides a global path for the features from different stages to better capture contextual information. The details of FAM are shown in Figure 3; only 4 transformer blocks are used for illustration. Specifically, the feature from each transformer block is multiplied element-wise by a corresponding learnable parameter, which scales its input feature globally. The scaled output is then fed into a norm layer; here we use an LN layer for normalization. After that, all normalized results are fused together to form the output of FAM. It is noteworthy that our proposed cross-stage transformer block can be easily implemented by introducing only a few additional parameters, whose cost in complexity is negligible. The fusion process is as follows:

$$Y = \sum_{i=1}^{L-1} \mathrm{LN}(\beta_i \cdot V_i) + V_L \quad (9)$$

where $\beta_i$ is the learnable ratio for the $i$th transformer block and $Y$ is the aggregated feature.
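Before turning to the experiments, here is a hedged PyTorch sketch of the two modules just described: a single-head version of CSSA (equation (8)) and FAM (equation (9)). The class names, the zero initialization of $\alpha_i$ and $\beta_i$, and the shape-mismatch guard are our assumptions; the paper itself uses the multi-head form (see equations (12)-(14) in the appendix).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossStageAttention(nn.Module):
    """Single-head sketch of CSSA (equation (8))."""
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.alpha = nn.Parameter(torch.zeros(1))    # learnable ratio alpha_i
        self.scale = dim ** -0.5

    def forward(self, x, prev_attn=None):
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale       # A_i
        if prev_attn is not None and prev_attn.shape == attn.shape:
            attn = attn + self.alpha * prev_attn            # A_i + alpha_i * A_{i-1}
        out = F.softmax(attn, dim=-1) @ v                   # CrossA_i applied to values
        return out, attn    # A_i is forwarded to the next block (A_0 is None)

class FAM(nn.Module):
    """Cross-stage feature aggregation (equation (9))."""
    def __init__(self, dim, num_blocks):
        super().__init__()
        self.betas = nn.Parameter(torch.zeros(num_blocks - 1))   # beta_i
        self.norms = nn.ModuleList(
            nn.LayerNorm(dim) for _ in range(num_blocks - 1))

    def forward(self, feats):
        # feats: list [V_1, ..., V_L] of per-block outputs
        y = feats[-1]                                # V_L
        for beta, norm, v in zip(self.betas, self.norms, feats[:-1]):
            y = y + norm(beta * v)                   # sum of LN(beta_i * V_i)
        return y
```

Starting $\alpha_i$ and $\beta_i$ at zero makes the network initially behave like the plain baseline, so the cross-stage paths are learned gradually; the paper does not specify its initialization, so this is purely a design choice of the sketch.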
4 EXPERIMENTS

In this section, we clarify the relevant experimental settings and evaluate our proposed approach on several datasets to validate its effectiveness. We introduce the datasets for evaluation in Sec. 4.1. Then we present the implementation details of our approach in Sec. 4.2. Extensive ablation studies are conducted in Sec. 4.3 for a full understanding of the proposed approach. In Sec. 4.4, we visualize the self-attention maps from our approach and the baseline to better understand the efficiency of the cross-stage transformer. Finally, we compare our approach with other state-of-the-art methods in Sec. 4.5.

4.1 DATASETS

We evaluate our approach on two large-scale video action recognition datasets, i.e., Kinetics-400 (Kay et al. (2017)) and Kinetics-600 (Carreira et al. (2018)). The details of the datasets are described below.

Kinetics-400 dataset. The Kinetics-400 dataset consists of training, validation and testing splits. Specifically, it contains 246,536 training videos and 19,761 validation videos across 400 human action categories, extracted from original YouTube videos. However, due to expired YouTube links, only 234,584 videos of the training split are available. Videos in Kinetics are relatively long and complex, and are trimmed to around 10 seconds.

Kinetics-600 dataset. The Kinetics-600 dataset follows the same style as Kinetics-400, except that it extends the 400 categories to 600, and its training split consists of 366,016 videos. We likewise use the training and validation splits for model training and evaluation.

4.2 IMPLEMENTATION DETAILS

Network structure. For all experiments, we adopt the "Base" architecture of the ViT model (Dosovitskiy et al. (2020)) with temporal extension as our baseline, which is pre-trained on the ImageNet dataset (Krizhevsky et al. (2012)). For fair comparison, we only include approaches using the same pre-training dataset (i.e., ImageNet-21K (Deng et al. (2009))). The structure of the transformer layers in ViT-Base is the same as that of STB in the CSTransformer network. We vary the numbers of STBs and TTBs, then evaluate these variants to showcase the impact of layer allocation on our design. The Top-1 accuracies of these variants, i.e., CSTransformer-V1, CSTransformer-V2 and CSTransformer-V3, are reported in table 1a and explained in detail in the ablation study.

Data processing. In our experiments, we sample 8, 16 and 32 frames with temporal strides of 32, 16 and 8 respectively as input clips. The sampled input clips are processed by color normalization, random scale jittering and uniform cropping. The scale jittering range is [256, 320] and the uniform cropping slices frames into 3 spatial crops (top left, center and bottom right) of size 224 × 224. The patch size is 16 × 16.

Training details. For all experiments, we use 8 × NVIDIA V100 devices. The initial learning rate is 0.005, and the total number of epochs is 18. We use an SGD optimizer with a weight decay of $10^{-4}$ and momentum of 0.9 for training. The learning rate is decayed by a factor of 10 at epochs 5, 14 and 16.

Inference settings. Whereas most existing methods use 10 temporal clips with 3 spatial crops (top-left, center and bottom-right) for inference, we only use 1 temporal clip (sampled from the middle of the video) with 3 spatial crops as the default setting. The final prediction is the average of the softmax scores of all predictions.
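As a concrete illustration of this inference protocol, the sketch below averages softmax scores over all (temporal clip, spatial crop) views, e.g., 1 × 3 by default, or 4 × 3 and 10 × 3 in the ablation that follows. The `sample_clips` and `three_crop` helpers are hypothetical placeholders, not the authors' code.

```python
import torch

@torch.no_grad()
def multi_view_predict(model, video, sample_clips, three_crop):
    """video: (C, T_total, H, W). Returns class probabilities averaged
    over all (temporal clip, spatial crop) views, e.g. 1 x 3 or 4 x 3."""
    scores = []
    for clip in sample_clips(video):          # e.g. one clip from the middle
        for crop in three_crop(clip):         # top-left, center, bottom-right
            logits = model(crop.unsqueeze(0)) # add a batch dimension
            scores.append(logits.softmax(dim=-1))
    return torch.stack(scores).mean(dim=0)    # final averaged prediction
```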
4.3 ABLATION STUDIES

In this section, we conduct various ablation studies on Kinetics-400, which allow us to better understand the effects of the different components of CSTransformer.

Cross-stage transformer. In table 2b, we report the ablation study of the main components of the cross-stage transformer, i.e., cross-stage self-attention (CSSA) and the feature aggregation module (FAM). We also show the result of the baseline, which employs the separable spatial and temporal transformer structure depicted in Figure 1 (b) without cross-stage operations. From the table, we can see that both CSSA and FAM help improve the performance. When they are used together, i.e., the whole cross-stage transformer, the performance is boosted from 77.8% to 78.7%.

Model variants. We stack different numbers of STBs and TTBs to form CSTransformer, i.e., CSTransformer-V1, CSTransformer-V2 and CSTransformer-V3. The length of the input clips is 8. Detailed comparisons of the various settings are shown in table 1a. Since the CSTransformer-V2 structure obtains the best accuracy-computation trade-off, we employ it in the other experiments.

Does positional encoding help? To further understand the effect of spatial and temporal position encoding on CSTransformer, we evaluate CSTransformer-V2 with an input clip length of 8. The results of using position encoding can be seen in table 1b. We observe that adding the spatial position embedding improves the model's performance from 77.6% to 78.4%, and introducing the temporal embedding further boosts it to 78.7%.

The effect of input clip length. Different clip lengths can affect the performance of the proposed approach. We compare three clip lengths, i.e., 8, 16 and 32. The performance of CSTransformer-V2 is illustrated in table 1c. As one would reasonably expect, the model's performance increases as the clip length grows.

MLP in TTB. As mentioned before, in TTB the first fc layer of the MLP does not expand the original dimension, which differs from the original ViT (Dosovitskiy et al. (2020)) design. We compare several expansion ratios, including 1×, 2× and 4×, in table 1d, and find that expanding the dimension in the cross-stage transformer does not help. We therefore keep the original dimension for TTB as in ViT.

How do inference views influence performance? In video learning experiments, one samples x × y video clips for evaluation, where x and y denote the numbers of temporal clips and spatial crops. We wonder how this inference sampling strategy influences the model's performance, so we evaluate multiple inference settings, including 1 × 3, 4 × 3 and 10 × 3. The results of CSTransformer-V2, trained with an input clip length of 8, are shown in table 2a.

4.4 VISUALIZATION

To intuitively understand the proposed method, we visualize the self-attention maps of the cross-stage transformer in this section. Figure 4 shows the visualization for the cross-stage transformer and the baseline on video clips from Kinetics-400. We can observe that the proposed approach pays more attention to areas such as the hands and the bee box, which are very important for understanding the video contents. It is also interesting to see that our approach shows much less attention on irrelevant regions such as the background. We conjecture that cross-stage self-attention and feature aggregation propagate important semantic information across different transformer blocks; as a result, the attention on important areas is gradually evolved and highlighted.
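The paper does not detail how these attention maps are rendered. A common recipe, and purely our assumption here, is to take the classification-token row of a spatial block's attention, average it over heads, and upsample it to the frame resolution:

```python
import torch
import torch.nn.functional as F

def cls_attention_map(attn, grid_hw, patch_size=16):
    """attn: (B, heads, 1+P, 1+P) attention of a spatial transformer block.
    Returns (B, 1, H, W) heat maps over the input frames."""
    h, w = grid_hw                              # e.g. (14, 14) for 224/16
    cls_to_patches = attn[:, :, 0, 1:]          # cls-token row: (B, heads, P)
    amap = cls_to_patches.mean(dim=1)           # average over heads: (B, P)
    amap = amap.reshape(-1, 1, h, w)
    amap = F.interpolate(amap, scale_factor=patch_size,
                         mode="bilinear", align_corners=False)
    # Normalize each map to [0, 1] so brighter pixels mean stronger attention.
    amap = amap - amap.amin(dim=(2, 3), keepdim=True)
    return amap / (amap.amax(dim=(2, 3), keepdim=True) + 1e-6)
```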
4.5 COMPARISON WITH THE STATE-OF-THE-ART

In this section, we compare our method with several state-of-the-art approaches in terms of accuracy metrics and inference cost over the total number of spatial and temporal views. We employ an input frame length of 32 in these evaluations. For fair comparison, we only report the results of transformer methods using the same pre-training dataset, i.e., ImageNet-21K. Moreover, since our method is implemented on the ViT structure, we only compare against video transformers based on ViT, i.e., VTN (Neimark et al. (2021)), ViViT (Arnab et al. (2021)) and TimeSformer (Bertasius et al. (2021)), to demonstrate the effectiveness of our method more clearly. There are also other video transformers (Liu et al. (2021); Fan et al. (2021)) which differ substantially from the original ViT structure and report high performance; we will add our approach to these frameworks for comparison in the future. Note that ViT-L-ViViT with a crop size of 320 × 320 (total inference cost 3992 GFLOPs × 4 × 3 ≈ 47.9 TFLOPs) is the version compared in our experiments.

Kinetics-400 dataset. The comparison results on Kinetics-400 are shown in table 3. In addition to accuracy metrics, we also report inference views and inference cost in terms of TFLOPs. With an inference view of 1 × 3, our approach achieves 81.2% top-1 accuracy and 94.8% top-5 accuracy. With 4 × 3 views, CSTransformer outperforms existing CNN and ViT based transformer approaches. Our approach achieves performance comparable to ViT-L-ViViT at only 8.6% of its inference cost, since we use fewer views (1×3 vs. 4×3) and fewer layers (ViT-Base vs. ViT-Large).

Kinetics-600 dataset. We also evaluate our proposed approach on Kinetics-600. The results are shown in table 4. The CSTransformer network achieves superior performance as well. Furthermore, our approach consumes much less inference cost than other ViT based transformers (Bertasius et al. (2021); Arnab et al. (2021)) under the same inference views.

5 CONCLUSION

In this paper, we propose a novel cross-stage transformer network for video learning, which can effectively learn video representations. Specifically, we design a CSTransformer block which consists of a cross-stage self-attention module (CSSA) and a cross-stage feature aggregation module (FAM). We then build up a separable CSTransformer network, in which spatial CSTransformer blocks and temporal CSTransformer blocks are sequentially stacked. Extensive experiments show that our approach outperforms existing state-of-the-art CNN and ViT based transformer methods on video action recognition tasks. Due to the effectiveness of the CSTransformer block, our method achieves performance comparable to ViViT with much fewer inputs and FLOPs during inference. Since our proposed CSSA and FAM act as independent modules, they can also be added to other video transformer frameworks.

A APPENDIX

A.1 CROSS-STAGE SELF-ATTENTION

In this section, we further clarify the principles of the proposed cross-stage self-attention. Mainstream multi-head self-attention was proposed in Vaswani et al. (2017) and has been adopted in transformer networks. We can formulate the process as follows:

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h) W^O, \quad (10)$$

where $\mathrm{head}_j = \mathrm{Attention}(Q W_j^Q, K W_j^K, V W_j^V)$ $(1 \le j \le h)$, and $h$ is the total number of heads. $Q$, $K$ and $V$ denote the query, key and value matrices respectively. $W^O$ is the linear projection applied to the concatenation of the heads' outputs, and $W_j^Q$, $W_j^K$, $W_j^V$ are the linear projections of the query, key and value matrices for the $j$th head. The attention function can then be written as:

$$\mathrm{Attention}(\hat{Q}, \hat{K}, \hat{V}) = \mathrm{Softmax}\!\left(\frac{\hat{Q}\hat{K}^{T}}{\sqrt{d_k}}\right)\hat{V}, \quad (11)$$

in which $\hat{Q}$, $\hat{K}$ and $\hat{V}$ are the query, key and value matrices after linear projection, and $d_k$ denotes the dimension of the input $\hat{Q}$ and $\hat{K}$ matrices. The attention weight $\frac{\hat{Q}\hat{K}^{T}}{\sqrt{d_k}}$ is the pairwise similarity between the query and key matrices, which is forwarded progressively in the proposed CSTransformer structure.

$$\mathrm{Cross\_MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{c\_head}_1, \ldots, \mathrm{c\_head}_h) W^O. \quad (12)$$

In equation (12), $\mathrm{c\_head}_j = \mathrm{Cross\_Attention}(Q W_j^Q, K W_j^K, V W_j^V)$. The cross-stage self-attention of the $i$th $(1 \le i \le n)$ transformer block is formulated in equations (13) and (14), where $n$ denotes the total number of transformer blocks.

$$\mathrm{Cross\_Attention}(\hat{Q}_i, \hat{K}_i, \hat{V}_i) = \mathrm{Softmax}(A_i + \alpha_i * A_{i-1})\hat{V}_i, \quad (13)$$

$$A_i = \frac{\hat{Q}_i \hat{K}_i^{T}}{\sqrt{d_k}}, \quad (14)$$

where $\hat{Q}_i$, $\hat{K}_i$, $\hat{V}_i$ are the linearly projected query, key and value matrices of the $i$th transformer block, $*$ denotes the element-wise product, and $\alpha_i$ represents the learnable ratio of the $i$th block. We adopt the multi-head cross-stage self-attention, namely $\mathrm{Cross\_MultiHead}(Q, K, V)$, as the self-attention output. Note that $A_i$ should have the same shape as $A_{i-1}$; otherwise, we use $\mathrm{MultiHead}(Q, K, V)$ as the output. $A_0 = 0$.
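To connect these equations to code, below is a hedged multi-head sketch of equations (12)-(14), extending the single-head version above. Head splitting follows the standard transformer recipe; all names and defaults are our own assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossMultiHeadAttention(nn.Module):
    def __init__(self, dim=768, num_heads=12):
        super().__init__()
        assert dim % num_heads == 0
        self.h, self.dk = num_heads, dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)           # W^Q_j, W^K_j, W^V_j fused
        self.proj = nn.Linear(dim, dim)              # W^O
        self.alpha = nn.Parameter(torch.zeros(1))    # alpha_i

    def forward(self, x, prev_attn=None):
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.h, self.dk)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)          # each (B, h, N, dk)
        attn = (q @ k.transpose(-2, -1)) / self.dk ** 0.5    # A_i, eq. (14)
        if prev_attn is not None and prev_attn.shape == attn.shape:
            attn = attn + self.alpha * prev_attn      # eq. (13); A_0 = 0
        out = F.softmax(attn, dim=-1) @ v             # (B, h, N, dk)
        out = out.transpose(1, 2).reshape(B, N, C)    # concatenate heads
        return self.proj(out), attn                   # forward A_i to next block
```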
A.2 CSTRANSFORMER STRUCTURE

To be clearer, we explain the details of the CSTransformer structure. We adopt "ViT-Base" (Dosovitskiy et al. (2020)) as our baseline. The detailed settings of "CSTransformer-V1", "CSTransformer-V2" and "CSTransformer-V3" are shown in table 5. The embedding dimension is 768; the number of heads is 12; the MLP sizes of STB and TTB are 3072 and 768 respectively.

A.3 MORE EXPERIMENTAL ANALYSIS

Here, we provide more experimental analysis and insights. The default dataset is Kinetics-400 (Kay et al. (2017)). To further analyze the influence of the proposed cross-stage self-attention and features, we show comparison results between the baseline and CSTransformer in figure 5. "Baseline-V2" has the same structure as "CSTransformer-V2", except that it does not adopt cross-stage self-attention and features. Note that in the left figure, we report top-1 accuracy on the validation set during training, sampling only one clip for inference at each epoch. In the right figure, we test the models with a view of (1 × 3) after training. As we can see, the CSTransformer structure consistently achieves higher performance than the baseline as training progresses. Furthermore, even with different input clip lengths, the CSTransformer structure performs better than the baseline model.

A.4 MORE VISUALIZATION RESULTS

In this section, we provide more self-attention maps for visualization. The sampled frame clips are all from the Kinetics-400 dataset.

Visualization for comparison. The chosen models are "Baseline-V2" and "CSTransformer-V2", as shown in Figures 6 and 7. The 1st row shows the original frame clips; the 2nd and 3rd rows show the self-attention maps of "Baseline-V2" and "CSTransformer-V2" respectively. Note that brighter areas indicate where more attention is focused. We can clearly observe that the self-attention maps from the CSTransformer structure focus more on important objects and motion areas, whereas the self-attention maps from the baseline may focus on irrelevant regions.

Self-attention maps of CSTransformer. We show original video clips and their self-attention maps from the proposed CSTransformer-V2 in figures 8, 9 and 10. The 1st and 2nd rows of each figure are the original frame clips and the self-attention maps of "CSTransformer-V2" respectively.
1. What are the key contributions of the paper in terms of improving video transformer performance?
2. What are the strengths and weaknesses of the proposed approach compared to existing works?
3. How does the reviewer assess the significance and impact of the proposed improvements?
4. What are the limitations of the paper regarding its claims and experimental setup?
5. How could the authors improve their work to make it more substantial and convincing?
Summary Of The Paper Review
Summary Of The Paper This paper presents two simple extensions to existing work on Video Transformers that are shown to improve accuracy. The improvements are: (a) previously computed attention masks are added to the one from the current layer and renormalised; (b) features from intermediate layers are aggregated to obtain a final feature for classification. The results show a decent increase in performance over the baseline.

Review On the positive side, the paper presents two very simple improvements to transformers that seem to work. On the negative side, the novelty of the proposed improvements is not so significant, and the accuracy improvement is decent but not impressive. Actually, the paper's main contribution, i.e., cross-stage self-attention, improves accuracy by only 0.5%. Moreover, the authors need to clarify in more detail the differences between their baseline model and ViViT and TimeSformer. This would help readers understand to what extent the improvements come from the baseline and to what extent from the proposed improvements. I think the authors first need to ablate the components of their baseline and then, on top of that, show the impact of their proposed improvements (cross-stage self-attention + feature aggregation). Finally, reporting results on one dataset only is insufficient for a high-quality conference like ICLR.
1. What is the main contribution of the paper, and how does it differ from previous video transformer models such as ViViT and TimeSformer?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novelty and performance improvement compared to other works?
3. How does the reviewer assess the clarity and quality of the paper's content, including the explanation of the core idea and the final output formation?
4. What additional evidence or experiments could support the claim of the paper, especially regarding the layout of the network and the effectiveness of cross-stage self-attention?
5. How does the reviewer evaluate the computational cost comparison between the proposed method and other approaches, and what suggestions do they have for improving this aspect of the paper?
6. Are there any suggestions for additional experiments or evaluations that the author could perform to further validate their claims and improve the overall impact of the paper?
Summary Of The Paper Review
Summary Of The Paper In this paper, the author introduces a new video transformer, which consists of STBs (spatial transformer blocks) and TTBs (temporal transformer blocks). During the transition from one STB (TTB) to its subsequent STB (TTB), a cross-stage self-attention is introduced, which assigns the self-attention from the previous stage to the current stage via a learnable ratio. To formulate the final representation, another learnable ratio is applied to the output of each stage, and the final representation is the weighted average of each stage's output. Finally, the model is evaluated on the Kinetics-400 and 600 datasets.

Review Strengths: The entire paper is well and clearly written. The core idea is easy to follow and understand.

Weaknesses: The proposed algorithm lacks novelty. Compared to previous video transformers (ViViT, TimeSformer), the differences of this work are: 1) it stacks a set of video transformer blocks with spatial attention followed by another set of video transformer blocks with temporal attention, whereas TimeSformer with divided space-time attention alternately adopts spatial and temporal attention; 2) it introduces additional residual connections (with learnable ratios) between attention blocks of adjacent transformer blocks; 3) the final output is a weighted average of the outputs of each transformer block, while TimeSformer takes the output of the last block as the final representation.

This paper would need more evidence to support its claim. For example, it is not clear why we should have the current layout of the network. How is a set of spatial transformer blocks followed by a set of temporal transformer blocks better than an alternating design, considering that the cross-stage self-attention could work in both cases?

The improvement in performance is minor. Take ViViT as the baseline. If we compare the best performance of ViViT and this paper, the improvement is minor (81.3% from ViViT vs. 81.8% from this paper). If we compare the performance using the same input resolution (16x224x224), the performance of this paper is still on par with ViViT (80.6% from ViViT vs. 80.1% from this paper).

In terms of computational cost, it is wrong to compare GFLOPs between two methods with different inference views. As the performance of any method does not increase linearly with the number of inference views, it is not fair to put 4x3 views for ViViT against 1x3 views for this paper. As a result, the number of 8.6% does not make much sense.

At last, I would suggest including MViT [1] as one of the references, which is the SOTA video transformer so far. The proposed work should be evaluated on at least one or two more video benchmarks. The nature of K400 and K600 is the same. I would suggest also including the Something-Something-V2 or Epic-Kitchens datasets.

[1] Multiscale vision transformers, ICCV 2021
ICLR
Title Cross-Stage Transformer for Video Learning

Abstract Transformer networks have proven efficient at modeling long-range dependencies in video learning. However, videos contain rich contextual information in both spatial and temporal dimensions, e.g., scenes and temporal reasoning. In traditional transformer networks, stacked transformer blocks work in a sequential and independent way, which may lead to inefficient propagation of such contextual information. To address this problem, we propose a cross-stage transformer paradigm, which fuses self-attentions and features from different blocks. By inserting the proposed cross-stage mechanism into existing spatial and temporal transformer blocks, we build a separable transformer network for video learning based on the ViT structure, in which self-attentions and features are progressively aggregated from one block to the next. Extensive experiments show that our approach outperforms existing ViT based video transformer approaches with the same pre-training dataset on the mainstream video action recognition datasets Kinetics-400 (top-1 accuracy 81.8%) and Kinetics-600 (top-1 accuracy 84.0%). Due to the effectiveness of the cross-stage transformer, our proposed method achieves comparable performance with other ViT based approaches at a much lower computation cost (e.g., 8.6% of ViViT's FLOPs) in the inference process. As an independent module, our proposed method can be conveniently added to other video transformer frameworks.

1 INTRODUCTION

Convolutional neural networks (CNNs) have been successfully applied to computer vision tasks, such as classification (Krizhevsky et al. (2012); He et al. (2016)), detection (Girshick (2015); Ren et al. (2015)) and segmentation (He et al. (2017)). However, due to their limited receptive field, CNNs lack the ability to model long-range dependencies, which is an obstacle to capturing the spatial and temporal contexts in video learning. To overcome this weakness, the self-attention mechanism has been introduced into CNN structures and obtains excellent performance (Wang et al. (2018); Guo et al. (2021)). Recently, convolution-free transformer structures consisting of self-attention layers (Vaswani et al. (2017)) have also been investigated in the vision domain (Dosovitskiy et al. (2020); Carion et al. (2020)). The transformer achieved extreme success in natural language processing (NLP) (Vaswani et al. (2017); Devlin et al. (2018); Yang et al. (2019); Dai et al. (2019)). The inherently similar requirement between video and language learning, i.e., capturing long-range contextual information, makes people believe that it can also work for video tasks. The first attempt to apply a pure transformer network to vision is the Vision Transformer (ViT) (Dosovitskiy et al. (2020)), which targets image classification. The input images are split into several patches, which are then linearly embedded into tokens for the transformer blocks. A classification head is attached at the top of these transformer blocks for the final prediction. Bertasius et al. (2021) and Arnab et al. (2021) extend the scheme to video learning by adding temporal transformer blocks. Pure transformer networks show comparable performance with CNN based methods, as well as the potential in the vision domain. However, there are still uncertainties in processing video data in a way analogous to language. On one hand, video patches contain rich spatial and temporal contents, so that it is difficult to map them into precise semantic tokens like words.
Thus, the correlations established by transformer blocks may lead to ambiguous semantics. This drawback becomes even worse for videos with complex scenes and actions. On the other hand, the absence of convolutions in a transformer network damages the capturing of local contexts, so that the features built across transformer blocks may suffer from inefficient information propagation.

To tackle the aforementioned problems, we try to re-design the transformer blocks. Inspired by the empirically long-standing principle in CNN based approaches, i.e., that features extracted from different stages can be fused together to improve learning (Lin et al. (2017a); He et al. (2017); Lin et al. (2017b); Redmon & Farhadi (2018)), we expect cross-stage fusion to also help improve the performance of transformers.

Based on the above analysis, we propose a novel cross-stage transformer block which consists of a cross-stage self-attention module (CSSA) and a cross-stage feature aggregation module (FAM). The former aims to progressively enhance the self-attention maps by adding shortcuts between self-attentions from two consecutive transformer blocks. The latter fuses the features from different stages to achieve better outputs. We then build up a separable spatial-temporal transformer network, in which spatial cross-stage transformer and temporal cross-stage transformer blocks are sequentially stacked. Extensive experiments show that, under the same conditions, i.e., base transformer structure and pre-training dataset, our approach outperforms existing ViT based video transformers on video action recognition tasks. Due to the effectiveness of cross-stage fusion, our method achieves comparable performance to ViViT (Arnab et al. (2021)) with much fewer FLOPs in the inference process. As a generic module, the cross-stage transformer can also be inserted into other transformer based frameworks.

The contributions of this work can be summarized as follows:

1. A novel cross-stage transformer block, consisting of a cross-stage self-attention module and a cross-stage feature aggregation module, is proposed. Meanwhile, we also establish a separable cross-stage transformer network for video learning.

2. Extensive experiments are conducted to provide sufficient information for better understanding our approach, thereby providing insight into the design of transformers for video learning.

3. Using the same pre-training dataset as existing transformer methods, our approach outperforms other ViT based video transformers and CNN methods on video action recognition datasets, including Kinetics-400 and Kinetics-600. It can also be added to other frameworks to promote their performance.

2 RELATED WORK

Video action recognition. Extensive efforts have been devoted to video action recognition in recent years. The mainstream approaches usually utilize 2D or 3D based CNNs for video feature extraction (Carreira & Zisserman (2017); Christoph & Pinz (2016); Tran et al. (2015); Ji et al. (2012); Tran et al. (2018); Simonyan & Zisserman (2014); Wang et al. (2016)). I3D (Carreira & Zisserman (2017)) is a representative of 3D based methods, which inflates 2D convolution layers into 3D to save the huge computational cost of pre-training 3D networks. Non-Local Neural Networks (Wang et al. (2018)) introduce self-attention into CNNs, which can capture long-range dependencies and richer information from input video frames. Guo et al. (2021) propose a separable self-attention network and achieve excellent performance on video action recognition. SlowFast (Feichtenhofer et al.
(2019)) proposes a two-pathway network, using slow and fast temporal rates of video frames at the same time, in which features are fused from the fast pathway into the slow one. X3D (Feichtenhofer (2020)) explores different network settings based on SlowFast, and significantly boosts the performance. Recently, research efforts have been shifting to transformer based methods.

Image transformer networks. The self-attention network (Vaswani et al. (2017)), also known as the transformer, has achieved state-of-the-art performance in the NLP domain (Vaswani et al. (2017); Devlin et al. (2018); Yang et al. (2019); Dai et al. (2019)). This success has inspired more and more research efforts on applying transformers to computer vision tasks. ViT (Dosovitskiy et al. (2020)) and DeiT (Touvron et al. (2020)) successfully show that a pure transformer network can achieve state-of-the-art performance in image classification. In Carion et al. (2020), a transformer-based network is proposed for object detection and obtains performance competitive with Faster-RCNN (Ren et al. (2015)). SETR (Zheng et al. (2020)) proposes a segmentation transformer network, which achieves desirable performance in semantic segmentation. Wu et al. (2021); Li et al. (2021) incorporate convolutional design into transformer networks by adding local inductive biases. Swin Transformer (Liu et al. (2021)) proposes a hierarchical transformer structure to flexibly model feature representations at various scales. These works showcase the potential of transformers in the vision domain.

Video transformer networks. With the achievements of transformers in the image domain, transformer networks for video have also appeared. VTN (Neimark et al. (2021)) proposed a generic framework for video recognition, which consists of a 2D spatial backbone for feature extraction, a temporal attention-based encoder for modeling temporal dependencies of the spatial features, and an MLP head for classification. TimeSformer (Bertasius et al. (2021)) adapted the image transformer (Dosovitskiy et al. (2020)) architecture to video, and proposed several different self-attention schemes for transformer network design. STAM (Sharir et al. (2021)) presented a spatial-temporal transformer network, which processes sampled frames by a spatial transformer and a temporal transformer sequentially. ViViT (Arnab et al. (2021)) also proposed a pure-transformer architecture for video classification, and developed several variants which can separate the transformer's self-attention along the spatial and temporal dimensions. There are also some works on adding shortcuts between transformer blocks in the NLP and image domains to evolve the features (Wang et al. (2021); He et al. (2020)). Our work is inspired by these works but is more challenging: since video learning needs to capture more complex information from the spatial and temporal dimensions, simple shortcuts cannot work efficiently in existing video transformers.

3 PROPOSED METHOD

In Sec. 3.1, we introduce the video learning process of the cross-stage transformer network. Then in Sec. 3.2, we explain the proposed cross-stage self-attention (CSSA) in detail. Finally in Sec. 3.3, the cross-stage feature aggregation module (FAM) is described.

3.1 CROSS-STAGE TRANSFORMER NETWORK

The cross-stage transformer (CSTransformer) network is illustrated in Figure 1. We explain each component of the workflow as follows.

Input video clips. We employ ViT (Dosovitskiy et al. (2020)) as our baseline by extending transformer blocks to the temporal dimension.
Then we build up the video learning network by adding cross-stage attention and feature fusion. Let $X \in \mathbb{R}^{B\times C\times T\times H\times W}$ be the input video clip, where $B$ denotes the batch size, $C$ the number of input channels, $T$ the length of the clip, and $W$ and $H$ the width and height of the input frames respectively. We use constant $W$ and $H$ in our experiments.

Patch embedding. In order to convert input frames into spatial patches, we first reshape $X$ as $X \in \mathbb{R}^{(B\times T)\times C\times H\times W}$, then split $X$ into $P$ non-overlapping patches. The size of each patch is $M \times M$, and $P = (H \times W)/M^2$. A linear layer is employed to change the channels of each patch, after which the shape of the embedding is $V \in \mathbb{R}^{(B\times T)\times C'\times \frac{H}{M}\times \frac{W}{M}}$, where $C'$ represents the channel dimension after the linear layer. After that, we flatten the embedding $V$ along the spatial dimensions and transpose the last two dimensions, resulting in an embedding of shape $V \in \mathbb{R}^{(B\times T)\times P\times C'}$.

Classification token. After converting $X$ into the patch embedding $V$, we initialize a classification token $V_{cls} \in \mathbb{R}^{1\times 1\times C'}$ as 0, and repeat the classification token $V_{cls}$ along the first dimension of $V$, i.e., $V_{cls} \in \mathbb{R}^{(B\times T)\times 1\times C'}$.

Position encoding. In this step, the spatial position embedding is first added to the classification token $V_{cls}$. This operation is formulated in equation (2), where $P_s \in \mathbb{R}^{1\times(1+P)\times C'}$ denotes the spatial position embedding. In equation (1), $P_s^{cls} \in \mathbb{R}^{1\times 1\times C'}$ and $P_s^{V} \in \mathbb{R}^{1\times P\times C'}$ are used to update the classification token $V_{cls}$ and the token $V$ respectively, and Concat denotes the concatenation operation.

$$P_s = \mathrm{Concat}[P_s^{cls}, P_s^{V}] \quad (1)$$
$$V_{cls} = V_{cls} + P_s^{cls} \quad (2)$$

Through equation (2), spatial position information is combined with the classification token $V_{cls}$. Since video clips contain temporal correlations, we also introduce a temporal position embedding $P_t \in \mathbb{R}^{T\times 1\times C'}$ together with $V$ and $P_s$. Spatial and temporal position information are fused into the patch embedding $V$ through equation (3). Finally, as shown in equation (4), the classification token $V_{cls}$ is appended to the patch embedding $V$ to form $V_0 \in \mathbb{R}^{(B\times T)\times(1+P)\times C'}$, and $V_0$ is fed into the cross-stage transformer as the input embedding sequence.

$$V = V + P_s^{V} + P_t \quad (3)$$
$$V_0 = \mathrm{Concat}[V_{cls}, V] \quad (4)$$

Cross-stage structure. Our proposed method consists of several spatial transformer blocks (STBs) and temporal transformer blocks (TTBs). An STB/TTB consists of layer normalisation (LN) (Ba et al. (2016)), multi-head spatial self-attention (MSSA)/multi-head temporal self-attention (MTSA) and MLP blocks. MSSA is used to compute the self-attention of spatial patches within each frame to handle the relationship between objects and scenes, which is similar to MSA in ViT (Dosovitskiy et al. (2020)), while MTSA mainly focuses on computing the self-attention of co-located patches along the temporal dimension, so that temporal relationships between frames can be captured. Note that the input shapes for MSSA and MTSA are reshaped as $\mathbb{R}^{(B\times T)\times(1+P)\times C'}$ and $\mathbb{R}^{(B\times(1+P))\times T\times C'}$ respectively.
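Putting the tokenization steps above (patch embedding, classification token, position encoding) together, here is a minimal PyTorch sketch of equations (1)-(4); tensor names mirror the text, but the module itself is our illustrative reconstruction, not the authors' code.

```python
import torch
import torch.nn as nn

class VideoTokenizer(nn.Module):
    """Sketch of patch embedding + position encoding, eqs. (1)-(4)."""
    def __init__(self, C=3, T=8, H=224, W=224, M=16, C_prime=768):
        super().__init__()
        P = (H * W) // (M * M)
        # Patch split + linear projection in one strided convolution.
        self.proj = nn.Conv2d(C, C_prime, kernel_size=M, stride=M)
        self.v_cls = nn.Parameter(torch.zeros(1, 1, C_prime))      # init as 0
        self.P_s = nn.Parameter(torch.zeros(1, 1 + P, C_prime))    # spatial pos. emb.
        self.P_t = nn.Parameter(torch.zeros(T, 1, C_prime))        # temporal pos. emb.

    def forward(self, X):                      # X: (B, C, T, H, W)
        B, C, T, H, W = X.shape
        X = X.permute(0, 2, 1, 3, 4).reshape(B * T, C, H, W)
        V = self.proj(X).flatten(2).transpose(1, 2)    # (B*T, P, C')
        P_cls, P_V = self.P_s[:, :1], self.P_s[:, 1:]  # eq. (1) split
        V_cls = self.v_cls + P_cls                     # eq. (2)
        V = V + P_V + self.P_t.repeat(B, 1, 1)         # eq. (3)
        V_cls = V_cls.expand(B * T, -1, -1)
        return torch.cat([V_cls, V], dim=1)            # eq. (4): (B*T, 1+P, C')
```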
The operations of the STB and TTB are formulated in equations (5) and (6), where $L$ represents the total number of blocks of the cross-stage transformer and $V_i$ is the output of the $i$th transformer block. MSA represents the multi-head self-attention process, which covers MSSA for spatial transformer blocks and MTSA for temporal transformer blocks, and MLP contains two linear layers with a GELU non-linearity.

$$V_i' = \mathrm{MSA}(\mathrm{LN}(V_{i-1})) + V_{i-1}, \quad i = 1, ..., L \quad (5)$$
$$V_i = \mathrm{MLP}(\mathrm{LN}(V_i')) + V_i', \quad i = 1, ..., L \quad (6)$$

Features from different spatial/temporal transformer blocks then go through the FAM for cross-stage fusion, as described in equation (7), where $Y$ is the aggregated feature. The details of cross-stage self-attention and feature aggregation are clarified in Sec. 3.2 and Sec. 3.3.

$$Y = \mathrm{FAM}(V_i), \quad i = 1, ..., L \quad (7)$$

Figure 3: The illustration of the cross-stage feature aggregation module (FAM).

MLPs in STB and TTB. MLPs in ViT (Dosovitskiy et al. (2020)) usually contain two fully connected (fc) layers. Let $d$ denote the dimension of the input feature. The first fc layer expands the dimension $d$ into $4\times d$. Our STB follows this style, while the TTB keeps the original dimension. We find that this design achieves a better accuracy and computation trade-off. Experiments with different configurations are summarized in table 1d. In the second fc layer, the dimension is changed back to $d$.

MLP head for classification. Finally, the aggregated feature $Y$ from the FAM goes through an MLP head consisting of an LN layer and a linear layer for the final video class prediction.

3.2 CROSS-STAGE SELF-ATTENTION

The proposed cross-stage self-attention (CSSA) approach is simple yet efficient. The purpose of this design is to progressively fuse the self-attention from different stages to achieve better attention maps. As shown in Figure 2, the self-attention map from each STB/TTB first performs an element-wise multiplication with a corresponding learnable ratio $\alpha$, which can dynamically adjust the scale of the corresponding self-attention. Then the scaled self-attention is added to the self-attention from the next stage. The whole process can be defined as equation (8):

$$\mathrm{CrossA}_i = \mathrm{Softmax}(A_i + \alpha_i \cdot A_{i-1}), \quad i = 1, ..., L \quad (8)$$

where $\mathrm{CrossA}_i$ and $A_i$ represent the cross-stage self-attention and the self-attention of the $i$th transformer block respectively. When $i$ equals 1, the cross-stage self-attention is the original self-attention $A_1$. $\alpha_i$ is the learnable ratio of the $i$th transformer block and $(\cdot)$ is the element-wise dot product. Note that $A_i$ is the pairwise similarity derived from the multiplication of the query matrix and the key matrix. $\mathrm{CrossA}_i$ is then multiplied with the value matrix as the output. Our experiments demonstrate the effectiveness of this module in both objective and subjective measurements.

3.3 CROSS-STAGE FEATURE AGGREGATION

The cross-stage feature aggregation module (FAM) provides a global path for the features from different stages to better capture contextual information. The details of the FAM are shown in Figure 3; we only use 4 transformer blocks for illustration. Specifically, the feature from each transformer block is multiplied by a corresponding learnable parameter with an element-wise dot product. This parameter scales its input feature globally. The scaled output is then fed into a norm layer; here we use an LN layer for normalization. After that, all normalized results are fused together as the output of the FAM. It is noteworthy that our proposed cross-stage transformer block can be easily implemented by introducing only a few additional parameters, whose complexity is negligible. The fusion process is as follows:

$$Y = \sum_{i=1}^{L-1} \mathrm{LN}(\beta_i \cdot V_i) + V_L \quad (9)$$

where $\beta_i$ is the learnable ratio for the $i$th transformer block and $Y$ is the aggregated feature.
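Equation (9) is a learnable weighted sum of per-block features. A minimal PyTorch sketch of the FAM follows; the scalar-per-block parameterization of β and its zero initialization are our assumptions from the text, so treat this as an illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class FAM(nn.Module):
    """Sketch of cross-stage feature aggregation, eq. (9)."""
    def __init__(self, num_blocks, dim):
        super().__init__()
        # One learnable scale beta_i per block except the last.
        self.beta = nn.Parameter(torch.zeros(num_blocks - 1))
        self.norms = nn.ModuleList(
            nn.LayerNorm(dim) for _ in range(num_blocks - 1))

    def forward(self, block_outputs):
        # block_outputs: list of V_1..V_L, each of shape (..., dim)
        Y = block_outputs[-1]                        # V_L
        for i, V in enumerate(block_outputs[:-1]):
            Y = Y + self.norms[i](self.beta[i] * V)  # LN(beta_i * V_i)
        return Y
```

With β initialized at zero, the module starts as an identity on $V_L$, so the cross-stage path is learned on top of the ordinary last-block output.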
4 EXPERIMENTS

In this section, we clarify the relevant experimental settings and evaluate our proposed approach on several datasets to validate its effectiveness. We introduce the datasets for evaluation in Sec. 4.1. Then we show the implementation details of our approach in Sec. 4.2. Extensive ablation studies are conducted for fully understanding the proposed approach in Sec. 4.3. In Sec. 4.4, we visualize the self-attention maps from our approach and the baseline to better understand the efficiency of the cross-stage transformer. Finally, we compare our approach with other state-of-the-art methods in Sec. 4.5.

4.1 DATASETS

We evaluate our approach on two large-scale video action recognition datasets, i.e., Kinetics-400 (Kay et al. (2017)) and Kinetics-600 (Carreira et al. (2018)). The details of the datasets are described below.

Kinetics-400 dataset. The Kinetics-400 dataset consists of training, validation and testing splits. Specifically, it contains 246,536 training videos and 19,761 validation videos across 400 human action categories, extracted from original YouTube videos. However, due to expired YouTube links, only 234,584 videos of the training split remain available. Videos in Kinetics are relatively long and complex, and are trimmed to around 10 seconds.

Kinetics-600 dataset. The Kinetics-600 dataset follows the same style as the Kinetics-400 dataset, except that it extends the 400 categories to 600, and the training split consists of 366,016 videos. We also use the training and validation splits for training the model and evaluation.

4.2 IMPLEMENTATION DETAILS

Network structure. For all experiments, we adopt the "Base" architecture of the ViT model (Dosovitskiy et al. (2020)) with temporal extension as our baseline, which is pre-trained on the ImageNet dataset (Krizhevsky et al. (2012)). For fair comparison, we only include the approaches using the same pre-training dataset (i.e., ImageNet-21K (Deng et al. (2009))). The structure of the transformer layers in ViT-Base is the same as the STB in the CSTransformer network. We vary the numbers of STBs and TTBs, then evaluate these variants to showcase the impact of layers on our design. The top-1 accuracy of these variants, i.e., CSTransformer-V1, CSTransformer-V2 and CSTransformer-V3, is reported in table 1a and explained in detail in the ablation study.

Data processing. In our experiments, we sample 8, 16 and 32 frames with temporal strides of 32, 16 and 8 respectively as input clips. The sampled input clips are processed by color normalization, random scale jittering and uniform crop. The scale jittering range is [256, 320] and the uniform crop slices frames into 3 spatial crops (top left, center and bottom right) of size 224 × 224. The patch size is 16 × 16.

Training details. For all experiments, we use 8 × NVIDIA V100 devices. The initial learning rate is 0.005, and the total number of epochs is 18. We use an SGD optimizer with weight decay of $10^{-4}$ and momentum of 0.9 for training. The learning rate drops by a factor of 10 at epochs 5, 14 and 16.

Inference settings. Whereas most existing methods use 10 temporal clips with 3 spatial crops (top-left, center and bottom-right) for inference, we only use 1 temporal clip (sampled in the middle of the video) with 3 spatial crops as the default setting. The final prediction is the averaged softmax scores of all predictions.

4.3 ABLATION STUDIES

In this section, we conduct various ablation studies on Kinetics-400, which allow us to better understand the effects of the different components of CSTransformer.

Cross-stage transformer. In table 2b, we report the ablation study of the main components of the cross-stage transformer, i.e., cross-stage self-attention (CSSA) and the feature aggregation module (FAM).
We also show the result of the baseline, which employs the separable spatial and temporal transformer structure depicted in Figure 1 (b) without cross-stage operations. From the table, we can see that both CSSA and FAM help improve the performance. When using them together, i.e., the whole cross-stage transformer, the performance is boosted from 77.8% to 78.7%.

Model variants. We stack different numbers of STBs and TTBs to form CSTransformer-V1, CSTransformer-V2 and CSTransformer-V3. The length of the input clips is 8. The detailed comparisons of the various settings are shown in table 1a. Since the CSTransformer-V2 structure obtains the optimal accuracy and computation trade-off, we employ it in the other experiments.

Does positional encoding help? In order to further understand the effect of spatial and temporal position encoding on CSTransformer, we evaluate CSTransformer-V2 with an input clip length of 8. The results of using position encoding can be seen in table 1b. We observe that adding the spatial position embedding improves the model's performance from 77.6% to 78.4%, and introducing the temporal embedding further boosts its performance to 78.7%.

The effect of input clip length. Different clip lengths can impact the performance of the proposed approach. We compare three clip lengths, i.e., 8, 16 and 32. The performance of CSTransformer-V2 is illustrated in table 1c. A reasonable result can be observed: the model's performance increases as the clip length becomes larger.

MLP in TTB. As mentioned before, in the TTB, the first fc layer in the MLP does not expand the original dimension, which differs from the original ViT (Dosovitskiy et al. (2020)) design. We compare the performance of several expansion ratios, including 1×, 2× and 4×, in table 1d, and find that expanding the dimension in the cross-stage transformer does not help. Therefore we keep the original dimension for the TTB as in ViT.

How do inference views influence performance? In video learning experiments, one needs to sample x × y video clips for evaluation, where x and y denote the number of temporal clips and spatial crops. We wonder how this inference sampling strategy influences the model's performance. Therefore, we evaluate multiple inference settings, including 1 × 3, 4 × 3 and 10 × 3. The results of CSTransformer-V2, which is trained with an input clip length of 8, are shown in table 2a.

4.4 VISUALIZATION

To intuitively understand the proposed method, we visualize the self-attention maps of the cross-stage transformer in this section. Figure 4 shows the visualization for the cross-stage transformer and the baseline on video clips from Kinetics-400. We can observe that the proposed approach pays more attention to areas such as the hands and the bee box, which are very important for understanding the video contents. It is also interesting to see that our approach shows much less attention to non-relevant regions such as the background. We conjecture that cross-stage self-attention and feature aggregation propagate important semantic information across different transformer blocks, so that the attention on important areas is gradually evolved and highlighted.

4.5 COMPARISON WITH THE STATE-OF-THE-ART

In this section, we compare our method with several state-of-the-art approaches in terms of accuracy metrics and inference costs with the total number of spatial and temporal views. We employ an input frame length of 32 in our evaluations.
For fair comparison, we only report transformer methods' results using the same pre-training dataset, i.e., ImageNet-21K. Moreover, since our method is implemented based on the ViT structure, we only compare against video transformers based on ViT, i.e., VTN (Neimark et al. (2021)), ViViT (Arnab et al. (2021)) and TimeSformer (Bertasius et al. (2021)), to demonstrate the effectiveness of our method more clearly. There are also other video transformers (Liu et al. (2021); Fan et al. (2021)) which differ substantially from the original ViT structure and report high performance. We will add our approach to these frameworks for comparison in the future. Note that ViT-L-ViViT with crop size 320 × 320 (total inference cost 3992 GFLOPs × 4 × 3 ≈ 47.9 TFLOPs) is compared in our experiment.

Kinetics-400 dataset. The comparison results on Kinetics-400 are shown in table 3. In addition to accuracy metrics, we also report inference views and inference cost in terms of TFLOPs. When the inference view is 1 × 3, our approach achieves 81.2% top-1 accuracy and 94.8% top-5 accuracy. With 4 × 3 views, CSTransformer outperforms existing CNN and ViT based transformer approaches. Our approach achieves comparable performance with ViT-L-ViViT at only 8.6% of its inference cost, since we use fewer views (1 × 3 vs. 4 × 3) and layers (ViT-Base vs. ViT-Large).

Kinetics-600 dataset. We also evaluate our proposed approach on Kinetics-600. The results are shown in table 4. The CSTransformer network achieves superior performance as well. Furthermore, our approach consumes much less inference cost than other ViT based transformers (Bertasius et al. (2021); Arnab et al. (2021)) under the same inference views.

5 CONCLUSION

In this paper, we propose a novel cross-stage transformer network for video learning, which can effectively learn video representations. Specifically, we design a CSTransformer block which consists of a cross-stage self-attention module (CSSA) and a cross-stage feature aggregation module (FAM). We then build up a separable CSTransformer network, in which spatial CSTransformer blocks and temporal CSTransformer blocks are sequentially stacked. Extensive experiments show that our approach outperforms existing state-of-the-art CNN and ViT based transformer methods on video action recognition tasks. Due to the effectiveness of the CSTransformer block, our method achieves comparable performance to ViViT with much fewer inputs and FLOPs in the inference process. Since the proposed CSSA and FAM act as independent modules, they can also be added to other video transformer frameworks.

A APPENDIX

A.1 CROSS-STAGE SELF-ATTENTION

In this section, we further clarify the principles of the proposed cross-stage self-attention. Mainstream multi-head self-attention was proposed in Vaswani et al. (2017) and has been adopted in transformer networks. We can formulate the process as follows.

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, ..., \mathrm{head}_h)W^O, \quad (10)$$

where $\mathrm{head}_j = \mathrm{Attention}(QW_j^Q, KW_j^K, VW_j^V)\ (1 \le j \le h)$. $h$ is the total number of heads. $Q$, $K$ and $V$ are the query, key and value matrices respectively. $W^O$ is the linear projection for the concatenation of the heads' outputs. $W_j^Q$, $W_j^K$, $W_j^V$ are the linear projections of the query, key and value matrices for the $j$th head. The attention function can then be written as:

$$\mathrm{Attention}(\hat{Q}, \hat{K}, \hat{V}) = \mathrm{Softmax}\left(\frac{\hat{Q}\hat{K}^T}{\sqrt{d_k}}\right)\hat{V}, \quad (11)$$

in which $\hat{Q}$, $\hat{K}$ and $\hat{V}$ are the query, key and value matrices converted by linear projection. $d_k$ denotes the dimension of the input $\hat{Q}$ and $\hat{K}$ matrices.
The attention weight $\frac{\hat{Q}\hat{K}^T}{\sqrt{d_k}}$ is the pairwise similarity between the query and key matrices, which is forwarded progressively in the proposed CSTransformer structure.

$$\mathrm{Cross\_MultiHead}(Q, K, V) = \mathrm{Concat}(c\_\mathrm{head}_1, ..., c\_\mathrm{head}_h)W^O. \quad (12)$$

In equation (12), $c\_\mathrm{head}_j = \mathrm{Cross\_Attention}(QW_j^Q, KW_j^K, VW_j^V)$. The cross-stage self-attention of the $i$th ($1 \le i \le n$) transformer block is formulated in equations (13) and (14), where $n$ denotes the total number of transformer blocks.

$$\mathrm{Cross\_Attention}(\hat{Q}_i, \hat{K}_i, \hat{V}_i) = \mathrm{Softmax}(A_i + \alpha_i * A_{i-1})\hat{V}_i, \quad (13)$$

$$A_i = \frac{\hat{Q}_i\hat{K}_i^T}{\sqrt{d_k}}, \quad (14)$$

where $\hat{Q}_i$, $\hat{K}_i$, $\hat{V}_i$ are the linearly projected query, key and value matrices of the $i$th transformer block, $*$ denotes element-wise multiplication, and $\alpha_i$ represents the learnable ratio of the $i$th block. We adopt multi-head cross-stage self-attention, namely $\mathrm{Cross\_MultiHead}(Q, K, V)$, for the self-attention output. Note that $A_i$ must have the same shape as $A_{i-1}$; otherwise, we use $\mathrm{MultiHead}(Q, K, V)$ as the output. $A_0 = 0$.

A.2 CSTRANSFORMER STRUCTURE

To be more clear, we explain the details of the CSTransformer structure. We adopt "ViT-Base" (Dosovitskiy et al. (2020)) as our baseline. Detailed settings of "CSTransformer-V1", "CSTransformer-V2" and "CSTransformer-V3" are shown in table 5. The embedding dimension is 768; the head number is 12; the MLP sizes of STB and TTB are 3072 and 768 respectively.

A.3 MORE EXPERIMENTAL ANALYSIS

Here, we provide more experimental analysis and insights. The default dataset is Kinetics-400 (Kay et al. (2017)). In order to further analyze the influence of the proposed cross-stage self-attention and features, we show the comparison results between the baseline and CSTransformer in figure 5. "Baseline-V2" has the same structure as "CSTransformer-V2", except that it does not adopt cross-stage self-attention and features. Note that in the left figure, we report top-1 accuracy on the validation dataset during training, and we only sample one clip for inference at different epochs. In the right figure, we test the models with 1 × 3 views after training. As we can see, the CSTransformer structure consistently achieves higher performance than the baseline as the training epoch increases. Furthermore, even with different input clip lengths, the CSTransformer structure also performs better than the baseline model.

A.4 MORE VISUALIZATION RESULTS

In this section, we provide more self-attention maps for visualization. The sampled frame clips are all from the Kinetics-400 dataset.

Visualization for comparison. The chosen models are "Baseline-V2" and "CSTransformer-V2", as shown in Figures 6 and 7. The 1st row shows the original frame clips; the 2nd and 3rd rows show the self-attention maps of "Baseline-V2" and "CSTransformer-V2" respectively. Note that brighter areas indicate where more attention is focused. We can clearly observe that the self-attention maps from the CSTransformer structure focus more on important objects and motion areas, whereas the self-attention maps from the baseline may focus on some irrelevant regions.

Self-attention maps of CSTransformer. We show original video clips and their self-attention maps from the proposed CSTransformer-V2 in figures 8, 9, and 10. The 1st and 2nd rows of each figure are the original frame clips and the self-attention maps of "CSTransformer-V2" respectively.
1. What is the main contribution of the paper, and how does it address the problem of modeling rich contextual information in videos?
2. What are the strengths and weaknesses of the proposed cross-stage transformer paradigm, particularly regarding its novelty and technical details?
3. Do you have any concerns or suggestions regarding the experimental results and comparisons with other state-of-the-art methods, such as MViT and VidTr?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a cross-stage transformer paradigm to fuse self-attentions and features from different blocks more efficiently and effectively. The proposed method is evaluated on several standard video benchmark datasets and shows similar or better performance than previous transformer methods.

Review Strengths: This paper is well-motivated, since the problem of finding appropriate transformer architectures for videos is still unsolved. It is worth investigating how to effectively and efficiently model rich contextual information in videos. Fusing multi-stage information is a new idea for video transformers. The paper is well-written and easy to follow. The experimental results demonstrate the effectiveness of each component of the proposed method.

Weaknesses:

Novelty concerns: (a) One of the main concerns is the novelty issue. One of the main contributions is the connection between different layers, which is similar to DenseNet [1]. The paper claims that "Since video learning needs to capture more complex information from spatial and temporal dimensions, simple shortcuts cannot efficiently work in existing video transformers". However, there are no further analyses or ablation studies for this claim. It would be much better if the paper could show or prove that the proposed "cross-stage fusion" works better than simple shortcuts. Reference: [1] Gao Huang, et al. "Densely Connected Convolutional Networks", CVPR, 2017.

Technical detail concerns: (a) Cross-stage transformer structure: Currently the structure is like [(STB)xM-(TTB)xN], where M and N represent the numbers of STBs and TTBs, respectively. Why not adopt a structure like [STB-TTB-STB-TTB-...] or some of the structures shown in TimeSformer? It is more reasonable to model spatial and temporal information alternately, as in TimeSformer. It would be great to provide more analyses or explanation for the choice of the structure. Moreover, it would be great to explain how the choices of transformer blocks were decided in the paper. (b) Figure 2: The caption says, "For simplicity, we only show cross-stage self-attention flow of two consecutive transformer blocks". Does that mean "cross-stage" actually spans more than two consecutive blocks? This part is a little bit confusing.

Experiment concerns: (a) Table 1: There are lots of ablation experiments for the model configuration. However, the selected configurations in the final comparison are a little bit confusing. (b) Figure 4: the difference between bright and dark regions is not very obvious (2nd and 3rd rows). Maybe showing visual attention maps could help. (c) State-of-the-art comparison: It would be great to compare against MViT and VidTr [2] as well, since they are both state of the art for video transformers. The results of MViT are as follows (which are missing in the paper): Kinetics-400: Top-1 / Top-5 / Views / TFLOPs 81.2 / 95.1 / 3×3 / 4.1; Kinetics-600: Top-1 / Top-5 / Views / TFLOPs 84.1 / 96.5 / 1×5 / 1.2. The paper only mentions that MViT is much different from the original ViT structure, but that is not a convincing reason for not comparing against it. Moreover, MViT has competitive performance although it is trained from scratch. Therefore, more analyses are needed. (d) Parameter count should be another metric to compare efficiency. (e) Table 4: The numbers for TimeSformer-L are not correct. They should be corrected as follows: Top-1 / Top-5 / Views / TFLOPs 82.4 / 96.0 / 1x3 / 5.1. Reference: [2] Yanyi Zhang, et al.
"VidTr: Video Transformer Without Convolutions", ICCV, 2021.
ICLR
Title D2KE: From Distance to Kernel and Embedding via Random Features For Structured Inputs

Abstract We present a new methodology that constructs a family of positive definite kernels from any given dissimilarity measure on structured inputs whose elements are either real-valued time series or discrete structures such as strings, histograms, and graphs. Our approach, which we call D2KE (from Distance to Kernel and Embedding), draws from the literature of Random Features. However, instead of deriving random feature maps from a user-defined kernel to approximate kernel machines, we build a kernel from a random feature map that we specify given the distance measure. We further propose the use of a finite number of random objects to produce a random feature embedding of each instance. We provide a theoretical analysis showing that D2KE enjoys better generalizability than universal Nearest-Neighbor estimates. On one hand, D2KE subsumes the widely-used representative-set method as a special case, and relates to the well-known distance substitution kernel in a limiting case. On the other hand, D2KE generalizes existing Random Features methods applicable only to vector input representations to complex structured inputs of variable sizes. We conduct classification experiments over such disparate domains as time series, strings, and histograms (for texts and images), for which our proposed framework compares favorably to existing distance-based learning methods in terms of both testing accuracy and computational time.

1 Introduction

In many problem domains, it is easier to specify a reasonable dissimilarity (or similarity) function between instances than to construct a feature representation. This is particularly the case with structured inputs whose elements are either real-valued time series or discrete structures such as strings, histograms, and graphs, where it is typically less than clear how to construct the representation of entire structured inputs with potentially widely varying sizes, even when given a good feature representation of each individual component. Moreover, even for complex structured inputs, there are many well-developed dissimilarity measures, such as the Dynamic Time Warping measure between time series, Edit Distance between strings, Hausdorff distance between sets, and Wasserstein distance between distributions. However, standard machine learning methods are designed for vector representations, and classically there has been far less work on distance-based methods for either classification or regression on structured inputs. The most common distance-based method is Nearest-Neighbor Estimation (NNE), which predicts the outcome for an instance using an average of its nearest neighbors in the input space, with nearness measured by the given dissimilarity measure. Estimation from nearest neighbors, however, is unreliable, specifically having high variance when the neighbors are far apart, which is typically the case when the intrinsic dimension implied by the distance is large.
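For context on the NNE baseline just described, nearest-neighbor prediction needs only a pairwise dissimilarity matrix, not a feature representation. A minimal scikit-learn sketch with a toy, hand-made distance matrix (the numbers are purely illustrative, not from the paper):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy example: 4 structured objects with a precomputed pairwise
# distance matrix D (D[i, j] = d(x_i, x_j)) and binary labels.
D = np.array([[0.0, 1.0, 4.0, 5.0],
              [1.0, 0.0, 3.5, 4.5],
              [4.0, 3.5, 0.0, 1.2],
              [5.0, 4.5, 1.2, 0.0]])
y = np.array([0, 0, 1, 1])

knn = KNeighborsClassifier(n_neighbors=1, metric="precomputed")
knn.fit(D, y)

# At test time, pass distances from each test object to all training objects.
D_test = np.array([[0.8, 0.9, 3.8, 4.9]])   # one test object
print(knn.predict(D_test))                  # -> [0]
```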
To address this issue, a line of research has focused on developing global distance-based (or similarity-based) machine learning methods (Pękalska & Duin, 2005; Duin & Pękalska, 2012; Balcan et al., 2008a; Cortes et al., 2012), in large part by drawing upon connections to kernel methods (Scholkopf et al., 1999) or by directly learning with similarity functions (Balcan et al., 2008a; Cortes et al., 2012; Balcan et al., 2008b; Loosli et al., 2016); we refer the reader in particular to the survey in (Chen et al., 2009a). Among these, the most direct approach treats the data similarity matrix (or transformed dissimilarity matrix) as a kernel Gram matrix, and then uses standard kernel-based methods such as Support Vector Machines (SVM) or kernel ridge regression with this Gram matrix. A key caveat with this approach, however, is that most similarity (or dissimilarity) measures do not provide a positive-definite (PD) kernel, so that the empirical risk minimization problem is not well-defined, and moreover becomes non-convex (Ong et al., 2004; Lin & Lin, 2003). A line of work has therefore focused on estimating a positive-definite (PD) Gram matrix that merely approximates the similarity matrix. This can be achieved, for instance, by clipping, flipping, or shifting the eigenvalues of the similarity matrix (Pekalska et al., 2001), or by explicitly learning a PD approximation of the similarity matrix (Chen & Ye, 2008; Chen et al., 2009b). Such modifications of the similarity matrix, however, often lead to a loss of information; moreover, the enforced PD property is typically guaranteed to hold only on the training data, resulting in an inconsistency between the testing and training samples (Chen et al., 2009a).¹ Another common approach is to select a subset of training samples as a held-out representative set, and use distances or similarities to structured inputs in the set as the feature function (Graepel et al., 1999; Pekalska et al., 2001). As we will show, with proper scaling, this approach can be interpreted as a special instance of our framework. Furthermore, our framework provides a more general and richer family of kernels, many of which significantly outperform the representative-set method in a variety of application domains.

To address the aforementioned issues, in this paper we propose a novel general framework that constructs a family of PD kernels from a dissimilarity measure on structured inputs. Our approach, which we call D2KE (from Distance to Kernel and Embedding), draws from the literature of Random Features (Rahimi & Recht, 2008), but instead of deriving feature maps from an existing kernel for approximating kernel machines, we build novel kernels from a random feature map specifically designed for a given distance measure. The kernel satisfies the property that functions in the corresponding Reproducing Kernel Hilbert Space (RKHS) are Lipschitz-continuous w.r.t. the given distance measure. We also provide a tractable estimator for a function from this RKHS which enjoys much better generalization properties than nearest-neighbor estimation. Our framework produces a feature embedding and consequently a vector representation of each instance that can be employed by any classification and regression models.

¹A generalization error bound was provided for the similarity-as-kernel approach in (Chen et al., 2009a), but only for a positive-definite similarity function.
In classification experiments in such disparate domains as strings, time series, and histograms (for texts and images), our proposed framework compares favorably to existing distance-based learning methods in terms of both testing accuracy and computational time, especially when the number of data samples is large and/or the size of the structured inputs is large. We highlight our main contributions as follows:

• From the perspective of distance kernel learning, we propose for the first time a methodology that constructs a family of PD kernels via Random Features from a given distance measure for structured inputs, and provide theoretical and empirical justifications for this framework.

• From the perspective of Random Features (RF) methods, we generalize existing Random Features methods, previously applicable only to vector input representations, to complex structured inputs of variable sizes. To the best of our knowledge, this is the first time that a generic RF method has been used to accelerate kernel machines on structured inputs across a broad range of domains such as time series, strings, and histograms.

2 Related Work

Distance-Based Kernel Learning. Existing approaches either require strict conditions on the distance function (e.g. that the distance be isometric to the square of the Euclidean distance) (Haasdonk & Bahlmann, 2004; Schölkopf, 2001), or construct empirical PD Gram matrices that do not necessarily generalize to the test samples (Pekalska et al., 2001; Pękalska & Duin, 2005; Pekalska & Duin, 2006; 2008; Duin & Pękalska, 2012). Haasdonk & Bahlmann (2004) and Schölkopf (2001) provide conditions under which one can obtain a PD kernel through simple transformations of the distance measure, but these conditions are not satisfied by many commonly used dissimilarity measures such as Dynamic Time Warping, Hausdorff distance, and Earth Mover's distance (Haasdonk & Bahlmann, 2004). Equivalently, one could also find a Euclidean embedding (also known as a dissimilarity representation) approximating the dissimilarity matrix, as in Multidimensional Scaling (Pekalska et al., 2001; Pękalska & Duin, 2005; Pekalska & Duin, 2006; 2008; Duin & Pękalska, 2012).² Differently, Loosli et al. (2016) presented a theoretical foundation for an SVM solver in Krein spaces and directly evaluated a solution that uses the original (indefinite) similarity measure. There are also some approaches dedicated to building a PD kernel on specific structured inputs such as text and time series (Collins & Duffy, 2002; Cuturi, 2011), which modify a distance function over sequences into a kernel by replacing the minimization over possible alignments with a summation over all possible alignments. This type of kernel, however, suffers from a diagonal-dominance problem, where the diagonal entries of the kernel Gram matrix are orders of magnitude larger than the off-diagonal entries, due to the summation over a huge number of alignments of a sample with itself.

²A proof of the equivalence between the PD property of a similarity matrix and the Euclidean property of a dissimilarity matrix can be found in (Borg & Groenen, 1997).

Random Features Methods. Interest in approximating non-linear kernel machines using randomized feature maps has surged in recent years due to a significant reduction in training and testing times for kernel based learning algorithms (Dai et al., 2014).
There are numerous explicit nonlinear random feature maps that have been constructed for various types of kernels, including Gaussian and Laplacian kernels (Rahimi & Recht, 2008; Wu et al., 2016), intersection kernels (Maji & Berg, 2009), additive kernels (Vedaldi & Zisserman, 2012), dot product kernels (Kar & Karnick, 2012; Pennington et al., 2015), and semigroup kernels (Mukuta et al., 2018). Among them, the Random Fourier Features (RFF) method, which approximates a Gaussian kernel function by multiplying the input with a Gaussian random matrix, and its fruitful variants have been extensively studied both theoretically and empirically (Sriperumbudur & Szabó, 2015; Felix et al., 2016; Rudi & Rosasco, 2017; Bach, 2017; Choromanski et al., 2018). To accelerate RFF on input data matrices of high dimension, a number of methods have been proposed that leverage structured matrices to allow faster matrix computation and less memory consumption (Le et al., 2013; Hamid et al., 2014; Choromanski & Sindhwani, 2016). However, all the aforementioned RF methods only consider inputs with vector representations, and compute the RF by a linear transformation that is either a matrix multiplication or an inner product under the Euclidean distance metric. In contrast, D2KE takes structured inputs of potentially different sizes and computes the RF with a structured distance metric (typically via dynamic programming or optimal transportation). Another important difference between D2KE and existing RF methods lies in the fact that existing RF work assumes a user-defined kernel and then derives a random feature map, while D2KE constructs a new PD kernel through a random feature map and makes it computationally feasible via RF. Table 1 lists the differences between D2KE and existing RF methods. A very recent piece of work (Wu et al., 2018) has developed a kernel and a specific algorithm for computing embeddings of single-variable real-valued time series. However, despite promising results, this method cannot be applied to discrete structured inputs such as strings, histograms, and graphs. In contrast, we provide a unified framework for various structured inputs beyond the limits of (Wu et al., 2018), together with a general theoretical analysis w.r.t. KNN and other generic distance-based kernel methods.

3 Problem Setup

We consider the estimation of a target function $f: \mathcal{X} \to \mathbb{R}$ from a collection of samples $\{(x_i, y_i)\}_{i=1}^n$, where $x_i \in \mathcal{X}$ is a structured input object and $y_i \in \mathcal{Y}$ is the output observation associated with the target function $f(x_i)$. For instance, in a regression problem, $y_i \sim f(x_i) + \omega_i \in \mathbb{R}$ for some random noise $\omega_i$, and in binary classification we have $y_i \in \{0, 1\}$ with $P(y_i = 1 \mid x_i) = f(x_i)$. We are given a dissimilarity measure $d: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ between input objects instead of a feature representation of $x$. Note that the sizes of the structured inputs $x_i$ may vary widely, e.g. strings with variable lengths or graphs with different sizes. For some of the analyses, we require the dissimilarity measure to be a metric, as follows.

Assumption 1 (Distance Metric). $d: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is a distance metric, that is, it satisfies (i) $d(x_1, x_2) \ge 0$, (ii) $d(x_1, x_2) = 0 \iff x_1 = x_2$, (iii) $d(x_1, x_2) = d(x_2, x_1)$, and (iv) $d(x_1, x_2) \le d(x_1, x_3) + d(x_3, x_2)$.
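For concreteness, the edit distance between strings, one of the dissimilarities used later in the paper's experiments, satisfies all four axioms of Assumption 1. A minimal dynamic-programming sketch in Python:

```python
def edit_distance(s, t):
    """Levenshtein distance between strings s and t via dynamic programming.

    Satisfies non-negativity, identity, symmetry, and the triangle
    inequality, so it is a valid metric in the sense of Assumption 1.
    """
    m, n = len(s), len(t)
    # dp[i][j] = distance between s[:i] and t[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

assert edit_distance("kitten", "sitting") == 3
```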
3.1 Function Continuity and Space Covering

An ideal feature representation for the learning task is (i) compact and (ii) such that the target function $f(x)$ is a simple (e.g. linear) function of the resulting representation. Similarly, an ideal dissimilarity measure $d(x_1, x_2)$ for learning a target function $f(x)$ should satisfy certain properties. On one hand, a small dissimilarity $d(x_1, x_2)$ between two objects should imply a small difference in the function values $|f(x_1) - f(x_2)|$. On the other hand, we want a small expected distance among samples, so that the data lies in a compact space of small intrinsic dimension. We next build up some definitions to formalize these properties.

Assumption 2 (Lipschitz Continuity). For any $x_1, x_2 \in \mathcal{X}$, there exists some constant $L > 0$ such that
$$|f(x_1) - f(x_2)| \le L\, d(x_1, x_2). \quad (1)$$

We would prefer the target function to have a small Lipschitz-continuity constant $L$ with respect to the dissimilarity measure $d(\cdot, \cdot)$. Such Lipschitz-continuity alone, however, might not suffice. For example, one can simply set $d(x_1, x_2) = \infty$ for any $x_1 \neq x_2$ to satisfy equation (1). We thus need the following quantity that measures the size of the space implied by a given dissimilarity measure.

Definition 1 (Covering Number). Assume $d$ is a metric. A $\delta$-cover of $\mathcal{X}$ w.r.t. $d(\cdot, \cdot)$ is a set $E$ s.t. $\forall x \in \mathcal{X}, \exists x_i \in E,\ d(x, x_i) \le \delta$. Then the covering number $N(\delta; \mathcal{X}, d)$ is the size of the smallest $\delta$-cover for $\mathcal{X}$ with respect to $d$.

Assuming the input domain $\mathcal{X}$ is compact, the covering number $N(\delta; \mathcal{X}, d)$ measures its size w.r.t. the distance measure $d$. We show how the two quantities defined above affect the estimation error of a nearest-neighbor estimator.

3.2 Effective Dimension and Nearest Neighbor Estimation

We extend the standard analysis of the estimation error of k-nearest-neighbor from finite-dimensional vector spaces to any structured input space $\mathcal{X}$ with an associated distance measure $d$ and a finite covering number $N(\delta; \mathcal{X}, d)$, by defining the effective dimension as follows.

Assumption 3 (Effective Dimension). Let the effective dimension $p_{\mathcal{X},d} > 0$ be the minimum $p$ satisfying
$$\exists c > 0,\ \forall \delta: 0 < \delta < 1,\quad N(\delta; \mathcal{X}, d) \le c \left(\frac{1}{\delta}\right)^p.$$

Here we provide an example of the effective dimension in the case of the space of multisets.

Multiset with Hausdorff Distance. A multiset is a set that allows duplicate elements. Consider two multisets $x_1 = \{u_i\}_{i=1}^M$, $x_2 = \{v_j\}_{j=1}^N$. Let $\Delta(u_i, v_j)$ be a ground distance that measures the distance between two elements $u_i, v_j \in \mathcal{V}$ of a set. The (modified) Hausdorff distance (Dubuisson & Jain, 1994) can be defined as
$$d(x_1, x_2) := \max\left\{\frac{1}{M}\sum_{i=1}^{M}\min_{j\in[N]}\Delta(u_i, v_j),\ \frac{1}{N}\sum_{j=1}^{N}\min_{i\in[M]}\Delta(v_j, u_i)\right\} \quad (2)$$

Let $N(\delta; \mathcal{V}, \Delta)$ be the covering number of $\mathcal{V}$ under the ground distance $\Delta$, and let $\mathcal{X}$ denote the set of all multisets of size bounded by $L$. By constructing a covering of $\mathcal{X}$ containing any set of size less than or equal to $L$ with its elements taken from the covering of $\mathcal{V}$, we have $N(\delta; \mathcal{X}, d) \le N(\delta; \mathcal{V}, \Delta)^L$. Therefore, $p_{\mathcal{X},d} \le L \log N(\delta; \mathcal{V}, \Delta)$. For example, if $\mathcal{V} := \{v \in \mathbb{R}^p \mid \|v\|_2 \le 1\}$ and $\Delta$ is the Euclidean distance, we have $N(\delta; \mathcal{V}, \Delta) = (1 + \frac{2}{\delta})^p$ and $p_{\mathcal{X},d} \le Lp$.

Equipped with the concept of effective dimension, we can obtain the following bound on the estimation error of the k-nearest-neighbor estimate of $f(x)$.

Theorem 1. Let $\mathrm{Var}(y \mid f(x)) \le \sigma^2$, and let $\hat{f}_n$ be the k-nearest-neighbor estimate of the target function $f$ constructed from a training set of size $n$. Denote $p := p_{\mathcal{X},d}$. We have
$$\mathbb{E}_x\left[\left(\hat{f}_n(x) - f(x)\right)^2\right] \le \frac{\sigma^2}{k} + cL^2\left(\frac{k}{n}\right)^{2/p}$$
for some constant $c > 0$. For $\sigma > 0$, minimizing the RHS w.r.t.
the parameter $k$, we have
$$\mathbb{E}_x\left[\left(\hat{f}_n(x) - f(x)\right)^2\right] \le c_2\, \sigma^{\frac{4}{p+2}} L^{\frac{2p}{2+p}} \left(\frac{1}{n}\right)^{\frac{2}{2+p}} \quad (3)$$
for some constant $c_2 > 0$.

Proof. The proof is almost the same as a standard analysis of k-NN's estimation error in, for example, (Györfi et al., 2006), with the space partition number replaced by the covering number, and the dimension replaced by the effective dimension in Assumption 3.

When $p_{\mathcal{X},d}$ is reasonably large, the estimation error of k-NN decreases quite slowly with $n$. Thus, for the estimation error to be bounded by $\epsilon$, the number of samples must scale exponentially in $p_{\mathcal{X},d}$. In the following sections, we develop an estimator $\hat{f}$ based on an RKHS derived from the distance measure, with a considerably better sample complexity for problems with higher effective dimension.

4 From Distance to Kernel for Structured Inputs

We aim to address the long-standing problem of how to convert a distance measure into a positive-definite kernel. Here we introduce a simple but effective approach, D2KE, that constructs a family of positive-definite kernels from a given distance measure. Given a structured input domain $\mathcal{X}$ and a distance measure $d(\cdot, \cdot)$, we construct a family of kernels as
$$k(x, y) := \int p(\omega)\phi_\omega(x)\phi_\omega(y)\,d\omega, \quad \text{where } \phi_\omega(x) := \exp(-\gamma d(x, \omega)), \quad (4)$$
where $\omega \in \Omega$ is a random structured object, whose elements could be real-valued time series, strings, or histograms, $p(\omega)$ is a distribution over $\Omega$, and $\phi_\omega(x)$ is a feature map derived from the distance of $x$ to all random objects $\omega \in \Omega$. The kernel is parameterized by both $p(\omega)$ and $\gamma$.

Relationship to Distance Substitution Kernel. An insightful interpretation of the kernel in equation (4) can be obtained by expressing it as
$$\exp\left(-\gamma\, \mathrm{softmin}_{p(\omega)}\{d(x, \omega) + d(\omega, y)\}\right) \quad (5)$$
where the soft minimum function, parameterized by $p(\omega)$ and $\gamma$, is defined as
$$\mathrm{softmin}_{p(\omega)} f(\omega) := -\frac{1}{\gamma}\log\int p(\omega)\, e^{-\gamma f(\omega)}\,d\omega. \quad (6)$$
Therefore, the kernel $k(x, y)$ can be interpreted as a soft version of the distance substitution kernel (Haasdonk & Bahlmann, 2004), where instead of substituting $d(x, y)$ into the exponent, one substitutes a soft version of the form
$$\mathrm{softmin}_{p(\omega)}\{d(x, \omega) + d(\omega, y)\}. \quad (7)$$
Note that when $\gamma \to \infty$, the value of equation (7) is determined by $\min_{\omega \in \Omega} d(x, \omega) + d(\omega, y)$, which equals $d(x, y)$ if $\mathcal{X} \subseteq \Omega$, since it cannot be smaller than $d(x, y)$ by the triangle inequality. In other words, when $\mathcal{X} \subseteq \Omega$, $k(x, y) \to \exp(-\gamma d(x, y))$ as $\gamma \to \infty$. On the other hand, unlike the distance-substitution kernel, our kernel in equation (5) is always PD by construction.

Algorithm 1 Random Feature Approximation of a function in the RKHS with the kernel in equation (4)
1: Draw $R$ samples from $p(\omega)$ to get $\{\omega_j\}_{j=1}^R$.
2: Set the $R$-dimensional feature embedding as $\hat{\phi}_j(x) = \frac{1}{\sqrt{R}}\exp(-\gamma d(x, \omega_j)),\ \forall j \in [R]$.
3: Solve the following problem for some $\mu > 0$: $\hat{w} := \arg\min_{w \in \mathbb{R}^R} \frac{1}{n}\sum_{i=1}^n \ell(w^T\hat{\phi}(x_i), y_i) + \frac{\mu}{2}\|w\|^2$.
4: Output the estimated function $\tilde{f}_R(x) := \hat{w}^T\hat{\phi}(x)$.
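Algorithm 1 reduces to a few lines of code on top of any distance function and any regularized linear learner. Below is a minimal Python sketch on toy string data, reusing the `edit_distance` helper sketched earlier; the random-string sampler for p(ω) and the choice of scikit-learn's RidgeClassifier as the linear solver are our own illustrative choices, not the paper's exact experimental setup.

```python
import random
import numpy as np
from sklearn.linear_model import RidgeClassifier

def sample_random_string(alphabet="abcd", max_len=10):
    """Draw a random object omega ~ p(omega): a random string whose
    length is uniform in [2, max_len] and whose characters are uniform
    over the alphabet (an illustrative choice of p(omega))."""
    length = random.randint(2, max_len)
    return "".join(random.choice(alphabet) for _ in range(length))

def d2ke_embed(xs, omegas, gamma=0.5):
    """Step 2 of Algorithm 1: phi_j(x) = exp(-gamma * d(x, omega_j)) / sqrt(R)."""
    R = len(omegas)
    return np.array([[np.exp(-gamma * edit_distance(x, w)) for w in omegas]
                     for x in xs]) / np.sqrt(R)

# Steps 1-4 on toy string data.
random.seed(0)
omegas = [sample_random_string() for _ in range(128)]        # step 1
X_train, y_train = ["aaab", "aaba", "cdcd", "dcdc"], [0, 0, 1, 1]
Phi = d2ke_embed(X_train, omegas)                            # step 2
clf = RidgeClassifier(alpha=1.0).fit(Phi, y_train)           # step 3
preds = clf.predict(d2ke_embed(["abaa", "cddc"], omegas))    # step 4
```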
Relationship to Representative-Set Method. A naive choice of p(ω) relates our approach to the representative-set method (RSM): setting Ω = X with p(ω) = p(x). This yields a kernel of the form of Equation (4) that depends on the data distribution. One can then obtain a random-feature approximation to this kernel by holding out a part of the training data {x̂_j}_{j=1}^{R} as samples from p(ω) and creating an R-dimensional feature embedding of the form
$$\hat{\phi}_j(x) := \frac{1}{\sqrt{R}} \exp\left(-\gamma\, d(x, \hat{x}_j)\right), \quad j \in [R], \qquad (8)$$
as in Algorithm 1. This is equivalent to a 1/√R-scaled version of the embedding function in the representative-set method (or similarity-as-features method) (Graepel et al., 1999; Pekalska et al., 2001; Pekalska & Duin, 2005; 2006; 2008; Chen et al., 2009a; Duin & Pękalska, 2012), where one computes each sample's similarity to a set of representatives as its feature representation. However, by interpreting Equation (8) as a random-feature approximation to the kernel in Equation (4), we obtain a much nicer generalization error bound, even in the case R → ∞. This is in contrast to the analysis of RSM in (Chen et al., 2009a), where one has to keep the size of the representative set small (of the order O(n)) in order to have reasonable generalization performance.

Effect of p(ω). The choice of p(ω) plays an important role in our kernel. Surprisingly, we found that in a variety of domains many "close to uniform" choices of p(ω) give better performance than, for instance, the data distribution p(ω) = p(x) (as in the representative-set method). Here are some examples from our experiments (sketched in code after this paragraph): i) in the time-series domain, with dissimilarity computed via Dynamic Time Warping (DTW), a distribution p(ω) corresponding to random time series of length drawn uniformly from [2, 10], with Gaussian-distributed elements, yields much better performance than the Representative-Set Method (RSM); ii) in string classification with edit distance, a distribution p(ω) corresponding to random strings with characters drawn uniformly from the alphabet Σ yields much better performance than RSM; iii) when classifying sets of vectors with the Hausdorff distance in Equation (2), a distribution p(ω) corresponding to random sets of size drawn uniformly from [3, 15], with elements drawn uniformly from the unit sphere, yields significantly better performance than RSM. We conjecture two potential reasons for the better performance of these choices of p(ω), though a formal theoretical treatment is an interesting subject we defer to future work. First, since p(ω) is synthetic, one can generate an unlimited number of random features, which results in a much better approximation to the exact kernel in Equation (4); RSM, in contrast, requires held-out samples from the data, which can be quite limited for a small data set. Second, in some cases, even with a small number of random features (similar to the number used by RSM), the selected distribution still leads to significantly better results; for those cases, we conjecture that the selected p(ω) generates objects that capture semantic information more relevant to the estimation of f(x), when coupled with our feature map under the dissimilarity measure d(x, ω).
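For concreteness, here are minimal sketches of the three samplers described above; the concrete lengths, dimensions, and alphabet are illustrative defaults, not prescriptions.

import numpy as np

rng = np.random.default_rng(0)

def sample_random_series(dim=3):
    # i) Random time series of length uniform in [2, 10] with
    # Gaussian-distributed elements, to be compared to data via DTW.
    return rng.normal(size=(rng.integers(2, 11), dim))

def sample_random_string(alphabet="ACGT"):
    # ii) Random string with characters drawn uniformly from the alphabet,
    # to be compared to data via edit distance.
    return "".join(rng.choice(list(alphabet), size=rng.integers(2, 51)))

def sample_random_set(dim=128):
    # iii) Random set of size uniform in [3, 15] with elements uniform on
    # the unit sphere, for the Hausdorff distance in Equation (2).
    v = rng.normal(size=(rng.integers(3, 16), dim))
    return v / np.linalg.norm(v, axis=1, keepdims=True)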
5 Analysis

In this section, we analyze the proposed framework from the perspective of error decomposition. Let H be the RKHS corresponding to the kernel in Equation (4). Let
$$f_C := \operatorname*{argmin}_{f \in H}\ \mathbb{E}[\ell(f(x), y)] \quad \text{s.t.}\ \|f\|_H \le C \qquad (9)$$
be the population risk minimizer subject to the RKHS norm constraint ‖f‖_H ≤ C, and let
$$\hat{f}_n := \operatorname*{argmin}_{f \in H}\ \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i), y_i) \quad \text{s.t.}\ \|f\|_H \le C \qquad (10)$$
be the corresponding empirical risk minimizer. In addition, let f̃_R be the function estimated by our random feature approximation (Algorithm 1). Denoting the population and empirical risks by L(f) and L̂(f), respectively, we have the risk decomposition
$$L(\tilde{f}_R) - L(f) = \underbrace{\big(L(\tilde{f}_R) - L(\hat{f}_n)\big)}_{\text{random feature}} + \underbrace{\big(L(\hat{f}_n) - L(f_C)\big)}_{\text{estimation}} + \underbrace{\big(L(f_C) - L(f)\big)}_{\text{approximation}}.$$
In the following, we discuss the three terms from the rightmost to the leftmost.

Function Approximation Error. The RKHS implied by the kernel in Equation (4) is
$$H := \left\{ f \;\middle|\; f(x) = \sum_{j=1}^{m} \alpha_j\, k(x_j, x),\ x_j \in X\ \forall j \in [m],\ m \in \mathbb{N} \right\},$$
which is a smaller function space than the space of Lipschitz-continuous functions w.r.t. the distance d(x1, x2). As we now show, any function f ∈ H is Lipschitz-continuous w.r.t. the distance d(·, ·).

Proposition 1. Let H be the RKHS corresponding to the kernel in Equation (4) derived from some metric d(·, ·). For any f ∈ H,
$$|f(x_1) - f(x_2)| \le L_f\, d(x_1, x_2), \quad \text{where}\ L_f = \gamma C.$$
We refer readers to the detailed proof in Appendix A.1. While any f in the RKHS is Lipschitz-continuous w.r.t. the given distance d(·, ·), we are interested in imposing additional smoothness via the RKHS norm constraint ‖f‖_H ≤ C and via the kernel parameter γ. The hope is that the best function f_C within this class approximates the true function f well in terms of the approximation error L(f_C) − L(f). The stronger assumption made by the RKHS gives us a qualitatively better estimation error, as discussed below.
Estimation Error. Define
$$D_\lambda := \sum_{j=1}^{\infty} \frac{1}{1 + \lambda/\mu_j},$$
where {µ_j}_{j=1}^{∞} are the eigenvalues of the kernel in Equation (5) and λ is a tuning parameter. For any λ ≥ D_λ/n, with probability at least 1 − δ,
$$L(\hat{f}_n) - L(f_C) \le c\, \big(\log\tfrac{1}{\delta}\big)^2\, C^2\, \lambda$$
for some universal constant c (Zhang, 2005). Here we would like to set λ as small as possible (as a function of n). Using the kernel-independent bound D_λ ≤ 1/λ, we can take λ = 1/√n, which gives the estimation error bound
$$L(\hat{f}_n) - L(f_C) \le c\, \big(\log\tfrac{1}{\delta}\big)^2\, C^2\, \sqrt{\tfrac{1}{n}}. \qquad (11)$$
This estimation error is quite standard for an RKHS estimator. It has a much better dependency on n (namely n^{−1/2}) than that of the k-nearest-neighbor method (namely n^{−2/(2+p_{X,d})}), especially for higher effective dimension. A more careful analysis might lead to a tighter bound on D_λ and a better rate w.r.t. n; however, the analysis of D_λ for our kernel in Equation (4) is much more difficult than in typical cases, as we do not have an analytic form of the kernel.

Random Feature Approximation. Denote by L̂(·) the empirical risk function. The error from the RF approximation, L(f̃_R) − L(f̂_n), can be further decomposed as
$$\big(L(\tilde{f}_R) - \hat{L}(\tilde{f}_R)\big) + \big(\hat{L}(\tilde{f}_R) - \hat{L}(\hat{f}_n)\big) + \big(\hat{L}(\hat{f}_n) - L(\hat{f}_n)\big),$$
where the first and third terms can be bounded via the same estimation error bound as in Equation (11), since both f̃_R and f̂_n have RKHS norm bounded by C. Therefore, in the following we focus only on the second term, the difference in empirical risk. We start by analyzing the approximation error of the kernel, ∆_R(x1, x2) = k̃_R(x1, x2) − k(x1, x2), where
$$\tilde{k}_R(x_1, x_2) := \frac{1}{R} \sum_{j=1}^{R} \phi_{\omega_j}(x_1)\, \phi_{\omega_j}(x_2). \qquad (12)$$

Proposition 2. Let ∆_R(x1, x2) = k̃_R(x1, x2) − k(x1, x2). We have uniform convergence of the form
$$P\left\{ \max_{x_1, x_2 \in X} |\Delta_R(x_1, x_2)| > 2t \right\} \le 2 \left(\frac{12\gamma}{t}\right)^{2 p_{X,d}} e^{-R t^2/2},$$
where p_{X,d} is the effective dimension of X under the metric d(·, ·). In other words, to guarantee |∆_R(x1, x2)| ≤ ε with probability at least 1 − δ, it suffices to have
$$R = \Omega\left( \frac{p_{X,d}}{\epsilon^2} \log\Big(\frac{\gamma}{\epsilon}\Big) + \frac{1}{\epsilon^2} \log\Big(\frac{1}{\delta}\Big) \right).$$
We refer readers to the detailed proof in Appendix A.2. Proposition 2 gives an approximation guarantee in terms of kernel evaluations. To get a bound on the empirical risk L̂(f̃_R) − L̂(f̂_n), consider the optimal solutions of the empirical risk minimization: by the representer theorem, f̂_n(x) = (1/n) Σ_i α_i k(x_i, x) and f̃_R(x) = (1/n) Σ_i α̃_i k̃_R(x_i, x). We therefore have the following corollary.

Corollary 1. To guarantee L̂(f̃_R) − L̂(f̂_n) ≤ ε with probability at least 1 − δ, it suffices to have
$$R = \Omega\left( \frac{p_{X,d}\, M^2 A^2}{\epsilon^2} \log\Big(\frac{\gamma}{\epsilon}\Big) + \frac{M^2 A^2}{\epsilon^2} \log\Big(\frac{1}{\delta}\Big) \right),$$
where M is the Lipschitz constant of the loss function ℓ(·, y) and A is a bound on ‖α‖₁/n. We refer readers to the detailed proof in Appendix A.3. For most loss functions, A and M are typically small constants. Therefore, Corollary 1 states that a number of random features proportional to the effective dimension, O(p_{X,d}/ε²), suffices to achieve an ε approximation error.

Combining the three error terms, we can show that the proposed framework achieves ε-suboptimal performance.

Claim 1. Let f̃_R be the function estimated by our random-feature-based ERM estimator in Algorithm 1, and let f* denote the desired target function. Suppose further that, for some absolute constants c₁, c₂ > 0 (up to logarithmic factors in 1/ε and 1/δ):
1. The target function f* lies close to the population risk minimizer f_C in the RKHS spanned by the D2KE kernel: L(f_C) − L(f*) ≤ ε/2.
2. The number of training samples satisfies n ≥ c₁ C⁴/ε².
3. The number of random features satisfies R ≥ c₂ p_{X,d}/ε².
Then L(f̃_R) − L(f*) ≤ ε with probability at least 1 − δ.
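To make the contrast with Theorem 1 tangible, here is a rough back-of-the-envelope comparison, treating all constants, variances, and logarithmic factors as 1; the numbers are purely illustrative.
\[
\text{$k$-NN (Eq. 3):}\ \ n \gtrsim \left(\frac{1}{\epsilon}\right)^{\frac{2+p_{X,d}}{2}}
\qquad\text{vs.}\qquad
\text{D2KE (Claim 1):}\ \ n \gtrsim \frac{C^4}{\epsilon^2},\quad R \gtrsim \frac{p_{X,d}}{\epsilon^2}.
\]
For instance, with effective dimension p_{X,d} = 20 and target error ε = 0.1, k-NN requires on the order of 10¹¹ samples, whereas D2KE needs on the order of 10² · C⁴ samples and about 2 × 10³ random features.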
6 Experiments

We evaluate the proposed method in four different domains involving time series, strings, texts, and images. We first discuss the dissimilarity measures and data characteristics for each set of experiments, then introduce the comparison among different distance-based methods and report the corresponding results.

Distance Measures. We have chosen four well-known dissimilarity measures: 1) Dynamic Time Warping (DTW) for time series (Berndt & Clifford, 1994); 2) Edit Distance (Levenshtein distance) for strings (Navarro, 2001); 3) Earth Mover's Distance (Rubner et al., 2000) for measuring the semantic distance between two Bags of Words (using pretrained word vectors), representing documents; and 4) the (modified) Hausdorff distance (Huttenlocher et al., 1993; Dubuisson & Jain, 1994) for measuring the semantic closeness of two Bags of Visual Words (using SIFT vectors), representing images. Note that the Bags of (Visual) Words in 3) and 4) can also be regarded as histograms. Since most distance measures are computationally demanding, with quadratic complexity, we adapted or implemented C-MEX programs for them; other code was written in Matlab.

Datasets. For each domain, we selected 4 datasets for our experiments. For time-series data, all datasets are multivariate time series whose lengths vary from 2 to 205 observations; three are from the UCI Machine Learning repository (Frank & Asuncion, 2010), and the other is generated from the IQ (in-phase and quadrature components) samples of a wireless line-of-sight communication system from GMU. For string data, the alphabet size is between 4 and 8 and the length of each string ranges from 34 to 198; two datasets are from the UCI Machine Learning repository and the other two from the LibSVM Data Collection (Chang & Lin, 2011). For text data, the datasets were chosen to partially overlap with those in (Kusner et al., 2015); the average document length varies from 9.9 to 117 words. For image data, all datasets were derived from Kaggle; we computed a set of SIFT descriptors to represent each image, and the number of SIFT feature vectors per image varies from 1 to 914. We divided each dataset into 70/30 train and test subsets (if there was no predefined train/test split). Properties of these datasets are summarized in Table 6 in Appendix B.

Baselines. We compare D2KE against 5 state-of-the-art baselines: 1) KNN, a simple yet universal method for applying any distance measure to classification tasks; 2) DSK_RBF (Haasdonk & Bahlmann, 2004), distance substitution kernels, a general framework for kernel construction that substitutes a problem-specific distance measure into ordinary kernel functions (we use a Gaussian RBF kernel); 3) DSK_ND (Haasdonk & Bahlmann, 2004), another class of distance substitution kernels, based on the negative distance; 4) KSVM (Loosli et al., 2016), which learns directly from the (indefinite) similarity matrix in the original Krein space; and 5) RSM (Pekalska et al., 2001), which builds an embedding by computing distances to randomly selected representative samples. Among these baselines, KNN, DSK_RBF, DSK_ND, and KSVM have quadratic complexity O(N²L²) in both the number of data samples and the length of the sequences, while RSM has computational complexity O(NRL²), linear in the number of data samples but still quadratic in the length of the sequence. These compare to our method, D2KE, which has complexity O(NRL), linear in both the number of data samples and the length of the sequence.
For each method, we search for the best parameters on the training set by performing 10-fold cross validation. For our new method, D2KE, since we generate random samples from the distribution, we can use as many as needed to achieve performance close to that of the exact kernel; we report the best number in the range R = [4, 4096] (typically, the larger R is, the better the accuracy). We employ a linear SVM implemented in LIBLINEAR (Fan et al., 2008) for all embedding-based methods (RSM and D2KE) and use LIBSVM (Chang & Lin, 2011) for the precomputed dissimilarity kernels (DSK_RBF, DSK_ND, and KSVM). More details of the experimental setup are provided in Appendix B.

Results. As shown in Tables 2, 3, 4, and 5, D2KE consistently outperforms or matches the baseline methods in terms of classification accuracy while requiring far less computation time. Several observations are worth making here. First, D2KE performs much better than KNN, supporting our claim that D2KE can be a strong alternative to KNN across applications. Second, compared to the two distance substitution kernels, DSK_RBF and DSK_ND, and to the KSVM method operating directly on the indefinite similarity matrix, our method achieves much better performance, suggesting that a representation induced from a truly PD kernel makes significantly better use of the data than indefinite kernels. Among all methods, RSM is closest to ours in terms of the practical construction of the feature matrix; however, the random objects (time series, strings, or sets) sampled by D2KE perform significantly better, as discussed in Section 4. More detailed discussions of the experimental results for each domain are given in Appendix C.

7 Conclusion and Future Work

In this work, we have proposed a general framework for deriving a positive-definite kernel and a feature embedding function from a given dissimilarity measure between input objects. The framework is especially useful for structured input domains such as sequences, time series, and sets, where many well-established dissimilarity measures have been developed. Our framework subsumes at least two existing approaches as special or limiting cases, and opens up what we believe will be a useful new direction for creating embeddings of structured objects based on distances to random objects. A promising direction for extension is to develop such distance-based embeddings within a deep architecture, to support the use of structured inputs in an end-to-end learning system.

A Proofs of Propositions 1 and 2

A.1 Proof of Proposition 1

Proof. Note that the function g(t) = exp(−γt) is Lipschitz-continuous with Lipschitz constant γ. Therefore,
$$|f(x_1) - f(x_2)| = |\langle f,\ \phi(x_1) - \phi(x_2)\rangle| \le \|f\|_H\, \|\phi(x_1) - \phi(x_2)\|_H = \|f\|_H \sqrt{\int_\Omega p(\omega) \big(\phi_\omega(x_1) - \phi_\omega(x_2)\big)^2\, d\omega}$$
$$\le \|f\|_H \sqrt{\int_\Omega p(\omega)\, \gamma^2\, |d(x_1, \omega) - d(x_2, \omega)|^2\, d\omega} \le \gamma \|f\|_H \sqrt{\int_\Omega p(\omega)\, d(x_1, x_2)^2\, d\omega} \le \gamma \|f\|_H\, d(x_1, x_2) \le \gamma C\, d(x_1, x_2).$$

A.2 Proof of Proposition 2

Proof. Our goal is to bound the magnitude of ∆_R(x1, x2) = k̃_R(x1, x2) − k(x1, x2). Since E[∆_R(x1, x2)] = 0 and |∆_R(x1, x2)| ≤ 1, Hoeffding's inequality gives
$$P\{|\Delta_R(x_1, x_2)| \ge t\} \le 2 \exp(-R t^2/2)$$
for a given input pair (x1, x2). To get a uniform bound that holds for all (x1, x2) ∈ X × X, we find an ε-covering E of X w.r.t. d(·, ·) of size N(ε; X, d). Applying a union bound over the ε-covering E for x1 and x2, we have
$$P\left\{ \max_{x_1' \in E,\, x_2' \in E} |\Delta_R(x_1', x_2')| > t \right\} \le 2\, |E|^2 \exp(-R t^2/2). \qquad (13)$$
Then, by the definition of E, we have |d(x1, ω) − d(x1′, ω)| ≤ d(x1, x1′) ≤ ε. Together with the fact that exp(−γt) is Lipschitz-continuous with parameter γ for t ≥ 0, this gives |φ_ω(x1) − φ_ω(x1′)| ≤ γε, and thus
$$|\tilde{k}_R(x_1, x_2) - \tilde{k}_R(x_1', x_2')| \le 3\gamma\epsilon, \qquad |k(x_1, x_2) - k(x_1', x_2')| \le 3\gamma\epsilon$$
for γε chosen to be ≤ 1. This yields
$$|\Delta_R(x_1, x_2) - \Delta_R(x_1', x_2')| \le 6\gamma\epsilon. \qquad (14)$$
Combining Equation (13) and Equation (14), we have
$$P\left\{ \max_{x_1, x_2 \in X} |\Delta_R(x_1, x_2)| > t + 6\gamma\epsilon \right\} \le 2 \left(\frac{2}{\epsilon}\right)^{2 p_{X,d}} \exp(-R t^2/2). \qquad (15)$$
Choosing ε = t/(6γ) yields the result.

A.3 Proof of Corollary 1

Proof. First, we have
$$\frac{1}{n} \sum_{i=1}^{n} \ell\Big(\frac{1}{n} \sum_{j=1}^{n} \tilde{\alpha}_j\, \tilde{k}(x_j, x_i),\ y_i\Big) \le \frac{1}{n} \sum_{i=1}^{n} \ell\Big(\frac{1}{n} \sum_{j=1}^{n} \alpha_j\, \tilde{k}(x_j, x_i),\ y_i\Big)$$
by the optimality of {α̃_j}_{j=1}^{n} w.r.t. the objective using the approximate kernel. Then
$$\hat{L}(\tilde{f}_R) - \hat{L}(\hat{f}_n) \le \frac{1}{n} \sum_{i=1}^{n} \left[ \ell\Big(\frac{1}{n} \sum_{j=1}^{n} \alpha_j\, \tilde{k}(x_j, x_i),\ y_i\Big) - \ell\Big(\frac{1}{n} \sum_{j=1}^{n} \alpha_j\, k(x_j, x_i),\ y_i\Big) \right] \le M\, \frac{\|\alpha\|_1}{n} \Big( \max_{x_1, x_2 \in X} |\tilde{k}(x_1, x_2) - k(x_1, x_2)| \Big) \le M A \Big( \max_{x_1, x_2 \in X} |\tilde{k}(x_1, x_2) - k(x_1, x_2)| \Big),$$
where A is a bound on ‖α‖₁/n. Therefore, to guarantee L̂(f̃_R) − L̂(f̂_n) ≤ ε, it suffices to have max_{x1,x2∈X} |∆_R(x1, x2)| ≤ ε̂ := ε/(MA). Applying Proposition 2 then leads to the result.

B General Experimental Settings

General Setup. For each method, we search for the best parameters on the training set by performing 10-fold cross validation. Following (Haasdonk & Bahlmann, 2004), we use an exact RBF kernel for DSK_RBF and choose the squared distance for DSK_ND. We use the Matlab implementation provided by Loosli et al. (2016) to run the experiments for KSVM. Similarly, we adopted a simple method, random selection, to obtain R = [4, 512] data samples as the representative set for RSM (Pekalska et al., 2001). For our new method, D2KE, since we generate random samples from the distribution, we can use as many as needed to achieve performance close to that of the exact kernel; we report the best number in the range R = [4, 4096] (typically, the larger R is, the better the accuracy). We employ a linear SVM implemented in LIBLINEAR (Fan et al., 2008) for all embedding-based methods (RSM and D2KE) and use LIBSVM (Chang & Lin, 2011) for the precomputed dissimilarity kernels (DSK_RBF, DSK_ND, and KSVM).

All datasets were collected from popular public repositories for machine learning and data science research, including the UCI Machine Learning repository (Frank & Asuncion, 2010), the LibSVM Data Collection (Chang & Lin, 2011), and Kaggle Datasets, except for the time-series dataset IQ, which was shared by researchers from George Mason University. Table 6 lists the detailed properties of the datasets from the four domains. All computations were carried out on a DELL dual-socket system with Intel Xeon processors at 2.93 GHz, for a total of 16 cores and 250 GB of memory, running the SUSE Linux operating system. To accelerate the computation of all methods, we used multithreading, with 12 threads in total, for the various distance computations in all experiments.

C Detailed Experimental Results on Time-Series, Strings, and Images

C.1 Results on multivariate time-series

Setup. For time-series data, we employed the most successful distance measure, DTW, for all methods; a minimal DTW sketch is given below for reference. For all datasets, a Gaussian distribution was found to be applicable, parameterized by its bandwidth σ. The best values of σ and of the length of the random time series were searched in the ranges [1e-3, 1e3] and [2, 50], respectively.
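For reference, here is a minimal O(T1·T2) dynamic-programming sketch of DTW with Euclidean ground cost and an unconstrained warping window; our experiments use a much faster C-MEX implementation.

import numpy as np

def dtw(a, b):
    # a: (T1, d) array, b: (T2, d) array of multivariate observations.
    T1, T2 = len(a), len(b)
    D = np.full((T1 + 1, T2 + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, T1 + 1):
        for j in range(1, T2 + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Extend the cheapest of the diagonal, vertical, and
            # horizontal warping paths.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[T1, T2]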
Results. As shown in Table 2, D2KE consistently outperforms or matches all other baselines in terms of classification accuracy while requiring far less computation time for multivariate time series. The first interesting observation is that our method performs substantially better than KNN, often by a large margin; for example, D2KE achieves 26.62% higher accuracy than KNN on IQ_radio. This is because KNN is sensitive to the data noise common in real-world applications like IQ_radio, and has notoriously poor performance on high-dimensional datasets like Auslan. Moreover, compared to the two distance substitution kernels, DSK_RBF and DSK_ND, and to KSVM operating directly on the indefinite similarity matrix, our method achieves much better performance, suggesting that a representation induced from a truly p.d. kernel makes significantly better use of the data than indefinite kernels. Among all methods, RSM is closest to ours in terms of the practical construction of the feature matrix; however, the random time series sampled by D2KE perform significantly better, as discussed in Section 4. First, RSM simply chooses a subset of the original data points and computes the distances between the whole dataset and this representative set, which may suffer significantly from noise or redundant information in the time series; in contrast, our method samples short random sequences that can both denoise and find the patterns in the data. Second, the number of data points that can be sampled is limited by the total size of the data, while the number of possible random sequences drawn from the distribution is unlimited, making the feature space much richer. Third, RSM may incur significant computational cost for long time series, due to its quadratic complexity in the length.

C.2 Results on strings

Setup. For string data, there are various well-known edit distances. Here we choose the Levenshtein distance as our distance measure, since it can capture global alignments of the underlying strings; a minimal implementation is sketched at the end of this subsection. We first compute the alphabet from the original data and then uniformly sample characters from this alphabet to generate random strings. We search for the best value of γ in the range [1e-5, 1] and for the length of the random strings in the range [2, 50].

Results. As shown in Table 3, D2KE consistently performs better than, or similarly to, the other distance-based baselines. Unlike in the previous experiments, where DTW is not a distance metric, the Levenshtein distance is indeed a distance metric, which helps improve the performance of our baselines; D2KE nevertheless still offers a clear advantage. It is interesting to note that the performance of DSK_RBF is quite close to that of our method, which may be because DSK_RBF with the Levenshtein distance produces a c.p.d. kernel that can essentially be converted into a p.d. kernel. Notice that on relatively large datasets our method achieves better performance, often with far less computation than the other baselines, whose complexity is quadratic in both the number and the length of data samples. For instance, on mnist-str8, D2KE obtains higher accuracy with an order of magnitude less runtime than DSK_RBF and DSK_ND, and two orders of magnitude less than KSVM, which has higher computational costs both for kernel matrix construction and for the eigendecomposition.
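A minimal two-row dynamic-programming sketch of the Levenshtein distance used above (again, our experiments use a C-MEX implementation; the complexity is quadratic in the string length):

def levenshtein(s, t):
    # Classic edit distance: the minimum number of insertions, deletions,
    # and substitutions needed to turn s into t.
    prev = list(range(len(t) + 1))
    for i in range(1, len(s) + 1):
        cur = [i] + [0] * len(t)
        for j in range(1, len(t) + 1):
            cur[j] = min(prev[j - 1] + (s[i - 1] != t[j - 1]),  # substitute
                         prev[j] + 1,                           # delete
                         cur[j - 1] + 1)                        # insert
        prev = cur
    return prev[len(t)]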
C.3 Results on sets of Word Vectors for text

Setup. For text data, following (Kusner et al., 2015), we use the Earth Mover's Distance as our distance measure between two documents, since this distance has recently demonstrated strong performance when combined with KNN for document classification. We first compute the Bag of Words for each document and represent each document as a histogram of word vectors, using Google's pretrained word vectors of dimension 300. We generate random documents consisting of random word vectors uniformly sampled from the unit sphere of the embedding vector space R^300. We search for the best value of γ in the range [1e-2, 1e1] and for the length of the random documents in the range [3, 21].

Results. As shown in Table 4, D2KE outperforms the other baselines on all four datasets. First of all, all distance-based kernel methods perform better than KNN, illustrating the effectiveness of the SVM over KNN on text data. Interestingly, D2KE also performs significantly better than the other baselines, by a notable margin, in large part because document classification is mainly a "topic"-learning problem, which our random documents of short length may fit particularly well. For the datasets with many, and longer, documents, D2KE achieves about one order of magnitude speedup compared with the other exact kernel/similarity methods, thanks to its use of random features.

C.4 Results on sets of SIFT descriptors for images

Setup. For image data, following (Pekalska et al., 2001; Haasdonk & Bahlmann, 2004), we use the modified Hausdorff distance (MHD) (Dubuisson & Jain, 1994) as our distance measure between images, since this distance has shown excellent performance in the literature (Sezgin & Sankur, 2004; Gao et al., 2012). We first applied the open-source OpenCV library to generate a sequence of SIFT descriptors of dimension 128 for each image, and then used MHD to compute the distance between the resulting sets of SIFT descriptors. We generate random images as sets of descriptors uniformly sampled from the unit sphere of the embedding vector space R^128. We search for the best value of γ in the range [1e-3, 1e1] and for the length of the random SIFT-descriptor sequences in the range [3, 15].

Results. As shown in Table 5, D2KE outperforms or matches the other baselines in all cases. D2KE performs best in three cases, while DSK_RBF is best on the dataset decor; this may be because the underlying SIFT features are not strong enough, so random features cannot quickly find good patterns in the images. Nevertheless, the quadratic complexity of DSK_RBF, DSK_ND, and KSVM in both the number of images and the length of the SIFT-descriptor sequences makes them hard to scale to large data. Interestingly, D2KE still performs much better than KNN and RSM, which again supports our claim that D2KE can be a strong alternative to KNN and RSM across applications.
1. What is the focus of the paper in terms of kernel functions and distance functions? 2. What are the weaknesses of the paper regarding its novelty and potential impact on future research? 3. Does the reviewer have any concerns or questions about the paper's analysis of random feature approaches? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any specific areas where the reviewer believes the paper could be improved or expanded upon?
Review
The paper proposes a new way to define kernel functions, using a given distance function to build random features. The paper also provides a standard analysis of the generalization ability of the random feature approach. Overall, the novelty of the paper is low: it is not clear what the benefit of this approach is, and it does not seem that one can build upon it to open new research directions.
3.1 Function Continuity and Space Covering An ideal feature representation for the learning task is (i) compact and (ii) such that the target function f (x) is a simple (e.g. linear) function of the resulting representation. Similarly, an ideal dissimilarity measure d(x1,x2) for learning a target function f (x) should satisfy certain properties. On one hand, a small dissimilarity d(x1,x2) between two objects should imply small difference in the function values | f (x1) − f (x2)|. On the other hand, we want a small expected distance among samples, so that the data lies in a compact space of small intrinsic dimension. We next build up some definitions to formalize these properties. Assumption 2 (Lipschitz Continuity). For any x1,x2 ∈ X, there exists some constant L > 0 such that | f (x1) − f (x2)| ≤ L d(x1,x2), (1) We would prefer the target function to have a small Lipschitz-continuity constant L with respect to the dissimilarity measure d(., .). Such Lipschitz-continuity alone however might not suffice. For example, one can simply set d(x1,x2) = ∞ for any x1 , x2 to satisfy Eq. equation 1. We thus need the following quantity that measures the size of the space implied by a given dissimilarity measure. Definition 1 (Covering Number). Assuming d is a metric. A δ-cover of X w.r.t. d(., .) is a set E s.t. ∀x ∈ X, ∃xi ∈ E, d(x,xi) ≤ δ. Then the covering number N(δ;X, d) is the size of the smallest δ-cover for Xwith respect to d. Assuming the input domain X is compact, the covering number N(δ;X, d) measures its size w.r.t. the distance measure d. We show how the two quantities defined above affect the estimation error of a Nearest-Neighbor Estimator. 3.2 Effective Dimension and Nearest Neighbor Estimation We extend the standard analysis of the estimation error of k-nearest-neighbor from finite-dimensional vector spaces to any structured input space X, with an associated distance measure d, and a finite covering number N(δ;X, d), by defining the effective dimension as follows. Assumption 3 (Effective Dimension). Let the effective dimension pX,d > 0 be the minimum p satisfying ∃c > 0, ∀δ : 0 < δ < 1, N(δ;X, d) ≤ c ( 1 δ )p . Here we provide an example of effective dimension in case of measuring the space of Multiset. Multiset with Hausdorff Distance. A multiset is a set that allows duplicate elements. Consider two multisets x1 = {ui}Mi=1, x2 = {vj}Nj=1. Let ∆(ui, vj) be a ground distance that measures the distance between two elements ui, vj ∈ V in a set. The (modified) Hausdorff Distance (Dubuisson & Jain, 1994) can be defined as d(x1,x2) := max{ 1 N N∑ i=1 min j∈[M] ∆(ui, vj), 1 M M∑ j=1 min i∈[N ] ∆(vj,ui)} (2) Let N(δ;V,∆) be the covering number of V under the ground distance ∆. Let X denote the set of all sets of size bounded by L. By constructing a covering of X containing any set of size less or equal than L with its elements taken from the covering of V, we have N(δ;X, d) ≤ N(δ;V;∆)L . Therefore, pX,d ≤ L log N(δ;V,∆). For example, ifV := {v ∈ Rp | ‖v‖2 ≤ 1} and ∆ is Euclidean distance, we have N(δ;V,∆) = (1 + 2δ )p and pX,d ≤ Lp. Equippedwith the concept of effective dimension, we can obtain the following bound on the estimation error of the k-Nearest-Neighbor estimate of f (x). Theorem 1. LetVar(y | f (x)) ≤ σ2, and f̂n be the k-Nearest Neighbor estimate of the target function f constructed from a training set of size n. Denote p := pX,d . We have Ex [( f̂n(x) − f (x) )2] ≤ σ 2 k + cL2 ( k n )2/p for some constant c > 0. For σ > 0, minimizing RHS w.r.t. 
the parameter k, we have Ex [( f̂n(x) − f (x) )2] ≤ c2σ 4 p+2 L 2p 2+p ( 1 n ) 2 2+p (3) for some constant c2 > 0. Proof. The proof is almost the same to a standard analysis of k-NN’s estimation error in, for example, (Györfi et al., 2006), with the space partition number replaced by the covering number, and dimension replaced by the effective dimension in Assumption 3. When pX,d is reasonably large, the estimation error of k-NN decreases quite slowly with n. Thus, for the estimation error to be bounded by , requires the number of samples to scale exponentially in pX,d . In the following sections, we develop an estimator f̂ based on a RKHS derived from the distance measure, with a considerably better sample complexity for problems with higher effective dimension. 4 From Distance to Kernel for Structured Inputs We aim to address the long-standing problem of how to convert a distance measure into a positivedefinite kernel. Here we introduce a simple but effective approach D2KE that constructs a family of positive-definite kernels from a given distance measure. Given an structured input domain X and a distance measure d(., .), we construct a family of kernels as k(x, y) := ∫ p(ω)φω(x)φω(y)dω,where φω(x) := exp(−γd(x,ω)), (4) whereω ∈ Ω is a random structured object whose elements could be real-valued time-series, strings, and histograms, p(ω) is a distribution over Ω, and φω(x) is a feature map derived from the distance of x to all random objects ω ∈ Ω. The kernel is parameterized by both p(ω) and γ. Relationship toDistance SubstitutionKernel. An insightful interpretation of the kernel in Equation (4) can be obtained by expressing the kernel in Equation (4) as exp ( −γsoftminp(ω){d(x,ω) + d(ω, y)} ) (5) where the soft minimum function, parameterized by p(ω) and γ, is defined as softminp(ω) f (ω) := − 1 γ log ∫ p(ω)e−γ f (ω)dω. (6) Therefore, the kernel k(x, y) can be interpreted as a soft version of the distance substitution kernel (Haasdonk & Bahlmann, 2004), where instead of substituting d(x, y) into the exponent, it substitutes a soft version of the form softminp(ω){d(x,ω) + d(ω, y)}. (7) Note when γ → ∞, the value of Equation (7) is determined by minω∈Ω d(x,ω) + d(ω, y), which equals d(x, y) if X ⊆ Ω, since it cannot be smaller than d(x, y) by the triangle inequality. In other words, when X ⊆ Ω, k(x, y) → exp(−γd(x, y)) as γ →∞. On the other hand, unlike the distance-substituion kernel, our kernel in Equation (5) is always PD by construction. Algorithm 1 Random Feature Approximation of function in RKHS with the kernel in Equation 4 1: Draw R samples from p(ω) to get {ωj}Rj=1. 2: Set the R-dimensional feature embedding as φ̂j(x) = 1√ R exp(−γd(x,ωj)), ∀ j ∈ [R] 3: Solve the following problem for some µ > 0: ŵ := argmin w∈RR 1 n n∑ i=1 `(wT φ̂(xi), yi) + µ 2 ‖w‖2 4: Output the estimated function f̃R(x) := ŵT φ̂(x). Random Feature Approximation. The reader might have noticed that the kernel in Equation (4) cannot be evaluated analytically in general. However, this does not prohibit its use in practice, so long as we can approximate it via Random Features (RF) (Rahimi & Recht, 2008), which in our case is particularly natural as the kernel itself is defined via a random feature map. 
Thus, our kernel with the RF approximation can not only be used in small problems but also in large-scale settings with a large number of samples, where standard kernel methods with O(n2) complexity are no longer efficient enough and approximation methods, such as Random Features, must be employed (Rahimi & Recht, 2008). Given the RF approximation, one can then directly learn a target function as a linear function of the RF feature map, by minimizing a domain-specific empirical risk. It is worth noting that a recent work (Sinha & Duchi, 2016) that learns to select a set of random features by solving an optimization problem in an supervised setting is orthogonal to our D2KE approach and could be extended to develop a supervised D2KE method. We outline this overall RF based empirical risk minimization for our class of D2KE kernels in Algorithm 1. It is worth pointing out that in line 2 of Algorithm 1 the random feature embeddings are computed by a structured distance measure between the original structured inputs and the generated random structured inputs, followed by the application of the exponent function parameterized by γ. This is in contrast with traditional RF methods that translate the input data matrix into the embedding matrix via a matrix multiplication with random Gaussian matrix followed by a non-linearity. We will provide a detailed analysis of our estimator in Algorithm 1 in Section 5, and contrast its statistical performance to that of K-nearest-neighbor. Relationship to Representative-Set Method. A naive choice of p(ω) relates our approach to the representative-set method (RSM): setting Ω = X, with p(ω) = p(x). This gives us a kernel Equation (4) that depends on the data distribution. One can then obtain a Random-Feature approximation to the kernel in Equation (4) by holding out a part of the training data {x̂j}Rj=1 as samples from p(ω), and creating an R-dimensional feature embedding of the form: φ̂ j(x) := 1√ R exp ( −γd(x, x̂j) ) , j ∈ [R], (8) as in Algorithm 1. This is equivalent to a 1/ √ R-scaled version of the embedding function in the representative-set method (or similarity-as-features method) (Graepel et al., 1999; Pekalska et al., 2001; Pkkalska & Duin, 2005; Pekalska & Duin, 2006; 2008; Chen et al., 2009a; Duin & Pękalska, 2012), where one computes each sample’s similarity to a set of representatives as its feature representation. However, here by interpreting Equation (8) as a random-feature approximation to the kernel in Equation (4), we obtain a much nicer generalization error bound even in the case R→ ∞. This is in contrast to the analysis of RSM in (Chen et al., 2009a), where one has to keep the size of the representative set small (of the order O(n)) in order to have reasonable generalization performance. Effect of p(ω). The choice of p(ω) plays an important role in our kernel. Surprisingly, we found that many “close to uniform” choices of p(ω) in a variety of domains give better performance than for instance the choice of the data distribution p(ω) = p(x) (as in the representative-setmethod). 
Here are some examples from our experiments: i) In the time-series domain with dissimilarity computed via Dynamic Time Warping (DTW), a distribution p(ω) corresponding to random time series of length uniform in ∈ [2, 10], and with Gaussian-distributed elements, yields much better performance than the Representative-Set Method (RSM); ii) In string classification, with edit distance, a distribution p(ω) corresponding to random strings with elements uniformly drawn from the alphabet Σ yields much better performance than RSM; iii) When classifying sets of vectors with the Hausdorff distance in Equation (2), a distribution p(ω) corresponding to random sets of size uniform in ∈ [3, 15] with elements drawn uniformly from a unit sphere yields significantly better performance than RSM. We conjecture two potential reasons for the better performance of the chosen distributions p(ω) in these cases, though a formal theoretical treatment is an interesting subject we defer to future work. Firstly, as p(ω) is synthetic, one can generate unlimited number of random features, which results in a much better approximation to the exact kernel in Equation (4). In contrast, RSM requires held-out samples from the data, which could be quite limited for a small data set. Second, in some cases, even with a small or similar number of random features to RSM, the performance of the selected distribution still leads to significantly better results. For those cases we conjecture that the selected p(ω) generates objects that capture semantic information more relevant to the estimation of f (x), when coupled with our feature map under the dissimilarity measure d(x,ω). 5 Analysis In this section, we analyze the proposed framework from the perspectives of error decomposition. LetH be the RKHS corresponding to the kernel in Equation (4). Let fC := argmin f ∈H E[`( f (x), y)] s.t .‖ f ‖H ≤ C (9) be the population risk minimizer subject to the RKHS norm constraint ‖ f ‖H ≤ C. And let f̂n := argmin f ∈H 1 n n∑ i=1 `( f (xi), yi) s.t .‖ f ‖H ≤ C (10) be the corresponding empirical risk minimizer. In addition, let f̃R be the estimated function from our random feature approximation (Algorithm 1). Then denote the population and empirical risks as L( f ) and L̂( f ) respectively. We have the following risk decomposition L( f̃R) − L( f ) = (L( f̃R) − L( f̂n))︸ ︷︷ ︸ randomf eature + (L( f̂n) − L( fC))︸ ︷︷ ︸ estimation + (L( fC) − L( f ))︸ ︷︷ ︸ approximation In the following, we will discuss the three terms from the rightmost to the leftmost. Function Approximation Error. The RKHS implied by the kernel in Equation (4) is H := f f (x) = m∑j=1 αj k(xj,x), xj ∈ X, ∀ j ∈ [m], m ∈ N , which is a smaller function space than the space of Lipschitz-continuous function w.r.t. the distance d(x1,x2). As we show, any function f ∈ H is Lipschitz-continous w.r.t. the distance d(., .). Proposition 1. LetH be the RKHS corresponding to the kernel in Equation (4) derived from some metric d(., .). For any f ∈ H , | f (x1) − f (x2)| ≤ L f d(x1,x2) where L f = γC. We refer readers to the detailed proof in Appendix A.1. While any f in the RKHS is Lipschitzcontinuous w.r.t. the given distance d(., .), we are interested in imposing additional smoothness via the RKHS norm constraint ‖ f ‖H ≤ C, and by the kernel parameter γ. The hope is that the best function fC within this class approximates the true function f well in terms of the approximation error L( fC) − L( f ). 
The stronger assumption made by the RKHS gives us a qualitatively better estimation error, as discussed below. Estimation Error. Define Dλ as Dλ := ∞∑ j=1 1 1 + λ/µj where {µj}∞j=1 is the eigenvalues of the kernel in Equation (5) and λ is a tuning parameter. It holds that for any λ ≥ Dλ/n, with probability at least 1 − δ, L( f̂n) − L( fC) ≤ c(log 1δ )2C2λ for some universal constant c (Zhang, 2005). Here we would like to set λ as small as possible (as a function of n). By using the following kernel-independent bound: Dλ ≤ 1/λ, we have λ = 1/ √ n and thus a bound on the estimation error L( f̂n) − L( fC) ≤ c(log 1 δ )2C2 √ 1 n . (11) The estimation error is quite standard for a RKHS estimator. It has a much better dependency w.r.t. n (i.e. n−1/2) compared to that of k-nearest-neighbor method (i.e. n−2/(2+pX,d )) especially for higher effective dimension. A more careful analysis might lead to tighter bound on Dλ and also a better rate w.r.t. n. However, the analysis of Dλ for our kernel in Equation (4) is much more difficult than that of typical cases as we do not have an analytic form of the kernel. Random Feature Approximation. Denote L̂(.) as the empirical risk function. The error from RF approximation L( f̃R) − L( f̂n) can be further decomposed as (L( f̃R) − L̂( f̃R)) + (L̂( f̃R) − L̂( f̂n)) + (L̂( f̂n) − L( f̂n)) where the first and third terms can be bounded via the same estimation error bound in Equation (11), as both f̃R and f̂n have RKHS norm bounded by C. Therefore, in the following, we focus only on the second term of empirical risk. We start by analyzing the approximation error of the kernel ∆R(x1,x2) = k̃R(x1,x2) − k(x1,x2) where k̃R(x1,x2) := 1 R R∑ j=1 φ j(x1)φ j(x2). (12) Proposition 2. Let ∆R(x1,x2) = k(x1,x2) − k̃(x1,x2), we have uniform convergence of the form P { max x1,x2∈X |∆R(x1,x2)| > 2t } ≤ 2 ( 12γ t )2pX,d e−Rt 2/2, where pX,d is the effective dimension of X under metric d(., .). In other words, to guarantee |∆R(x1,x2)| ≤ with probability at least 1 − δ, it suffices to have R = Ω ( pX,d 2 log(γ ) + 1 2 log(1 δ ) ) . We refer readers to the detailed proof in Appendix A.2. Proposition 2 gives an approximation error in terms of kernel evaluation. To get a bound on the empirical risk L̂( f̃R) − L̂( f̂n), consider the optimal solution of the empirical risk minimization. By the Representer theorem we have f̂n(x) = 1n ∑ i αik(xi,x) and f̃R(x) = 1n ∑ i α̃i k̃(xi,x). Therefore, we have the following corollary. Corollary 1. To guarantee L̂( f̃R) − L̂( f̂n) ≤ , with probability 1 − δ, it suffices to have R = Ω ( pX,dM2 A2 2 log(γ ) + M 2 A2 2 log(1 δ ) ) . where M is the Lipschitz-continuous constant of the loss function `(., y), and A is a bound on ‖α‖1/n. We refer readers to the detailed proof in Appendix A.3. For most of loss functions, A and M are typically small constants. Therefore, Corollary 1 states that it suffices to have number of Random Features proportional to the effective dimension O(pX,d/ 2) to achieve an approximation error. Combining the three error terms, we can show that the proposed framework can achieve -suboptimal performance. Claim 1. Let f̃R be the estimated function from our random feature approximation based ERM estimator in Algorithm 1, and let f ∗ denote the desired target function. Suppose further that for some absolute constants c1, c2 > 0 (up to some logarithmic factor of 1/ and 1/δ): 1. The target function f ∗ lies close to the population risk minimizer fC lying in the RKHS spanned by the D2KE kernel: L( fC) − L( f ) ≤ /2. 2. 
6 Experiments

We evaluate the proposed method in four different domains involving time series, strings, texts, and images. First, we discuss the dissimilarity measures and data characteristics for each set of experiments. Then we introduce the comparison among different distance-based methods and report the corresponding results.

Distance Measures. We have chosen four well-known dissimilarity measures: 1) Dynamic Time Warping (DTW) for time series (Berndt & Clifford, 1994); 2) Edit Distance (Levenshtein distance) for strings (Navarro, 2001); 3) Earth Mover's Distance (Rubner et al., 2000) for measuring the semantic distance between two Bags of Words (using pretrained word vectors), for representing documents; 4) (Modified) Hausdorff distance (Huttenlocher et al., 1993; Dubuisson & Jain, 1994) for measuring the semantic closeness of two Bags of Visual Words (using SIFT vectors), for representing images. Note that the Bag of (Visual) Words in 3) and 4) can also be regarded as a histogram. Since most distance measures are computationally demanding, having quadratic complexity, we adapted or implemented C-MEX programs for them; other code was written in Matlab.

Datasets. For each domain, we selected 4 datasets for our experiments. For time-series data, all are multivariate time series, and the length of each time series varies from 2 to 205 observations; three are from the UCI Machine Learning Repository (Frank & Asuncion, 2010), and the other is generated from IQ (in-phase and quadrature component) samples from a wireless line-of-sight communication system from GMU. For string data, the size of the alphabet is between 4 and 8 and the length of each string ranges from 34 to 198; two of the datasets are from the UCI Machine Learning Repository and the other two are from the LibSVM Data Collection (Chang & Lin, 2011). For text data, the datasets are chosen to partially overlap with those in (Kusner et al., 2015); the average document length varies from 9.9 to 117. For image data, all datasets were derived from Kaggle; we computed a set of SIFT descriptors to represent each image, and the number of SIFT feature vectors per image varies from 1 to 914. We divided each dataset into 70/30 train and test subsets (if there was no predefined train/test split). Properties of these datasets are summarized in Table 6 in Appendix B.

Baselines. We compare D2KE against 5 state-of-the-art baselines: 1) KNN, a simple yet universal method for applying any distance measure to classification tasks; 2) DSK_RBF (Haasdonk & Bahlmann, 2004), distance substitution kernels, a general framework for kernel construction that substitutes a problem-specific distance measure into ordinary kernel functions (we use a Gaussian RBF kernel); 3) DSK_ND (Haasdonk & Bahlmann, 2004), another class of distance substitution kernels using the negative distance; 4) KSVM (Loosli et al., 2016), which learns directly from the (indefinite) similarity matrix in the original Krein space; 5) RSM (Pekalska et al., 2001), which builds an embedding by computing distances to randomly selected representative samples. Among these baselines, KNN, DSK_RBF, DSK_ND, and KSVM have quadratic complexity $O(N^2 L^2)$ in both the number of data samples and the length of the sequences, while RSM has computational complexity $O(NRL^2)$, linear in the number of data samples but still quadratic in the length of the sequence. Our method, D2KE, by contrast, has complexity $O(NRL)$, linear in both the number of data samples and the length of the sequence; a minimal end-to-end sketch of the pipeline is given below.
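The following sketch (a toy illustration under stated assumptions, not the paper's Matlab/C-MEX code) runs the full D2KE pipeline of Algorithm 1: embed each input via distances to random objects, then train a linear SVM. We use scikit-learn's `LinearSVC`, which is backed by LIBLINEAR as mentioned above; 2-D points with Euclidean distance and a Gaussian $p(\omega)$ stand in for the structured inputs and measures.

```python
import numpy as np
from sklearn.svm import LinearSVC    # LIBLINEAR-backed linear SVM

rng = np.random.default_rng(0)

def d2ke_embed(X, omegas, gamma, dist):
    # Algorithm 1, line 2: phi_j(x) = exp(-gamma * d(x, omega_j)) / sqrt(R)
    R = len(omegas)
    F = np.array([[np.exp(-gamma * dist(x, w)) for w in omegas] for x in X])
    return F / np.sqrt(R)

# toy data and toy metric; in the paper X would hold structured objects
# and dist would be DTW, edit distance, EMD, or Hausdorff distance
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
omegas = list(rng.normal(size=(256, 2)))          # omega ~ p(omega), Gaussian

Phi = d2ke_embed(X, omegas, gamma=1.0, dist=lambda a, b: np.linalg.norm(a - b))
clf = LinearSVC(C=1.0).fit(Phi[:210], y[:210])    # 70/30 train/test split
print("test accuracy:", clf.score(Phi[210:], y[210:]))
```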
For each method, we search for the best parameters on the training set by performing 10-fold cross-validation. For our new method, D2KE, since we generate random samples from the distribution, we can use as many as needed to achieve performance close to that of the exact kernel; we report the best number in the range $R \in [4, 4096]$ (typically, the larger $R$, the better the accuracy). We employ a linear SVM implemented using LIBLINEAR (Fan et al., 2008) for all embedding-based methods (RSM and D2KE) and use LIBSVM (Chang & Lin, 2011) for the precomputed dissimilarity kernels (DSK_RBF, DSK_ND, and KSVM). More details of the experimental setup are provided in Appendix B.

Results. As shown in Tables 2, 3, 4, and 5, D2KE consistently outperforms or matches the baseline methods in terms of classification accuracy while requiring far less computation time. There are several observations worth making here. First, D2KE performs much better than KNN, supporting our claim that D2KE can be a strong alternative to KNN across applications. Second, compared to the two distance substitution kernels DSK_RBF and DSK_ND and to the KSVM method operating directly on the indefinite similarity matrix, our method achieves much better performance, suggesting that a representation induced from a truly PD kernel makes significantly better use of the data than indefinite kernels. Among all methods, RSM is closest to ours in terms of the practical construction of the feature matrix. However, the random objects (time series, strings, or sets) sampled by D2KE perform significantly better, as discussed in Section 4. More detailed discussions of the experimental results for each domain are given in Appendix C.

7 Conclusion and Future Work

In this work, we have proposed a general framework for deriving a positive-definite kernel and a feature embedding function from a given dissimilarity measure between input objects. The framework is especially useful for structured input domains such as sequences, time series, and sets, where many well-established dissimilarity measures have been developed. Our framework subsumes at least two existing approaches as special or limiting cases, and opens up what we believe will be a useful new direction for creating embeddings of structured objects based on distances to random objects. A promising direction for extension is to develop such distance-based embeddings within a deep architecture to support the use of structured inputs in an end-to-end learning system.

A Proofs of Propositions 1 and 2

A.1 Proof of Proposition 1

Proof. Note that the function $g(t) = \exp(-\gamma t)$ is Lipschitz-continuous with Lipschitz constant $\gamma$. Therefore,
$$|f(x_1) - f(x_2)| = |\langle f, \phi(x_1) - \phi(x_2)\rangle| \le \|f\|_{\mathcal{H}}\, \|\phi(x_1) - \phi(x_2)\|_{\mathcal{H}}$$
$$= \|f\|_{\mathcal{H}} \sqrt{\int_{\omega} p(\omega)\left(\phi_\omega(x_1) - \phi_\omega(x_2)\right)^2 d\omega} \le \|f\|_{\mathcal{H}} \sqrt{\int_{\omega} p(\omega)\, \gamma^2 \left|d(x_1,\omega) - d(x_2,\omega)\right|^2 d\omega}$$
$$\le \gamma \|f\|_{\mathcal{H}} \sqrt{\int_{\omega} p(\omega)\, d(x_1,x_2)^2\, d\omega} = \gamma \|f\|_{\mathcal{H}}\, d(x_1, x_2) \le \gamma C\, d(x_1, x_2).$$

A.2 Proof of Proposition 2

Proof. Our goal is to bound the magnitude of $\Delta_R(x_1,x_2) = \tilde{k}_R(x_1,x_2) - k(x_1,x_2)$. Since $\mathbb{E}[\Delta_R(x_1,x_2)] = 0$ and $|\Delta_R(x_1,x_2)| \le 1$, Hoeffding's inequality gives
$$P\{|\Delta_R(x_1,x_2)| \ge t\} \le 2\exp(-Rt^2/2)$$
for a given input pair $(x_1, x_2)$. To get a uniform bound that holds for all $(x_1, x_2) \in \mathcal{X} \times \mathcal{X}$, we find an $\epsilon$-covering $E$ of $\mathcal{X}$ w.r.t. $d(\cdot,\cdot)$ of size $N(\epsilon; \mathcal{X}, d)$. Applying a union bound over the $\epsilon$-covering $E$ for $x_1$ and $x_2$, we have
$$P\left\{ \max_{x_1' \in E,\, x_2' \in E} |\Delta_R(x_1', x_2')| > t \right\} \le 2|E|^2 \exp(-Rt^2/2). \tag{13}$$
Then, by the definition of $E$, we have $|d(x_1,\omega) - d(x_1',\omega)| \le d(x_1, x_1') \le \epsilon$. Together with the fact that $\exp(-\gamma t)$ is Lipschitz-continuous with parameter $\gamma$ for $t \ge 0$, we have $|\phi_\omega(x_1) - \phi_\omega(x_1')| \le \gamma\epsilon$, and thus
$$|\tilde{k}_R(x_1,x_2) - \tilde{k}_R(x_1',x_2')| \le 3\gamma\epsilon, \qquad |k(x_1,x_2) - k(x_1',x_2')| \le 3\gamma\epsilon$$
for $\gamma\epsilon$ chosen to be $\le 1$. This gives us
$$|\Delta_R(x_1,x_2) - \Delta_R(x_1',x_2')| \le 6\gamma\epsilon. \tag{14}$$
Combining Equations (13) and (14), we have
$$P\left\{ \max_{x_1, x_2 \in \mathcal{X}} |\Delta_R(x_1,x_2)| > t + 6\gamma\epsilon \right\} \le 2\left(\frac{2}{\epsilon}\right)^{2 p_{\mathcal{X},d}} \exp(-Rt^2/2). \tag{15}$$
Choosing $\epsilon = t/6\gamma$ yields the result.

A.3 Proof of Corollary 1

Proof. First, we have
$$\frac{1}{n}\sum_{i=1}^{n} \ell\left(\frac{1}{n}\sum_{j=1}^{n} \tilde{\alpha}_j \tilde{k}(x_j,x_i),\ y_i\right) \le \frac{1}{n}\sum_{i=1}^{n} \ell\left(\frac{1}{n}\sum_{j=1}^{n} \alpha_j \tilde{k}(x_j,x_i),\ y_i\right)$$
by the optimality of $\{\tilde{\alpha}_j\}_{j=1}^{n}$ w.r.t. the objective using the approximate kernel. Then we have
$$\hat{L}(\tilde{f}_R) - \hat{L}(\hat{f}_n) \le \frac{1}{n}\sum_{i=1}^{n}\left[ \ell\left(\frac{1}{n}\sum_{j=1}^{n} \alpha_j \tilde{k}(x_j,x_i),\ y_i\right) - \ell\left(\frac{1}{n}\sum_{j=1}^{n} \alpha_j k(x_j,x_i),\ y_i\right) \right]$$
$$\le M\, \frac{\|\alpha\|_1}{n}\left( \max_{x_1,x_2 \in \mathcal{X}} |\tilde{k}(x_1,x_2) - k(x_1,x_2)| \right) \le M A \left( \max_{x_1,x_2 \in \mathcal{X}} |\tilde{k}(x_1,x_2) - k(x_1,x_2)| \right),$$
where $A$ is a bound on $\|\alpha\|_1/n$. Therefore, to guarantee $\hat{L}(\tilde{f}_R) - \hat{L}(\hat{f}_n) \le \epsilon$, we need $\max_{x_1,x_2 \in \mathcal{X}} |\Delta_R(x_1,x_2)| \le \hat{\epsilon} := \epsilon/MA$. Applying Proposition 2 then leads to the result.

B General Experimental Settings

General Setup. For each method, we search for the best parameters on the training set by performing 10-fold cross-validation. Following (Haasdonk & Bahlmann, 2004), we use an exact RBF kernel for DSK_RBF, while choosing the squared distance for DSK_ND. We use the Matlab implementation provided by Loosli et al. (2016) to run the experiments for KSVM. Similarly, we adopted a simple method, random selection, to obtain $R \in [4, 512]$ data samples as the representative set for RSM (Pekalska et al., 2001). For our new method, D2KE, since we generate random samples from the distribution, we can use as many as needed to achieve performance close to that of the exact kernel; we report the best number in the range $R \in [4, 4096]$ (typically, the larger $R$, the better the accuracy). We employ a linear SVM implemented using LIBLINEAR (Fan et al., 2008) for all embedding-based methods (RSM and D2KE) and use LIBSVM (Chang & Lin, 2011) for the precomputed dissimilarity kernels (DSK_RBF, DSK_ND, and KSVM).

All datasets were collected from popular public repositories for machine learning and data science research, including the UCI Machine Learning Repository (Frank & Asuncion, 2010), the LibSVM Data Collection (Chang & Lin, 2011), and Kaggle Datasets, except for the time-series dataset IQ, which was shared by researchers from George Mason University. Table 6 lists the detailed properties of the datasets from the four different domains. All computations were carried out on a DELL dual-socket system with Intel Xeon processors at 2.93 GHz, for a total of 16 cores and 250 GB of memory, running the SUSE Linux operating system. To accelerate the computation of all methods, we used multithreading with 12 threads in total for the various distance computations in all experiments.

C Detailed Experimental Results on Time Series, Strings, Texts, and Images

C.1 Results on multivariate time series

Setup. For time-series data, we employed the most successful distance measure, DTW, for all methods. For all datasets, a Gaussian distribution was found to be applicable, parameterized by its bandwidth $\sigma$. The best values for $\sigma$ and for the length of the random time series were searched in the ranges $[10^{-3}, 10^3]$ and $[2, 50]$, respectively; a minimal sketch of this setup follows below.
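The sketch below illustrates this setup: random multivariate time series $\omega$ with Gaussian frames and length uniform in $[2, 50]$, fed through a DTW-based D2KE feature map. The quadratic-time DTW here is a textbook dynamic-programming implementation, not the paper's optimized C-MEX code, and the parameter values are placeholders.

```python
import numpy as np

def dtw(a, b):
    # basic DTW with Euclidean ground cost between multivariate frames
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)

def random_series(dim, sigma):
    # omega ~ p(omega): Gaussian frames, length uniform in [2, 50]
    L = rng.integers(2, 51)
    return sigma * rng.normal(size=(L, dim))

x = rng.normal(size=(30, 3))             # a toy multivariate time series
omegas = [random_series(dim=3, sigma=1.0) for _ in range(128)]
gamma = 0.1
phi_x = np.exp(-gamma * np.array([dtw(x, w) for w in omegas])) / np.sqrt(128)
```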
Results. As shown in Table 2, D2KE consistently outperforms or matches all other baselines in terms of classification accuracy while requiring far less computation time for multivariate time series. The first interesting observation is that our method performs substantially better than KNN, often by a large margin; e.g., D2KE achieves 26.62% higher accuracy than KNN on IQ_radio. This is because KNN is sensitive to the data noise common in real-world applications like IQ_radio, and has notoriously poor performance on high-dimensional datasets like Auslan. Moreover, compared to the two distance substitution kernels DSK_RBF and DSK_ND, and to KSVM operating directly on the indefinite similarity matrix, our method achieves much better performance, suggesting that a representation induced from a truly p.d. kernel makes significantly better use of the data than indefinite kernels. Among all methods, RSM is closest to ours in terms of the practical construction of the feature matrix. However, the random time series sampled by D2KE perform significantly better, as discussed in Section 4. First, RSM simply chooses a subset of the original data points and computes the distances between the whole dataset and this representative set; this may suffer significantly from noise or redundant information in the time series. In contrast, our method samples short random sequences that can both denoise and uncover the patterns in the data. Second, the number of data points that can be sampled is limited by the total size of the data, while the number of possible random sequences drawn from the distribution is unlimited, making the feature space much richer. Third, RSM may incur significant computational cost for long time series, due to its quadratic complexity in the length.

C.2 Results on strings

Setup. For string data, there are various well-known edit distances. Here we choose the Levenshtein distance as our distance measure, since it can capture global alignments of the underlying strings. We first compute the alphabet from the original data and then uniformly sample characters from this alphabet to generate random strings (a minimal sketch follows after the results below). We search for the best value of $\gamma$ in the range $[10^{-5}, 1]$ and for the length of the random strings in the range $[2, 50]$.

Results. As shown in Table 3, D2KE consistently performs better than or similarly to the other distance-based baselines. Unlike in the previous experiments, where DTW is not a distance metric, the Levenshtein distance is indeed a distance metric; this helps improve the performance of our baselines. However, D2KE still offers a clear advantage over the baselines. It is interesting to note that the performance of DSK_RBF is quite close to our method's, which may be because DSK_RBF with the Levenshtein distance produces a c.p.d. kernel that can essentially be converted into a p.d. kernel. Notice that on relatively large datasets our method achieves better performance, often with far less computation than the other baselines, whose complexity is quadratic in both the number and the length of the data samples. For instance, on mnist-str8, D2KE obtains higher accuracy with an order of magnitude less runtime than DSK_RBF and DSK_ND, and two orders of magnitude less than KSVM, owing to the latter's higher computational costs for both kernel matrix construction and eigendecomposition.
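Here is the promised string-domain sketch (ours, under toy assumptions): random strings over the data's alphabet with length uniform in $[2, 50]$, scored against an input by a standard dynamic-programming Levenshtein distance. The alphabet and query string below are placeholders; in practice the alphabet is computed from the data as described above.

```python
import math
import random

def levenshtein(s, t):
    # classic dynamic-programming edit distance
    m, n = len(s), len(t)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        D[i][0] = i
    for j in range(n + 1):
        D[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = min(D[i - 1][j] + 1,                              # deletion
                          D[i][j - 1] + 1,                              # insertion
                          D[i - 1][j - 1] + (s[i - 1] != t[j - 1]))     # substitution
    return D[m][n]

random.seed(0)
alphabet = "ACGT"                        # placeholder; computed from the data

def random_string():
    # omega ~ p(omega): uniform characters, length uniform in [2, 50]
    L = random.randint(2, 50)
    return "".join(random.choice(alphabet) for _ in range(L))

omegas = [random_string() for _ in range(128)]
gamma = 0.01
x = "GATTACA"
phi_x = [math.exp(-gamma * levenshtein(x, w)) / math.sqrt(128) for w in omegas]
```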
C.3 Results on sets of word vectors for text

Setup. For text data, following (Kusner et al., 2015), we use the Earth Mover's Distance as the distance measure between two documents, since this distance has recently demonstrated strong performance when combined with KNN for document classification. We first compute the Bag of Words for each document and represent each document as a histogram of word vectors, where Google's pretrained word vectors of dimension 300 are used. We generate random documents whose word vectors are uniformly sampled from the unit sphere of the embedding space $\mathbb{R}^{300}$. We search for the best value of $\gamma$ in the range $[10^{-2}, 10]$ and for the length of the random documents in the range $[3, 21]$.

Results. As shown in Table 4, D2KE outperforms the other baselines on all four datasets. First of all, all distance-based kernel methods perform better than KNN, illustrating the effectiveness of SVM over KNN on text data. Interestingly, D2KE also performs significantly better than the other baselines by a notable margin, in large part because document classification is mainly a matter of "topic" learning, where our random documents of short length may fit the task particularly well. For datasets with many, longer documents, D2KE achieves about one order of magnitude speedup over the other exact kernel/similarity methods, thanks to its use of random features.

C.4 Results on sets of SIFT descriptors for images

Setup. For image data, following (Pekalska et al., 2001; Haasdonk & Bahlmann, 2004), we use the modified Hausdorff distance (MHD) (Dubuisson & Jain, 1994) as the distance measure between images, since this distance has shown excellent performance in the literature (Sezgin & Sankur, 2004; Gao et al., 2012). We first applied the open-source OpenCV library to generate a sequence of SIFT descriptors of dimension 128, and then used MHD to compute the distance between sets of SIFT descriptors. We generate random images whose SIFT descriptors are uniformly sampled from the unit sphere of the embedding space $\mathbb{R}^{128}$ (a minimal sketch follows below). We search for the best value of $\gamma$ in the range $[10^{-3}, 10]$ and for the length of the random SIFT-descriptor sequences in the range $[3, 15]$.

Results. As shown in Table 5, D2KE outperforms or matches the other baselines in all cases. D2KE performs best in three cases, while DSK_RBF is best on the dataset decor; this may be because the underlying SIFT features there are not good enough, so random features cannot quickly find good patterns in the images. Nevertheless, the quadratic complexity of DSK_RBF, DSK_ND, and KSVM in both the number of images and the length of the SIFT-descriptor sequences makes them hard to scale to large data. Interestingly, D2KE still performs much better than KNN and RSM, which again supports our claim that D2KE can be a strong alternative to KNN and RSM across applications.
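The referenced sketch: a direct implementation of the modified Hausdorff distance of Equation (2), the maximum of the two mean directed nearest-neighbor distances, together with the random sets of unit-sphere vectors used as $\omega$ in this setup. The descriptor sets below are synthetic stand-ins for real SIFT-descriptor sets, and $\gamma$ is a placeholder value.

```python
import numpy as np

def modified_hausdorff(X, Y):
    # Equation (2): max of the two mean directed nearest-neighbor distances
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return max(D.min(axis=1).mean(), D.min(axis=0).mean())

rng = np.random.default_rng(0)

def random_set(dim=128):
    # omega ~ p(omega): set size uniform in [3, 15], elements on the unit sphere
    k = rng.integers(3, 16)
    V = rng.normal(size=(k, dim))
    return V / np.linalg.norm(V, axis=1, keepdims=True)

x = random_set()                          # stand-in for an image's SIFT-descriptor set
omegas = [random_set() for _ in range(64)]
gamma = 0.5
phi_x = np.exp(-gamma * np.array([modified_hausdorff(x, w) for w in omegas]))
phi_x /= np.sqrt(len(omegas))
```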
1. What is the main contribution of the paper regarding kernel functions and dissimilarity measures? 2. What are the strengths and weaknesses of the proposed approach compared to previous works such as [1-3] and [4]? 3. How does the reviewer assess the novelty and significance of the paper's content, particularly in relation to the theoretical derivations and experimental comparisons? 4. What are some potential improvements or suggestions for future research related to this topic?
Review
Review The paper proposes a kernel function based on a dissimilarity measure between a pair of instances. A general dissimilarity measure does not necessarily have the properties of a metric, and the standard transformations from dissimilarities to kernel or similarity functions typically result in indefinite kernel matrices. For example, the two most frequently used transformations are the negative double-centering transformation characteristic of multidimensional scaling and the exponentiation of the negative squared dissimilarities between a pair of instances (e.g., the exponentiated negative squared geodesic distance defines an indefinite kernel). The main idea of the paper is to mimic the random Fourier features approximation of stationary kernels and use dissimilarities as basis functions. In particular, the proposed kernel function based on dissimilarities can be written as k(x,x')=\int \phi(w, x) \phi(w, x') p(w) dw, where \phi is a dissimilarity function and p(w) is a probability density function. This is the same form of kernel function as the one considered in [1-3], with \phi chosen to be different from the cosine basis function. While the focus in [1-3] was on stationary kernels, their theoretical derivations and presentation were designed for general basis functions. Having this in mind, all the theoretical derivations by the authors are readily obtained from [1-3], and I fail to see any novelty here. For example, the result in Proposition 2 follows directly from [1, Claim 1] or [2]. For the proposed kernel, I also fail to see a significant difference compared to [4], where dissimilarities are used as features. In fact, for a large number of dissimilarity functions/features and an l1 penalty on the linear model, the approach from [4] retrieves the proposed approximate kernel with the `optimal' density function. An experiment along these lines was, for instance, reported in [5]. In the related work part, the authors also mention previous approaches for learning with indefinite kernels and claim that they suffer from the non-convexity of the optimization problem for risk minimization. A recent approach builds on [6] and alleviates this shortcoming with a non-convex problem for which a globally optimal solution can be found in polynomial time (e.g., see [7]). In the experiments, the most appropriate baseline would be the approach from [5], and yet that approach is not even listed in the related work section. [1] A. Rahimi and B. Recht (NIPS 2008). Random features for large-scale kernel machines. [2] A. Rahimi and B. Recht (IEEE 2008). Uniform approximation of functions with random bases. [3] A. Rahimi and B. Recht (NIPS 2009). Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. [4] T. Graepel, R. Herbrich, P. Bollmann-Sdorra, and K. Obermayer (NIPS 1999). Classification on pairwise proximity data. [5] I. Alabdulmohsin, X. Gao, and X.Z. Zhang (PMLR 2015). Support vector machines with indefinite kernels. [6] C.S. Ong, X. Mary, S. Canu, and A. Smola (ICML 2004). Learning with non-positive kernels. [7] D. Oglic and T. Gaertner (ICML 2018). Learning in Reproducing Kernel Krein Spaces.
ICLR
Title D2KE: From Distance to Kernel and Embedding via Random Features For Structured Inputs Abstract We present a new methodology that constructs a family of positive definite kernels from any given dissimilarity measure on structured inputs whose elements are either real-valued time series or discrete structures such as strings, histograms, and graphs. Our approach, which we call D2KE (from Distance to Kernel and Embedding), draws from the literature of Random Features. However, instead of deriving random feature maps from a user-defined kernel to approximate kernel machines, we build a kernel from a random feature map, that we specify given the distance measure. We further propose use of a finite number of random objects to produce a random feature embedding of each instance. We provide a theoretical analysis showing that D2KE enjoys better generalizability than universal Nearest-Neighbor estimates. On one hand, D2KE subsumes the widely-used representative-set method as a special case, and relates to the well-known distance substitution kernel in a limiting case. On the other hand, D2KE generalizes existing Random Features methods applicable only to vector input representations to complex structured inputs of variable sizes. We conduct classification experiments over such disparate domains as time series, strings, and histograms (for texts and images), for which our proposed framework compares favorably to existing distance-based learning methods in terms of both testing accuracy and computational time. 1 Introduction In many problem domains, it is easier to specify a reasonable dissimilarity (or similarity) function between instances than to construct a feature representation. This is particularly the case with structured inputs whose elements are either real-valued time series or discrete structures such as strings, histograms, and graphs, where it is typically less than clear how to construct the representation of entire structured inputs with potentially widely varying sizes, even when given a good feature representation of each individual component. Moreover, even for complex structured inputs, there are many well-developed dissimilarity measures, such as the Dynamic Time Warping measure between time series, Edit Distance between strings, Hausdorff distance between sets, andWasserstein distance between distributions. However, standard machine learning methods are designed for vector representations, and classically there has been far less work on distance-based methods for either classification or regression on structured inputs. The most common distance-based method is Nearest-Neighbor Estimation (NNE), which predicts the outcome for an instance using an average of its nearest neighbors in the input space, with nearness measured by the given dissimilarity measure. Estimation from nearest neighbors, however, is unreliable, specifically having high variance when the neighbors are far apart, which is typically the case when the intrinsic dimension implied by the distance is large. 
To address this issue, a line of research has focused on developing global distance-based (or similaritybased) machine learning methods (Pkkalska & Duin, 2005; Duin & Pękalska, 2012; Balcan et al., 2008a; Cortes et al., 2012), in large part by drawing upon connections to kernel methods (Scholkopf et al., 1999) or directly learning with similarity functions (Balcan et al., 2008a; Cortes et al., 2012; Balcan et al., 2008b; Loosli et al., 2016); we refer the reader in particular to the survey in (Chen et al., 2009a). Among these, the most direct approach treats the data similarity matrix (or transformed dissimilarity matrix) as a kernel Gram matrix, and then uses standard kernel-based methods such as Support Vector Machines (SVM) or kernel ridge regression with this Gram matrix. A key caveat with this approach however is that most similarity (or dissimilarity) measures do not provide a positive-definite (PD) kernel, so that the empirical risk minimization problem is not well-defined, and moreover becomes non-convex (Ong et al., 2004; Lin & Lin, 2003). A line of work has therefore focused on estimating a positive-definite (PD) Gram matrix that merely approximates the similarity matrix. This could be achieved for instance by clipping, or flipping, or shifting eigenvalues of the similarity matrix (Pekalska et al., 2001), or explicitly learning a PD approximation of the similarity matrix (Chen &Ye, 2008; Chen et al., 2009b). Such modifications of the similaritymatrix however often leads to a loss of information; moreover, the enforced PD property is typically guaranteed to hold only on the training data, resulting in an inconsistency between the set of testing and training samples (Chen et al., 2009a) 1. Another common approach is to select a subset of training samples as a held-out representative set, and use distances or similarities to structured inputs in the set as the feature function (Graepel et al., 1999; Pekalska et al., 2001). As we will show, with proper scaling, this approach can be interpreted as a special instance of our framework. Furthermore, our framework provides a more general and richer family of kernels, many of which significantly outperform the representative-set method in a variety of application domains. To address the aforementioned issues, in this paper, we propose a novel general framework that constructs a family of PD kernels from a dissimilarity measure on structured inputs. Our approach, which we call D2KE (fromDistance to Kernel and Embedding), draws from the literature of Random Features (Rahimi & Recht, 2008), but instead of deriving feature maps from an existing kernel for approximating kernel machines, we build novel kernels from a random feature map specifically designed for a given distance measure. The kernel satisfies the property that functions in the corresponding Reproducing Kernel Hilbert Space (RKHS) are Lipschitz-continuous w.r.t. the given distance measure. We also provide a tractable estimator for a function from this RKHS which enjoys much better generalization properties than nearest-neighbor estimation. Our framework produces a feature embedding and consequently a vector representation of each instance that can be employed by any classification and regression models. 
In classification experiments in such disparate domains as strings, time series, and histograms (for texts and images), our proposed framework compares favorably to existing distance-based learning methods in terms of both testing accuracy and computational time, especially when the number of data samples is large and/or the size of structured inputs is large. We highlight our main contributions as follows: • From the perspective of distance kernel learning, we propose for the first time amethodology that constructs a family of PD kernels via Random Features from a given distance measure for structured inputs, and provide theoretical and empirical justifications for this framework. • From the perspective of Random Features (RF) methods, we generalize existing Random Features methods applied only to vector input representations to complex structured inputs of variable sizes. To the best of our knowledge, this is the first time that a generic RFmethod has been used to accelerate kernel machines on structured inputs across a broad range of domains such as time-series, strings, and the histograms. 2 Related Work Distance-Based Kernel Learning. Existing approaches either require strict conditions on the distance function (e.g. that the distance be isometric to the square of the Euclidean distance) (Haasdonk & Bahlmann, 2004; Schölkopf, 2001), or construct empirical PD Gram matrices that do not necessarily generalize to the test samples (Pekalska et al., 2001; Pkkalska & Duin, 2005; Pekalska & Duin, 2006; 2008; Duin & Pękalska, 2012). Haasdonk & Bahlmann (2004) and Schölkopf (2001) provide conditions under which one can obtain a PD kernel through simple transformations of the distance measure, but which are not satisfied for many commonly used dissimilarity measures such as Dynamic TimeWarping, Hausdorff distance, and EarthMover’s distance (Haasdonk &Bahlmann, 1A generalization error bound was provided for the similarity-as-kernel approach in (Chen et al., 2009a), but only for a positive-definite similarity function. 2004). Equivalently, one could also find a Euclidean embedding (also known as dissimilarity representation) approximating the dissimilarity matrix as in Multidimensional Scaling (Pekalska et al., 2001; Pkkalska & Duin, 2005; Pekalska & Duin, 2006; 2008; Duin & Pękalska, 2012) 2. Differently, Loosli et al. (2016) presented a theoretical foundation for an SVM solver in Krein spaces and directly evaluated a solution that uses the original (indefinite) similarity measure. There are also some specific approaches dedicated to building a PD kernel on some structured inputs such as text and time-series (Collins & Duffy, 2002; Cuturi, 2011), that modify a distance function over sequences to a kernel by replacing the minimization over possible alignments into a summation over all possible alignments. This type of kernel, however, results in a diagonal-dominance problem, where the diagonal entries of the kernel Gram matrix are orders of magnitude larger than the off-diagonal entries, due to the summation over a huge number of alignments with a sample itself. Random Features Methods. Interest in approximating non-linear kernel machines using randomized feature maps has surged in recent years due to a significant reduction in training and testing times for kernel based learning algorithms (Dai et al., 2014). 
There are numerous explicit nonlinear random feature maps that have been constructed for various types of kernels, including Gaussian and Laplacian Kernels (Rahimi & Recht, 2008; Wu et al., 2016), intersection kernels (Maji & Berg, 2009), additive kernels Vedaldi & Zisserman (2012), dot product kernels (Kar & Karnick, 2012; Pennington et al., 2015), and semigroup kernels (Mukuta et al., 2018). Among them, the Random Fourier Features (RFF) method, which approximates a Gaussian Kernel function by means of multiplying the input with a Gaussian random matrix, and its fruitful variants have been extensively studied both theoretically and empirically (Sriperumbudur & Szabó, 2015; Felix et al., 2016; Rudi & Rosasco, 2017; Bach, 2017; Choromanski et al., 2018). To accelerate the RFF on input data matrix with high dimensions, a number of methods have been proposed to leverage structured matrices to allow faster matrix computation and less memory consumption (Le et al., 2013; Hamid et al., 2014; Choromanski & Sindhwani, 2016). However, all the aforementioned RFmethods merely consider inputs with vector representations, and compute the RF by a linear transformation that is either a matrix multiplication or an inner product under Euclidean distance metric. In contrast, D2KE takes structured inputs of potentially different sizes and computes the RF with a structured distance metric (typically with dynamic programming or optimal transportation). Another important difference between D2KE and existing RF methods lies in the fact that existing RF work assumes a user-defined kernel and then derives a randomfeature map, while D2KE constructs a new PD kernel through a random feature map and makes it computationally feasible via RF. The table 1 lists the differences between D2KE and existing RF methods. A very recent piece of work (Wu et al., 2018) has developed a kernel and a specific algorithm for computing embeddings of single-variable real-valued time-series. However, despite promising results, this method cannot be applied on discrete structured inputs such as strings, histograms, and graphs. In contrast, we have an unified framework for various structured inputs beyond the limits of (Wu et al., 2018) and provide a general theoretical analysis w.r.t KNN and other generic distance-based kernel methods. 3 Problem Setup We consider the estimation of a target function f : X → R from a collection of samples {(xi, yi)}ni=1, where xi ∈ X is the structured input object, and yi ∈ Y is the output observation associated with the target function f (xi). For instance, in a regression problem, yi ∼ f (xi) + ωi ∈ R for some random noise ωi , and in binary classification, we have yi ∈ {0, 1} with P(yi = 1|xi) = f (xi). We are given a dissimilarity measure d : X ×X → R between input objects instead of a feature representation of x. 2A proof of the equivalence between PD of similarity matrix and Euclidean of dissimilarity matrix can be found in (Borg & Groenen, 1997). Note that the size structured inputs xi may vary widely, e.g. strings with variable lengths or graphs with different sizes. For some of the analyses, we require the dissimilarity measure to be a metric as follows. Assumption 1 (Distance Metric). d : X × X → R is a distance metric, that is, it satisfies (i) d(x1,x2) ≥ 0, (ii) d(x1,x2) = 0 ⇐⇒ x1 = x2, (iii) d(x1,x2) = d(x2,x1), and (iv) d(x1,x2) ≤ d(x1,x3) + d(x3,x2). 
3.1 Function Continuity and Space Covering An ideal feature representation for the learning task is (i) compact and (ii) such that the target function f (x) is a simple (e.g. linear) function of the resulting representation. Similarly, an ideal dissimilarity measure d(x1,x2) for learning a target function f (x) should satisfy certain properties. On one hand, a small dissimilarity d(x1,x2) between two objects should imply small difference in the function values | f (x1) − f (x2)|. On the other hand, we want a small expected distance among samples, so that the data lies in a compact space of small intrinsic dimension. We next build up some definitions to formalize these properties. Assumption 2 (Lipschitz Continuity). For any x1,x2 ∈ X, there exists some constant L > 0 such that | f (x1) − f (x2)| ≤ L d(x1,x2), (1) We would prefer the target function to have a small Lipschitz-continuity constant L with respect to the dissimilarity measure d(., .). Such Lipschitz-continuity alone however might not suffice. For example, one can simply set d(x1,x2) = ∞ for any x1 , x2 to satisfy Eq. equation 1. We thus need the following quantity that measures the size of the space implied by a given dissimilarity measure. Definition 1 (Covering Number). Assuming d is a metric. A δ-cover of X w.r.t. d(., .) is a set E s.t. ∀x ∈ X, ∃xi ∈ E, d(x,xi) ≤ δ. Then the covering number N(δ;X, d) is the size of the smallest δ-cover for Xwith respect to d. Assuming the input domain X is compact, the covering number N(δ;X, d) measures its size w.r.t. the distance measure d. We show how the two quantities defined above affect the estimation error of a Nearest-Neighbor Estimator. 3.2 Effective Dimension and Nearest Neighbor Estimation We extend the standard analysis of the estimation error of k-nearest-neighbor from finite-dimensional vector spaces to any structured input space X, with an associated distance measure d, and a finite covering number N(δ;X, d), by defining the effective dimension as follows. Assumption 3 (Effective Dimension). Let the effective dimension pX,d > 0 be the minimum p satisfying ∃c > 0, ∀δ : 0 < δ < 1, N(δ;X, d) ≤ c ( 1 δ )p . Here we provide an example of effective dimension in case of measuring the space of Multiset. Multiset with Hausdorff Distance. A multiset is a set that allows duplicate elements. Consider two multisets x1 = {ui}Mi=1, x2 = {vj}Nj=1. Let ∆(ui, vj) be a ground distance that measures the distance between two elements ui, vj ∈ V in a set. The (modified) Hausdorff Distance (Dubuisson & Jain, 1994) can be defined as d(x1,x2) := max{ 1 N N∑ i=1 min j∈[M] ∆(ui, vj), 1 M M∑ j=1 min i∈[N ] ∆(vj,ui)} (2) Let N(δ;V,∆) be the covering number of V under the ground distance ∆. Let X denote the set of all sets of size bounded by L. By constructing a covering of X containing any set of size less or equal than L with its elements taken from the covering of V, we have N(δ;X, d) ≤ N(δ;V;∆)L . Therefore, pX,d ≤ L log N(δ;V,∆). For example, ifV := {v ∈ Rp | ‖v‖2 ≤ 1} and ∆ is Euclidean distance, we have N(δ;V,∆) = (1 + 2δ )p and pX,d ≤ Lp. Equippedwith the concept of effective dimension, we can obtain the following bound on the estimation error of the k-Nearest-Neighbor estimate of f (x). Theorem 1. LetVar(y | f (x)) ≤ σ2, and f̂n be the k-Nearest Neighbor estimate of the target function f constructed from a training set of size n. Denote p := pX,d . We have Ex [( f̂n(x) − f (x) )2] ≤ σ 2 k + cL2 ( k n )2/p for some constant c > 0. For σ > 0, minimizing RHS w.r.t. 
the parameter k, we have Ex [( f̂n(x) − f (x) )2] ≤ c2σ 4 p+2 L 2p 2+p ( 1 n ) 2 2+p (3) for some constant c2 > 0. Proof. The proof is almost the same to a standard analysis of k-NN’s estimation error in, for example, (Györfi et al., 2006), with the space partition number replaced by the covering number, and dimension replaced by the effective dimension in Assumption 3. When pX,d is reasonably large, the estimation error of k-NN decreases quite slowly with n. Thus, for the estimation error to be bounded by , requires the number of samples to scale exponentially in pX,d . In the following sections, we develop an estimator f̂ based on a RKHS derived from the distance measure, with a considerably better sample complexity for problems with higher effective dimension. 4 From Distance to Kernel for Structured Inputs We aim to address the long-standing problem of how to convert a distance measure into a positivedefinite kernel. Here we introduce a simple but effective approach D2KE that constructs a family of positive-definite kernels from a given distance measure. Given an structured input domain X and a distance measure d(., .), we construct a family of kernels as k(x, y) := ∫ p(ω)φω(x)φω(y)dω,where φω(x) := exp(−γd(x,ω)), (4) whereω ∈ Ω is a random structured object whose elements could be real-valued time-series, strings, and histograms, p(ω) is a distribution over Ω, and φω(x) is a feature map derived from the distance of x to all random objects ω ∈ Ω. The kernel is parameterized by both p(ω) and γ. Relationship toDistance SubstitutionKernel. An insightful interpretation of the kernel in Equation (4) can be obtained by expressing the kernel in Equation (4) as exp ( −γsoftminp(ω){d(x,ω) + d(ω, y)} ) (5) where the soft minimum function, parameterized by p(ω) and γ, is defined as softminp(ω) f (ω) := − 1 γ log ∫ p(ω)e−γ f (ω)dω. (6) Therefore, the kernel k(x, y) can be interpreted as a soft version of the distance substitution kernel (Haasdonk & Bahlmann, 2004), where instead of substituting d(x, y) into the exponent, it substitutes a soft version of the form softminp(ω){d(x,ω) + d(ω, y)}. (7) Note when γ → ∞, the value of Equation (7) is determined by minω∈Ω d(x,ω) + d(ω, y), which equals d(x, y) if X ⊆ Ω, since it cannot be smaller than d(x, y) by the triangle inequality. In other words, when X ⊆ Ω, k(x, y) → exp(−γd(x, y)) as γ →∞. On the other hand, unlike the distance-substituion kernel, our kernel in Equation (5) is always PD by construction. Algorithm 1 Random Feature Approximation of function in RKHS with the kernel in Equation 4 1: Draw R samples from p(ω) to get {ωj}Rj=1. 2: Set the R-dimensional feature embedding as φ̂j(x) = 1√ R exp(−γd(x,ωj)), ∀ j ∈ [R] 3: Solve the following problem for some µ > 0: ŵ := argmin w∈RR 1 n n∑ i=1 `(wT φ̂(xi), yi) + µ 2 ‖w‖2 4: Output the estimated function f̃R(x) := ŵT φ̂(x). Random Feature Approximation. The reader might have noticed that the kernel in Equation (4) cannot be evaluated analytically in general. However, this does not prohibit its use in practice, so long as we can approximate it via Random Features (RF) (Rahimi & Recht, 2008), which in our case is particularly natural as the kernel itself is defined via a random feature map. 
Thus, our kernel with the RF approximation can not only be used in small problems but also in large-scale settings with a large number of samples, where standard kernel methods with O(n2) complexity are no longer efficient enough and approximation methods, such as Random Features, must be employed (Rahimi & Recht, 2008). Given the RF approximation, one can then directly learn a target function as a linear function of the RF feature map, by minimizing a domain-specific empirical risk. It is worth noting that a recent work (Sinha & Duchi, 2016) that learns to select a set of random features by solving an optimization problem in an supervised setting is orthogonal to our D2KE approach and could be extended to develop a supervised D2KE method. We outline this overall RF based empirical risk minimization for our class of D2KE kernels in Algorithm 1. It is worth pointing out that in line 2 of Algorithm 1 the random feature embeddings are computed by a structured distance measure between the original structured inputs and the generated random structured inputs, followed by the application of the exponent function parameterized by γ. This is in contrast with traditional RF methods that translate the input data matrix into the embedding matrix via a matrix multiplication with random Gaussian matrix followed by a non-linearity. We will provide a detailed analysis of our estimator in Algorithm 1 in Section 5, and contrast its statistical performance to that of K-nearest-neighbor. Relationship to Representative-Set Method. A naive choice of p(ω) relates our approach to the representative-set method (RSM): setting Ω = X, with p(ω) = p(x). This gives us a kernel Equation (4) that depends on the data distribution. One can then obtain a Random-Feature approximation to the kernel in Equation (4) by holding out a part of the training data {x̂j}Rj=1 as samples from p(ω), and creating an R-dimensional feature embedding of the form: φ̂ j(x) := 1√ R exp ( −γd(x, x̂j) ) , j ∈ [R], (8) as in Algorithm 1. This is equivalent to a 1/ √ R-scaled version of the embedding function in the representative-set method (or similarity-as-features method) (Graepel et al., 1999; Pekalska et al., 2001; Pkkalska & Duin, 2005; Pekalska & Duin, 2006; 2008; Chen et al., 2009a; Duin & Pękalska, 2012), where one computes each sample’s similarity to a set of representatives as its feature representation. However, here by interpreting Equation (8) as a random-feature approximation to the kernel in Equation (4), we obtain a much nicer generalization error bound even in the case R→ ∞. This is in contrast to the analysis of RSM in (Chen et al., 2009a), where one has to keep the size of the representative set small (of the order O(n)) in order to have reasonable generalization performance. Effect of p(ω). The choice of p(ω) plays an important role in our kernel. Surprisingly, we found that many “close to uniform” choices of p(ω) in a variety of domains give better performance than for instance the choice of the data distribution p(ω) = p(x) (as in the representative-setmethod). 
Here are some examples from our experiments: i) In the time-series domain with dissimilarity computed via Dynamic Time Warping (DTW), a distribution p(ω) corresponding to random time series of length uniform in ∈ [2, 10], and with Gaussian-distributed elements, yields much better performance than the Representative-Set Method (RSM); ii) In string classification, with edit distance, a distribution p(ω) corresponding to random strings with elements uniformly drawn from the alphabet Σ yields much better performance than RSM; iii) When classifying sets of vectors with the Hausdorff distance in Equation (2), a distribution p(ω) corresponding to random sets of size uniform in ∈ [3, 15] with elements drawn uniformly from a unit sphere yields significantly better performance than RSM. We conjecture two potential reasons for the better performance of the chosen distributions p(ω) in these cases, though a formal theoretical treatment is an interesting subject we defer to future work. Firstly, as p(ω) is synthetic, one can generate unlimited number of random features, which results in a much better approximation to the exact kernel in Equation (4). In contrast, RSM requires held-out samples from the data, which could be quite limited for a small data set. Second, in some cases, even with a small or similar number of random features to RSM, the performance of the selected distribution still leads to significantly better results. For those cases we conjecture that the selected p(ω) generates objects that capture semantic information more relevant to the estimation of f (x), when coupled with our feature map under the dissimilarity measure d(x,ω). 5 Analysis In this section, we analyze the proposed framework from the perspectives of error decomposition. LetH be the RKHS corresponding to the kernel in Equation (4). Let fC := argmin f ∈H E[`( f (x), y)] s.t .‖ f ‖H ≤ C (9) be the population risk minimizer subject to the RKHS norm constraint ‖ f ‖H ≤ C. And let f̂n := argmin f ∈H 1 n n∑ i=1 `( f (xi), yi) s.t .‖ f ‖H ≤ C (10) be the corresponding empirical risk minimizer. In addition, let f̃R be the estimated function from our random feature approximation (Algorithm 1). Then denote the population and empirical risks as L( f ) and L̂( f ) respectively. We have the following risk decomposition L( f̃R) − L( f ) = (L( f̃R) − L( f̂n))︸ ︷︷ ︸ randomf eature + (L( f̂n) − L( fC))︸ ︷︷ ︸ estimation + (L( fC) − L( f ))︸ ︷︷ ︸ approximation In the following, we will discuss the three terms from the rightmost to the leftmost. Function Approximation Error. The RKHS implied by the kernel in Equation (4) is H := f f (x) = m∑j=1 αj k(xj,x), xj ∈ X, ∀ j ∈ [m], m ∈ N , which is a smaller function space than the space of Lipschitz-continuous function w.r.t. the distance d(x1,x2). As we show, any function f ∈ H is Lipschitz-continous w.r.t. the distance d(., .). Proposition 1. LetH be the RKHS corresponding to the kernel in Equation (4) derived from some metric d(., .). For any f ∈ H , | f (x1) − f (x2)| ≤ L f d(x1,x2) where L f = γC. We refer readers to the detailed proof in Appendix A.1. While any f in the RKHS is Lipschitzcontinuous w.r.t. the given distance d(., .), we are interested in imposing additional smoothness via the RKHS norm constraint ‖ f ‖H ≤ C, and by the kernel parameter γ. The hope is that the best function fC within this class approximates the true function f well in terms of the approximation error L( fC) − L( f ). 
The stronger assumption made by the RKHS gives us a qualitatively better estimation error, as discussed below. Estimation Error. Define Dλ as Dλ := ∞∑ j=1 1 1 + λ/µj where {µj}∞j=1 is the eigenvalues of the kernel in Equation (5) and λ is a tuning parameter. It holds that for any λ ≥ Dλ/n, with probability at least 1 − δ, L( f̂n) − L( fC) ≤ c(log 1δ )2C2λ for some universal constant c (Zhang, 2005). Here we would like to set λ as small as possible (as a function of n). By using the following kernel-independent bound: Dλ ≤ 1/λ, we have λ = 1/ √ n and thus a bound on the estimation error L( f̂n) − L( fC) ≤ c(log 1 δ )2C2 √ 1 n . (11) The estimation error is quite standard for a RKHS estimator. It has a much better dependency w.r.t. n (i.e. n−1/2) compared to that of k-nearest-neighbor method (i.e. n−2/(2+pX,d )) especially for higher effective dimension. A more careful analysis might lead to tighter bound on Dλ and also a better rate w.r.t. n. However, the analysis of Dλ for our kernel in Equation (4) is much more difficult than that of typical cases as we do not have an analytic form of the kernel. Random Feature Approximation. Denote L̂(.) as the empirical risk function. The error from RF approximation L( f̃R) − L( f̂n) can be further decomposed as (L( f̃R) − L̂( f̃R)) + (L̂( f̃R) − L̂( f̂n)) + (L̂( f̂n) − L( f̂n)) where the first and third terms can be bounded via the same estimation error bound in Equation (11), as both f̃R and f̂n have RKHS norm bounded by C. Therefore, in the following, we focus only on the second term of empirical risk. We start by analyzing the approximation error of the kernel ∆R(x1,x2) = k̃R(x1,x2) − k(x1,x2) where k̃R(x1,x2) := 1 R R∑ j=1 φ j(x1)φ j(x2). (12) Proposition 2. Let ∆R(x1,x2) = k(x1,x2) − k̃(x1,x2), we have uniform convergence of the form P { max x1,x2∈X |∆R(x1,x2)| > 2t } ≤ 2 ( 12γ t )2pX,d e−Rt 2/2, where pX,d is the effective dimension of X under metric d(., .). In other words, to guarantee |∆R(x1,x2)| ≤ with probability at least 1 − δ, it suffices to have R = Ω ( pX,d 2 log(γ ) + 1 2 log(1 δ ) ) . We refer readers to the detailed proof in Appendix A.2. Proposition 2 gives an approximation error in terms of kernel evaluation. To get a bound on the empirical risk L̂( f̃R) − L̂( f̂n), consider the optimal solution of the empirical risk minimization. By the Representer theorem we have f̂n(x) = 1n ∑ i αik(xi,x) and f̃R(x) = 1n ∑ i α̃i k̃(xi,x). Therefore, we have the following corollary. Corollary 1. To guarantee L̂( f̃R) − L̂( f̂n) ≤ , with probability 1 − δ, it suffices to have R = Ω ( pX,dM2 A2 2 log(γ ) + M 2 A2 2 log(1 δ ) ) . where M is the Lipschitz-continuous constant of the loss function `(., y), and A is a bound on ‖α‖1/n. We refer readers to the detailed proof in Appendix A.3. For most of loss functions, A and M are typically small constants. Therefore, Corollary 1 states that it suffices to have number of Random Features proportional to the effective dimension O(pX,d/ 2) to achieve an approximation error. Combining the three error terms, we can show that the proposed framework can achieve -suboptimal performance. Claim 1. Let f̃R be the estimated function from our random feature approximation based ERM estimator in Algorithm 1, and let f ∗ denote the desired target function. Suppose further that for some absolute constants c1, c2 > 0 (up to some logarithmic factor of 1/ and 1/δ): 1. The target function f ∗ lies close to the population risk minimizer fC lying in the RKHS spanned by the D2KE kernel: L( fC) − L( f ) ≤ /2. 2. 
The number of training samples n ≥ c1 C4/ 2. 3. The number of random features R ≥ c2pX,d/ 2. We then have that: L( f̃R) − L( f ∗) ≤ with probability 1 − δ. 6 Experiments We evaluate the proposed method in four different domains involving time-series, strings, texts, and images. First, we discuss the dissimilarity measures and data characteristics for each set of experiments. Then we introduce comparison among different distance-based methods and report corresponding results. Distance Measures. We have chosen three well-known dissimilarity measures: 1) Dynamic Time Warping (DTW), for time-series (Berndt & Clifford, 1994); 2) Edit Distance (Levenshtein distance), for strings (Navarro, 2001); 3) EarthMover’s distance (Rubner et al., 2000) formeasuring the semantic distance between two Bags of Words (using pretrained word vectors), for representing documents. 4) (Modified) Hausdorff distance (Huttenlocher et al., 1993; Dubuisson & Jain, 1994) for measuring the semantic closeness of two Bags of Visual Words (using SIFT vectors), for representing images. Note that Bag of (Visual)Words in 3) and 4) can also be regarded as a histogram. Since most distance measures are computationally demanding, having quadratic complexity, we adapted or implemented C-MEX programs for them; other codes were written in Matlab. Datasets. For each domain, we selected 4 datasets for our experiments. For time-series data, all are multivariate time-series and the length of each time-series varies from 2 to 205 observations; three are from the UCI Machine Learning repository (Frank & Asuncion, 2010), the other is generated from the IQ (In-phase and Quadrature components) samples from a wireless line-of-sight communication system from GMU. For string data, the size of alphabet is between 4 and 8 and the length of each string ranges from 34 to 198; two of them are from the UCI Machine Learning repository and the other two from the LibSVM Data Collection (Chang & Lin, 2011). For text data, all are chosen partially overlapped with these in (Kusner et al., 2015). The length of each document varies from 9.9 to 117. For image data, all of datasets were derived from Kaggle; we computed a set of SIFTdescriptors to represent each image and the size of SIFT feature vectors of each image varies from 1 to 914. We divided each dataset into 70/30 train and test subsets (if there was no predefined train/test split). Properties of these datasets are summarized in Table 6 in Appendix B. Baselines. We compare D2KE against 5 state-of-the-art baselines, including 1) KNN: a simple yet universal method to apply any distance measure to classification tasks; 2) DSK_RBF (Haasdonk & Bahlmann, 2004): distance substitution kernels, a general framework for kernel construction by substituting a problem specific distance measure in ordinary kernel functions. We use a Gaussian RBF kernel; 3) DSK_ND (Haasdonk & Bahlmann, 2004): another class of distance substitution kernels with negative distance; 4) KSVM (Loosli et al., 2016): learning directly from the similarity (indefinite) matrix followed in the original Krein Space; 5) RSM (Pekalska et al., 2001): building an embedding by computing distances from randomly selected representative samples. Among these baselines, KNN, DSK_RBF, DSK_ND, and KSVM have quadratic complexity O(N2L2) in both the number of data samples and the length of the sequences, while RSM has computational complexity O(NRL2), linear in the number of data samples but still quadratic in the length of the sequence. 
These compare to our method, D2KE, which has complexity O(NRL), linear in both the number of data samples and the length of the sequence. For each method, we search for the best parameters on the training set by performing 10-fold cross validation. For our new method D2KE, since we generate random samples from the distribution, we can use as many as needed to achieve performance close to an exact kernel. We report the best number in the range R = [4, 4096] (typically the larger R is, the better the accuracy). We employ a linear SVM implemented using LIBLINEAR (Fan et al., 2008) for all embedding-basedmethods (RSM andD2KE) and use LIBSVM (Chang & Lin, 2011) for precomputed dissimilairty kernels (DSK_RBF, DSK_ND, and KSVM). More details of experimental setup are provided in Appendix B. Results. As shown in Tables 2, 3, 4, and 5, D2KE can consistently outperform or match the baseline methods in terms of classification accuracy while requiring far less computation time. There are several observations worth making here. First, D2KE performs much better than KNN, supporting our claim that D2KE can be a strong alternative to KNN across applications. Second, compared to the two distance substitution kernels DSK_RBF and DSK_ND and the KSVM method operating directly on indefinite similarity matrix, our method can achieve much better performance, suggesting that a representation induced from a truly PD kernel makes significantly better use of the data than indefinite kernels. Among all methods, RSM is closest to our method in terms of practical construction of the feature matrix. However, the random objects (time-series, strings, or sets) sampled by D2KE perform significantly better, as we discussed in section 4. More detailed discussions of the experimental results for each domain are given in Appendix C. 7 Conclusion and Future Work In thiswork, we have proposed a general framework for deriving a positive-definite kernel and a feature embedding function from a given dissimilarity measure between input objects. The framework is especially useful for structured input domains such as sequences, time-series, and sets, where many well-established dissimilarity measures have been developed. Our framework subsumes at least two existing approaches as special or limiting cases, and opens up what we believe will be a useful new direction for creating embeddings of structured objects based on distance to random objects. A promising direction for extension is to develop such distance-based embeddings within a deep architecture to support use of structured inputs in an end-to-end learning system. A Proof of Theorem 1 and Theorem 2 A.1 Proof of Theorem 1 Proof. Note the function g(t) = exp(−γt) is Lipschitz-continuous with Lipschitz constant γ. Therefore, | f (x1) − f (x2)| = |〈 f , φ(x1) − φ(x2)〉| ≤ ‖ f ‖H ‖φ(x1) − φ(x2)‖H = ‖ f ‖H √∫ ω p(ω)(φω(x1) − φω(x2))2dω ≤ ‖ f ‖H √∫ ω p(ω)γ2 |d(x1,ω) − d(x2,ω)|2dω ≤ γ‖ f ‖H √∫ ω p(ω)d(x1,x2)2dω ≤ γ‖ f ‖Hd(x1,x2) ≤ γCd(x1,x2) A.2 Proof of Theorem 2 Proof. Our goal is to bound the magnitude of ∆R(x1,x2) = k̃R(x1,x2) − k(x1,x2). Since E[∆R(x1,x2)] = 0 and |∆R(x1,x2)| ≤ 1, from Hoefding’s inequality, we have P {|∆R(x1,x2)| ≥ t} ≤ 2 exp(−Rt2/2) a given input pair (x1,x2). To get a unim bound that holds ∀(x1,x2) ∈ X ×X, we find an -covering E of X w.r.t. d(., .) of size N( ,X, d). Applying union bound over the -covering E for x1 and x2, we have P { max x′1∈E,x′2∈E |∆R(x′1,x′2)| > t } ≤ 2|E |2 exp(−Rt2/2). 
Then, by the definition of $E$, we have $|d(x_1,\omega) - d(x_1',\omega)| \leq d(x_1,x_1') \leq \epsilon$. Together with the fact that $\exp(-\gamma t)$ is Lipschitz-continuous with parameter $\gamma$ for $t \geq 0$, we have $|\phi_\omega(x_1) - \phi_\omega(x_1')| \leq \gamma\epsilon$, and thus
$$|\tilde{k}_R(x_1,x_2) - \tilde{k}_R(x_1',x_2')| \leq 3\gamma\epsilon, \qquad |k(x_1,x_2) - k(x_1',x_2')| \leq 3\gamma\epsilon$$
for $\gamma\epsilon$ chosen to be $\leq 1$. This gives us
$$|\Delta_R(x_1,x_2) - \Delta_R(x_1',x_2')| \leq 6\gamma\epsilon. \tag{14}$$
Combining Equation 13 and Equation 14, we have
$$P\left\{\max_{x_1,\,x_2 \in X} |\Delta_R(x_1,x_2)| > t + 6\gamma\epsilon\right\} \leq 2\left(\frac{2}{\epsilon}\right)^{2p_{X,d}} \exp(-Rt^2/2). \tag{15}$$
Choosing $\epsilon = t/(6\gamma)$ yields the result.

A.3 Proof of Corollary 1

Proof. First of all, we have
$$\frac{1}{n}\sum_{i=1}^n \ell\!\left(\frac{1}{n}\sum_{j=1}^n \tilde{\alpha}_j\, \tilde{k}(x_j,x_i),\; y_i\right) \leq \frac{1}{n}\sum_{i=1}^n \ell\!\left(\frac{1}{n}\sum_{j=1}^n \alpha_j\, \tilde{k}(x_j,x_i),\; y_i\right)$$
by the optimality of $\{\tilde{\alpha}_j\}_{j=1}^n$ w.r.t. the objective using the approximate kernel. Then we have
$$
\begin{aligned}
\hat{L}(\tilde{f}_R) - \hat{L}(\hat{f}_n) &\leq \frac{1}{n}\sum_{i=1}^n \left[\ell\!\left(\frac{1}{n}\sum_{j=1}^n \alpha_j\, \tilde{k}(x_j,x_i),\; y_i\right) - \ell\!\left(\frac{1}{n}\sum_{j=1}^n \alpha_j\, k(x_j,x_i),\; y_i\right)\right] \\
&\leq \frac{M\|\alpha\|_1}{n}\left(\max_{x_1,x_2 \in X} |\tilde{k}(x_1,x_2) - k(x_1,x_2)|\right) \leq M A \left(\max_{x_1,x_2 \in X} |\tilde{k}(x_1,x_2) - k(x_1,x_2)|\right)
\end{aligned}
$$
where $A$ is a bound on $\|\alpha\|_1/n$. Therefore, to guarantee $\hat{L}(\tilde{f}_R) - \hat{L}(\hat{f}_n) \leq \epsilon$, we need $\max_{i,j \in [n]} |\Delta_R(x_i,x_j)| \leq \hat{\epsilon} := \epsilon/(MA)$. Then applying Theorem 2 leads to the result.

B General Experimental Settings

General Setup. For each method, we search for the best parameters on the training set by performing 10-fold cross-validation. Following (Haasdonk & Bahlmann, 2004), we use an exact RBF kernel for DSK_RBF while choosing the squared distance for DSK_ND. We use the Matlab implementation provided by Loosli et al. (2016) to run the experiments for KSVM. Similarly, we adopted a simple method, random selection, to obtain R = [4, 512] data samples as the representative set for RSM (Pekalska et al., 2001). For our new method, D2KE, since we generate random samples from the distribution, we can use as many as needed to achieve performance close to that of an exact kernel; we report the best number in the range R = [4, 4096] (typically, the larger R is, the better the accuracy). We employ a linear SVM implemented using LIBLINEAR (Fan et al., 2008) for all embedding-based methods (RSM and D2KE) and use LIBSVM (Chang & Lin, 2011) for the precomputed dissimilarity kernels (DSK_RBF, DSK_ND, and KSVM). All datasets were collected from popular public websites for Machine Learning and Data Science research, including the UCI Machine Learning repository (Frank & Asuncion, 2010), the LibSVM Data Collection (Chang & Lin, 2011), and Kaggle Datasets, except for one time-series dataset, IQ, which was shared by researchers from George Mason University. Table 6 lists the detailed properties of the datasets from the four domains. All computations were carried out on a DELL dual-socket system with Intel Xeon processors at 2.93 GHz, for a total of 16 cores and 250 GB of memory, running the SUSE Linux operating system. To accelerate the computation of all methods, we used multithreading with 12 threads in total for the various distance computations in all experiments.

C Detailed Experimental Results on Time-Series, Strings, and Images

C.1 Results on multivariate time-series

Setup. For time-series data, we employed the most successful distance measure, DTW, for all methods. For all datasets, a Gaussian distribution was found to be applicable, parameterized by its bandwidth σ. The best values for σ and for the length of the random time-series were searched in the ranges [1e-3, 1e3] and [2, 50], respectively.

Results. As shown in Table 2, D2KE consistently outperforms or matches all other baselines in terms of classification accuracy while requiring far less computation time for multivariate time-series.
The first interesting observation is that our method performs substantially better than KNN, often by a large margin; e.g., D2KE achieves 26.62% higher accuracy than KNN on IQ_radio. This is because KNN is sensitive to the data noise common in real-world applications like IQ_radio, and has notoriously poor performance on high-dimensional datasets like Auslan. Moreover, compared to the two distance substitution kernels, DSK_RBF and DSK_ND, and to KSVM operating directly on the indefinite similarity matrix, our method achieves much better performance, suggesting that a representation induced from a truly p.d. kernel makes significantly better use of the data than indefinite kernels. Among all methods, RSM is closest to ours in terms of the practical construction of the feature matrix. However, the random time-series sampled by D2KE perform significantly better, as we discussed in Section 4. First, RSM simply chooses a subset of the original data points and computes the distances between the whole dataset and this representative set; this may suffer significantly from noise or redundant information in the time-series. In contrast, our method samples short random sequences that can both denoise and find the patterns in the data. Second, the number of data points that can be sampled is limited by the total size of the data, while the number of possible random sequences drawn from the distribution is unlimited, making the feature space much richer. Third, RSM may incur significant computational cost for long time-series, due to its quadratic complexity in the length.

C.2 Results on strings

Setup. For string data, there are various well-known edit distances. Here, we choose the Levenshtein distance as our distance measure, since it can capture global alignments of the underlying strings. We first compute the alphabet from the original data and then uniformly sample characters from this alphabet to generate random strings. We search for the best value of γ in the range [1e-5, 1], and for the length of the random strings in the range [2, 50].

Results. As shown in Table 3, D2KE consistently performs better than or similarly to the other distance-based baselines. Unlike in the previous experiments, where DTW is not a distance metric, the Levenshtein distance is indeed a distance metric; this helps improve the performance of our baselines. However, D2KE still offers a clear advantage over the baselines. It is interesting to note that the performance of DSK_RBF is quite close to our method's, which may be because DSK_RBF with the Levenshtein distance produces a c.p.d. kernel that can essentially be converted into a p.d. kernel. Notice that on relatively large datasets our method, D2KE, achieves better performance, often with far less computation than the baselines with quadratic complexity in both the number and the length of data samples. For instance, on mnist-str8, D2KE obtains higher accuracy with an order of magnitude less runtime than DSK_RBF and DSK_ND, and two orders of magnitude less than KSVM, which incurs higher computational costs both for kernel matrix construction and for eigendecomposition.
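For reference, here is a self-contained sketch of the two ingredients of this string setup, the Levenshtein distance and uniform random-string generation; the alphabet and length range below are illustrative, not the values used in the experiments.

```python
import numpy as np

def levenshtein(s, t):
    """Edit distance with unit-cost insertions, deletions, and substitutions."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, start=1):
        cur = [i]
        for j, ct in enumerate(t, start=1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (cs != ct)))    # substitution
        prev = cur
    return prev[-1]

def random_strings(alphabet, min_len, max_len, R, rng):
    """R random strings: characters uniform over the alphabet, lengths uniform."""
    return ["".join(rng.choice(list(alphabet), size=rng.integers(min_len, max_len + 1)))
            for _ in range(R)]

rng = np.random.default_rng(0)
print(levenshtein("kitten", "sitting"))      # 3
print(random_strings("ACGT", 2, 8, 3, rng))  # three short random strings
```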
C.3 Results on sets of Word Vectors for text

Setup. For text data, following (Kusner et al., 2015), we use the Earth Mover's Distance as our distance measure between two documents, since this distance has recently demonstrated strong performance when combined with KNN for document classification. We first compute the Bag of Words for each document and represent each document as a histogram of word vectors, where Google's pretrained word vectors of dimension 300 are used. We generate random documents whose word vectors are uniformly sampled from the unit sphere of the embedding vector space $\mathbb{R}^{300}$. We search for the best value of γ in the range [1e-2, 1e1], and for the length of the random documents in the range [3, 21].

Results. As shown in Table 4, D2KE outperforms the other baselines on all four datasets. First of all, all distance-based kernel methods perform better than KNN, illustrating the effectiveness of SVM over KNN on text data. Interestingly, D2KE also performs significantly better than the other baselines by a notable margin, in large part because document classification is mainly a "topic" learning problem, which our random documents of short length may fit particularly well. On the datasets with many and longer documents, D2KE achieves about one order of magnitude speedup compared with the other exact kernel/similarity methods, thanks to its use of random features.

C.4 Results on sets of SIFT descriptors for images

Setup. For image data, following (Pekalska et al., 2001; Haasdonk & Bahlmann, 2004), we use the modified Hausdorff distance (MHD) (Dubuisson & Jain, 1994) as our distance measure between images, since this distance has shown excellent performance in the literature (Sezgin & Sankur, 2004; Gao et al., 2012). We first applied the open-source OpenCV library to generate a sequence of SIFT descriptors of dimension 128 for each image, and then used MHD to compute the distance between sets of SIFT descriptors. We generate random images whose SIFT descriptors are uniformly sampled from the unit sphere of the embedding vector space $\mathbb{R}^{128}$. We search for the best value of γ in the range [1e-3, 1e1], and for the length of the random SIFT-descriptor sequence in the range [3, 15].

Results. As shown in Table 5, D2KE outperforms or matches the other baselines in all cases. First, D2KE performs best in three cases, while DSK_RBF is best on the dataset decor. This may be because the underlying SIFT features are not discriminative enough, so random features cannot quickly find good patterns in the images. Nevertheless, the quadratic complexity of DSK_RBF, DSK_ND, and KSVM in both the number of images and the length of the SIFT-descriptor sequences makes them hard to scale to large data. Interestingly, D2KE still performs much better than KNN and RSM, which again supports our claim that D2KE can be a strong alternative to KNN and RSM across applications.
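The modified Hausdorff distance used in this section has a simple closed form (Dubuisson & Jain, 1994): the larger of the two directed mean nearest-neighbor distances between the sets. A small numpy sketch follows; the array shapes are illustrative.

```python
import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff distance between two sets of feature vectors,
    A: (n_a, d) and B: (n_b, d); max of the two directed mean distances."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (n_a, n_b) pairwise
    return max(D.min(axis=1).mean(), D.min(axis=0).mean())

# e.g. two images represented as sets of 128-d SIFT descriptors:
rng = np.random.default_rng(0)
img1, img2 = rng.normal(size=(40, 128)), rng.normal(size=(55, 128))
print(modified_hausdorff(img1, img2))
```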
1. What are the strengths and weaknesses of the proposed approach in the paper "D2KE: From Distance to Kernel and Embedding"? 2. How does the reviewer assess the novelty and originality of the paper, particularly regarding its contributions to the field of machine learning? 3. What are the concerns raised by the reviewer regarding the experimental evaluation and validation of the method proposed in the paper? 4. How does the reviewer evaluate the clarity and quality of the writing in the paper, including the references and implementation details provided? 5. Are there any questions or suggestions raised by the reviewer regarding the comparisons made between different methods in the paper, including the significance of the results and the choice of datasets used?
Review
Review: D2KE: From Distance to Kernel and Embedding
Quality: average
Originality: original
Significance: relevant for ICLR
Pros: interesting idea -- see detailed comments
Cons: some technical issues; validity of the results unclear -- see detailed comments

I have already seen this paper as a reviewer at ICML, and I am happy to see that the authors have improved it. Although the core idea is interesting and novel, the paper still raises a number of questions (see below). Besides some technical issues, I am (again) unhappy with the experimental evaluation; in particular, there are strange mismatches between the results reported in the ICML submission and this one. The method now looks better in the reported results, basically by removing an ICML-reported competing method and some additional datasets. Further, some information is not provided (parametrization, standard deviation of the results, significance of the differences). Some results have changed unexpectedly, and there is still no comparison on some of the more challenging datasets provided by other authors in the field.

Comments:
- The paper has a number of typos in the references; please check them carefully and also complete missing information.
- I understand that you have to promote your approach, but judging from the title your objective is to go from distances to kernels. Although you go a different way (which is fine), this can also be done directly using the concepts proposed by Gisbrecht et al., 'Metric and non-metric proximity transformations at linear costs', Neurocomputing, which you should at least take into account.
- To round out your related-work section, you should refer to a recent review by Tino et al. about indefinite learning (Neural Computation).
- 'A line of work has therefore focused on estimating a positive-definite (PD) ...' -- yes, and by combining the approach from the Gisbrecht paper with that of Loosli you can have, e.g., an SVM in the Krein space without restriction to a single Hilbert space, and with a simple and clear out-of-sample extension to test data. So basically there are ways to stay with an indefinite kernel (or non-metric dissimilarity) without losing any performance and without needing to embed it into a vector space. You could point to this option in your introduction.
- And as mentioned by a number of other authors (Pekalska, Tino, ...), there may be good reasons for not going to a PD kernel, because one may still lose information.
- 'distance kernel learning' -- this is a bit of a mismatch: you either have a kernel or a distance, and you may define a distance based on a kernel ...
- 'This type of kernel, however, results in a diagonal-dominance problem, where the diagonal entries of the kernel Gram matrix are orders of magnitude larger than the off-diagonal entries' -- well, yes, but could this not be solved by a renormalization of the kernel matrix (assuming the diagonal elements are at least positive) such that finally all diagonal entries are 1?
- Eq. 4 is not such a new idea. Many authors have already plugged distances into an exp to make them similarities, but this changes the data representation (this is widely discussed by Pekalska). And if d is non-metric, Eq. 4 may not even provide the 'correct' mathematical formulation. That may not be a problem, e.g., if one asks: can I get any kind of similar kernel for my given dissimilarities (I think there is old work around this by G. Wahba et al.), e.g., by learning a proxy kernel such that the similarities of the kernel are close to the dissimilarities.
One may also deal with the dissimilarities more directly; see, e.g., the work of Yiming Ying on learning with dissimilarities. But if I want to keep the classical formulation, namely that from an inner product based on K I can define a dissimilarity, and this should be the same as the original one used to obtain K, then Eq. 4 may not be so nice (--> see double centering in the book of Pekalska).
- ', our kernel in Equation (5) is always PD by' -- Eq. 5 or Eq. 4? Later on, Eq. 5 is never used again; cf. the Algorithm 1 caption/title.
- If I understand correctly, in Alg. 1, l(x,y) refers to a loss function (?) and y to a label, neither of which is introduced anywhere. But then your approach is supervised, whereas classical RFF are unsupervised (and hence more generic). Besides, I may not have any label information when going from dissimilarities to similarities. If this is a restriction of your method, you should reflect it by adding, e.g., '... in supervised learning' to the title.
- 'Since most distance measures are computationally demanding, having quadratic complexity, we adapted or implemented C-MEX programs for them; other codes were written in Matlab.' -- software implementation details are not relevant here; please remove.
- Please provide the full cross-validation information, including mean/std-dev, together with a significance test.
- 'For our new method D2KE, since we generate random samples from the distribution, we can use as many as needed to achieve performance close to an exact kernel. We report the best number in the range R = [4, 4096] (typically the larger R is, the better the accuracy).' -- if so, I hope you tuned R on the training data and the reported values are from the test data! Please clarify. The other methods may also have meta-parameters, e.g. C in the case of a classical SVM: which values were used and how were they obtained?
- Classical DTW is not defined for multivariate time-series; what did you use?
- Your result tables have changed compared to the ICML paper (also, one method, GDK_LDE by Pekalska, has vanished); please explain! In particular, the runtime of RSM has changed nonlinearly between the two papers, and the overall results are now a bit more in favor of your method (Table 1). The dataset 'mnist-str4' has vanished as well. For the image data there is also a strange change: in the ICML paper GDK_LDE was best, and now, because this method no longer appears, your approach looks best. As the runtimes change so much between the two submissions, I wonder why you still report them. Basically, the O-notation already tells us that your method is linear (for reasonably large N and not too large R) and that most others are O(N^2). To ensure reproducibility of your results, I ask you to provide the respective code, e.g. on GitHub (this can be done anonymously).
- Repeating myself from the last review: there is a lot of work showing that making a kernel PSD may not be a good idea. You provide experiments for a small number of datasets where your kernel is now PSD, but what about the other data (where, e.g., Pekalska and followers showed that making them PSD is bad)? Does your approach solve this, or do we end up with an approach that is not very performant (in accuracy) on the hard/crucial datasets?
- Why do you not use some of the datasets provided by Pekalska (SIMBAD EU project, still on the web; some may also be found via Loosli) to have a more realistic comparison?
- Once more, the number of datasets used in the evaluation is not particularly large.
- Still a few typos, e.g. 'dissimilairty' --> use a spell checker.
ICLR
Title Skip Connections Eliminate Singularities

Abstract Skip connections made the training of very deep networks possible and have become an indispensable component in a variety of neural architectures. A completely satisfactory explanation for their success remains elusive. Here, we present a novel explanation for the benefits of skip connections in training very deep networks. The difficulty of training deep networks is partly due to the singularities caused by the non-identifiability of the model. Several such singularities have been identified in previous works: (i) overlap singularities caused by the permutation symmetry of nodes in a given layer, (ii) elimination singularities corresponding to the elimination, i.e. consistent deactivation, of nodes, (iii) singularities generated by the linear dependence of the nodes. These singularities cause degenerate manifolds in the loss landscape that slow down learning. We argue that skip connections eliminate these singularities by breaking the permutation symmetry of nodes, by reducing the possibility of node elimination, and by making the nodes less linearly dependent. Moreover, for typical initializations, skip connections move the network away from the "ghosts" of these singularities and sculpt the landscape around them to alleviate the learning slow-down. These hypotheses are supported by evidence from simplified models, as well as from experiments with deep networks trained on real-world datasets.

1 INTRODUCTION

Skip connections are extra connections between nodes in different layers of a neural network that skip one or more layers of nonlinear processing. The introduction of skip (or residual) connections has substantially improved the training of very deep neural networks (He et al., 2015; 2016; Huang et al., 2016; Srivastava et al., 2015). Despite informal intuitions put forward to motivate skip connections, a clear understanding of how these connections improve training has been lacking. Such understanding is invaluable both in its own right and for the possibilities it might offer for further improvements in training very deep neural networks. In this paper, we attempt to shed light on this question. We argue that skip connections improve the training of deep networks partly by eliminating the singularities inherent in the loss landscapes of deep networks. These singularities are caused by the non-identifiability of subsets of parameters when nodes in the network either get eliminated (elimination singularities), collapse into each other (overlap singularities) (Wei et al., 2008), or become linearly dependent (linear dependence singularities). Saad & Solla (1995); Amari et al. (2006); Wei et al. (2008) identified the elimination and overlap singularities and showed that they significantly slow down learning in shallow networks; Saxe et al. (2013) showed that linear dependence between nodes arises generically in randomly initialized deep linear networks and becomes more severe with depth. We show that skip connections eliminate these singularities, and we provide evidence suggesting that they improve training partly by ameliorating the learning slow-down caused by the singularities.

2 RESULTS

2.1 SINGULARITIES IN FULLY-CONNECTED LAYERS AND HOW SKIP CONNECTIONS BREAK THEM

In this work, we focus on three types of singularity that arise in fully-connected layers: elimination and overlap singularities (Amari et al., 2006; Wei et al., 2008), and linear dependence singularities (Saxe et al., 2013).

Figure 1: Singularities in a fully connected layer and how skip connections break them. (a) In elimination singularities, zero incoming weights, J = 0, eliminate units and make the outgoing weights, w, non-identifiable (red). Skip connections (blue) ensure that units are active at least sometimes, so the outgoing weights are identifiable (green). The reverse holds for zero outgoing weights, w = 0: skip connections recover identifiability for J. (b) In overlap singularities, overlapping incoming weights, Ja = Jb, make the outgoing weights non-identifiable; skip connections again break the degeneracy. (c) In linear dependence singularities, a subset of the hidden units become linearly dependent, making their outgoing weights non-identifiable; skip connections break the linear dependence.

The linear dependence singularities can arise exactly only in linear networks, whereas the elimination and overlap singularities can arise in non-linear networks as well. These singularities are all related to the non-identifiability of the model. The Hessian of the loss function becomes singular at these singularities (Supplementary Note 1), hence they are sometimes also called degenerate or higher-order saddles (Anandkumar & Ge, 2016). Elimination singularities arise when a hidden unit is effectively killed, e.g. when its incoming (or outgoing) weights become zero (Figure 1a). This makes the outgoing (or incoming) connections of the unit non-identifiable. Overlap singularities are caused by the permutation symmetry of the hidden units at a given layer, and they arise when two units become identical, e.g. when their incoming weights become identical (Figure 1b). In this case, the outgoing connections of the units are no longer identifiable individually (only their sum is identifiable). Linear dependence singularities arise when a subset of the hidden units in a layer become linearly dependent (Figure 1c). Again, the outgoing connections of these units are no longer identifiable individually (only a linear combination of them is identifiable).

How do skip connections eliminate these singularities? Skip connections between adjacent layers break the elimination singularities by ensuring that the units are active at least for some inputs, even when their adjustable incoming or outgoing connections become zero (Figure 1a; right). They eliminate the overlap singularities by breaking the permutation symmetry of the hidden units at a given layer (Figure 1b; right). Thus, even when the adjustable incoming weights of two units become identical, the units do not collapse into each other, since their distinct skip connections still disambiguate them. They also eliminate the linear dependence singularities by adding linearly independent (in fact, orthogonal in most cases) inputs to the units (Figure 1c; right).

2.2 WHY ARE SINGULARITIES HARMFUL FOR LEARNING?

The effect of elimination and overlap singularities on gradient-based learning has been analyzed previously for shallow networks (Amari et al., 2006; Wei et al., 2008). Figure 2a shows the simplified two-hidden-unit model analyzed in Wei et al. (2008) and its reduction to a two-dimensional system in terms of the overlap and elimination variables, h and z.
Both types of singularity cause degenerate manifolds in the loss landscape, represented by the lines h = 0 and z = ±1 in Figure 2b, corresponding to the overlap and elimination singularities respectively. The elimination manifolds divide the overlap manifolds into stable and unstable segments. According to the analysis presented in Wei et al. (2008), these manifolds give rise to two types of plateaus in the learning dynamics: on-singularity plateaus, which are caused by the random-walk behavior of stochastic gradient descent (SGD) along a stable segment of the overlap manifolds (thick segment on the h = 0 line in Figure 2b) until it escapes the stable segment, and (more relevant in practical cases) near-singularity plateaus, which manifest themselves as a general slowing of the dynamics near the overlap manifolds, even when the initial location is not within the basin of attraction of the stable segment. Although this analysis only holds for two hidden units, for higher-dimensional cases it suggests that overlaps between hidden units significantly slow down learning along the overlap directions. These overlap directions become more numerous as the number of hidden units increases, thus reducing the effective dimensionality of the model. We provide empirical evidence for this claim below.

As mentioned earlier, linear dependence singularities arise exactly only in linear networks. However, we expect them to hold approximately, and thus have consequences for learning, in the non-linear case as well. Figure 2d-e shows an example in a toy single-layer nonlinear network: learning along a linear dependence manifold, represented by m here, is much slower than learning along other directions, e.g. the norm of the incoming weight vector Jc in the example shown here. Saxe et al. (2013) demonstrated that this linear dependence problem arises generically, and becomes worse with depth, in randomly initialized deep linear networks. Because learning is significantly slowed down along linear dependence directions compared to other directions, these singularities effectively reduce the dimensionality of the model, similarly to the overlap manifolds.

2.3 PLAIN NETWORKS ARE MORE DEGENERATE THAN NETWORKS WITH SKIP CONNECTIONS

To investigate the relationship between degeneracy, training difficulty, and skip connections in deep networks, we conducted several experiments with deep fully-connected networks. We compared three different architectures. (i) The plain architecture is a fully-connected feedforward network with no skip connections, described by the equation
$$x_{l+1} = f(W_l x_l + b_{l+1}), \qquad l = 0, \ldots, L-1$$
where $f$ is the ReLU nonlinearity and $x_0$ denotes the input layer. (ii) The residual architecture introduces identity skip connections between adjacent layers (note that we do not allow skip connections from the input layer):
$$x_1 = f(W_0 x_0 + b_1), \qquad x_{l+1} = f(W_l x_l + b_{l+1}) + x_l, \qquad l = 1, \ldots, L-1$$
(iii) The hyper-residual architecture adds skip connections between each layer and all layers above it:
$$x_1 = f(W_0 x_0 + b_1), \qquad x_2 = f(W_1 x_1 + b_2) + x_1, \qquad x_{l+1} = f(W_l x_l + b_{l+1}) + x_l + \frac{1}{l-1}\sum_{k=1}^{l-1} Q_k x_k, \qquad l = 2, \ldots, L-1$$
The skip connectivity from the immediately preceding layer is always the identity matrix, whereas the remaining skip connections $Q_k$ are fixed, but allowed to be different from the identity (see Supplementary Note 2 for further details).
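To make the three architectures concrete, here is a minimal numpy sketch of the forward passes defined above (our own illustrative code; the actual experiments additionally use a softmax output layer and trained parameters, and the indexing of the fixed matrices `Qs` relative to $Q_k$ is our assumption).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x0, Ws, bs, Qs=None, mode="plain"):
    """Forward pass through L fully-connected hidden layers.
    mode: 'plain', 'residual' (identity skips between adjacent layers, none
    from the input), or 'hyper' (additionally averages fixed skips Qs[k]
    from all earlier hidden layers)."""
    x = relu(Ws[0] @ x0 + bs[0])                 # x_1: no skip from the input
    xs = [x]
    for l in range(1, len(Ws)):
        pre = relu(Ws[l] @ x + bs[l])            # f(W_l x_l + b_{l+1})
        if mode == "plain":
            x = pre
        elif mode == "residual":
            x = pre + x                          # identity skip from x_l
        elif mode == "hyper":
            skips = sum(Qs[k] @ xs[k] for k in range(len(xs) - 1))
            x = pre + x + skips / max(len(xs) - 1, 1)  # + (1/(l-1)) sum_k Q_k x_k
        xs.append(x)
    return x
```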
This architecture is inspired by the DenseNet architecture (Huang et al., 2016). In both architectures, each layer projects skip connections to the layers above it. However, in the DenseNet architecture the skip connectivity matrices are learned, whereas in the hyper-residual architecture considered here they are fixed. In the experiments of this subsection, the networks all had L = 20 hidden layers (followed by a softmax layer at the top) and n = 128 hidden units (ReLU) in each hidden layer. Hence, the networks had the same total number of parameters. The biases were initialized to 0 and the weights were initialized with the Glorot normal initialization scheme (Glorot & Bengio, 2010). The networks were trained on the CIFAR-100 dataset (with coarse labels) using the Adam optimizer (Kingma & Ba, 2014) with learning rate 0.0005 and a batch size of 500. Because we are mainly interested in understanding how singularities, and their removal, change the shape of the loss landscape and consequently affect the optimization difficulty, we primarily monitor the training accuracy rather than the test accuracy in the results reported below.

To measure degeneracy, we estimated the eigenvalue density of the Hessian during training for the three different network architectures. The probability of small eigenvalues in the eigenvalue density reflects the dimensionality of the degenerate parameter space. To estimate this eigenvalue density in our ∼1M-dimensional parameter spaces, we first estimated the first four moments of the spectral density using the method of Skilling (Skilling, 1989) and fit the estimated moments with a flexible mixture density model (see Supplementary Note 3 for details) consisting of a narrow Gaussian component to capture the bulk of the spectral density and a skew-Gaussian density to capture the tails (see Figure 3d for example fits). From the fitted mixture density, we estimated the fraction of degenerate eigenvalues and the fraction of negative eigenvalues during training. We validated our main results, as well as our mixture model for the spectral density, with smaller networks with ∼14K parameters, for which we could calculate all eigenvalues of the Hessian numerically (Supplementary Note 4). For these smaller networks, the mixture model slightly underestimated the fraction of degenerate eigenvalues and overestimated the fraction of negative eigenvalues; however, there was a highly significant linear relationship between the actual and estimated fractions.

Figure 3b shows the evolution of the fraction of degenerate eigenvalues during training. A large value at a particular point during optimization indicates a more degenerate model. By this measure, the hyper-residual architecture is the least degenerate and the plain architecture is the most degenerate. We observe the opposite pattern for the fraction of negative eigenvalues (Figure 3c). The differences between the architectures are more prominent early on in training, and there is an indication of a crossover later during training, with less degenerate models early on becoming slightly more degenerate later on as the training performance starts to saturate (Figure 3b). Importantly, the hyper-residual architecture has the highest training speed and the plain architecture the lowest (Figure 3a), consistent with our hypothesis that the degeneracy of a model increases the training difficulty and skip connections reduce the degeneracy.
2.4 TRAINING ACCURACY IS RELATED TO DISTANCE FROM DEGENERATE MANIFOLDS

To establish a more direct relationship between the elimination, overlap, and linear dependence singularities discussed earlier on the one hand, and model degeneracy and training difficulty on the other, we exploited the natural variability in training the same model caused by the stochasticity of SGD and random initialization. Specifically, we trained 100 plain networks (30 hidden layers, 128 neurons per layer) on CIFAR-100 using different random initializations and random mini-batch selection. Training performance varied widely across runs. We compared the best 10 and the worst 10 runs (measured by mean accuracy over 100 training epochs, Figure 4a). The worst networks were more degenerate (Figure 4b); they were significantly closer to elimination singularities, as measured by the average l2-norm of the incoming weights of their hidden units (Figure 4c); they were significantly closer to overlap singularities, as measured by the mean correlation between the incoming weights of their hidden units (Figure 4d); and their hidden units were significantly more linearly dependent, as measured by the mean variance explained by the top three eigenmodes of the covariance matrices of the hidden units in the same layer (Figure 4e). These three diagnostics can be sketched in a few lines, as shown below.
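The following is a minimal numpy sketch of the three distance-from-singularity measures for a single layer; the function name and array shapes are our own, with `W` a layer's incoming weight matrix and `H` its hidden-unit activities on a batch of inputs.

```python
import numpy as np

def singularity_diagnostics(W, H):
    """Proxies for distance from the three singularity types, for one layer.
    W: (n_units, n_in) incoming weights; H: (n_samples, n_units) activities."""
    elim = np.linalg.norm(W, axis=1).mean()               # mean incoming-weight norm
    C = np.corrcoef(W)                                    # unit-unit weight correlations
    overlap = C[np.triu_indices_from(C, k=1)].mean()      # mean pairwise correlation
    evals = np.sort(np.linalg.eigvalsh(np.cov(H, rowvar=False)))[::-1]
    lin_dep = evals[:3].sum() / evals.sum()               # variance in top 3 eigenmodes
    return elim, overlap, lin_dep
```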
2.5 BENEFITS OF SKIP CONNECTIONS AREN'T EXPLAINED BY GOOD INITIALIZATION ALONE

To investigate whether the benefits of skip connections can be explained in terms of favorable initialization of the parameters, we introduced a malicious initialization scheme for the residual network by subtracting the identity matrix from the initial weight matrices, Wl. If the benefits of skip connections were explained primarily by favorable initialization, this malicious initialization would be expected to cancel the effects of skip connections at initialization and hence significantly deteriorate performance. However, the malicious initialization only had a small adverse effect on the performance of the residual network (Figure 5; ResMalInit), suggesting that the benefits of skip connections cannot be explained by favorable initialization alone. This result reveals a fundamental weakness in previous explanations of the benefits of skip connections based purely on linear models (Hardt & Ma, 2016; Li et al., 2016). In Supplementary Note 5 we show that skip connections do not eliminate the singularities in deep linear networks, but only shift the landscape so that typical initializations are farther from the singularities. Thus, in linear networks, any benefits of skip connections are due entirely to better initialization. In contrast, skip connections genuinely eliminate the singularities in nonlinear networks (Supplementary Note 1). The fact that malicious initialization of the residual network does reduce its performance suggests that "ghosts" of these singularities still exist in the loss landscape of nonlinear networks; but the reduction is only slight, suggesting that skip connections alter the landscape around these ghosts to alleviate the learning slow-down that would otherwise take place near them.

2.6 ALTERNATIVE WAYS OF ELIMINATING THE SINGULARITIES

If the success of skip connections can be attributed, at least partly, to eliminating singularities, then alternative ways of eliminating them should also improve training. We tested this hypothesis by introducing a particularly simple way of eliminating singularities: for each layer, we drew random target biases from a Gaussian distribution, N(µ, σ), and put an l2-norm penalty on learned biases deviating from those targets. This breaks the permutation symmetry between units and eliminates the overlap singularities. In addition, positive µ values decrease the average threshold of the units and make the elimination of units less likely (but not impossible), hence reducing the elimination singularities. Decreased thresholds can also increase the dimensionality of the responses in a given layer by reducing the fraction of times different units are identically zero, thereby making them less linearly dependent. Note that setting µ = 0 and σ = 0 corresponds to the standard l2-norm regularization of the biases, which does not eliminate any of the overlap or elimination singularities; hence, we expect performance to be worse in this case than in cases with properly eliminated singularities. On the other hand, although larger values of µ and σ generally correspond to greater elimination of singularities, the network also has to perform well in the classification task, and very large µ, σ values might be inconsistent with the latter requirement. Therefore, we expect performance to be optimal for intermediate values of µ and σ. In the experiments reported below, we optimized the hyperparameters µ, σ, and λ, i.e. the mean and standard deviation of the target bias distribution and the strength of the bias regularization term, through random search (Bergstra & Bengio, 2012).
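For concreteness, here is a minimal PyTorch sketch of the BiasReg penalty just described (the variable names are hypothetical; `mu`, `sigma`, and `lam` are the hyperparameters found by random search):

```python
import torch

def bias_reg_penalty(biases, targets, lam):
    """l2 penalty pulling each layer's biases toward fixed random targets;
    the targets are drawn once from N(mu, sigma) and then held constant."""
    return lam * sum(((b - t) ** 2).sum() for b, t in zip(biases, targets))

# Hypothetical training step (model, task_loss, mu, sigma, lam are assumed):
# biases = [p for name, p in model.named_parameters() if "bias" in name]
# targets = [torch.normal(mu, sigma, size=b.shape) for b in biases]  # fixed
# loss = task_loss + bias_reg_penalty(biases, targets, lam)
```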
We trained 30-layer fully-connected feedforward networks on the CIFAR-10 and CIFAR-100 datasets. Figure 5a-b shows the training accuracy of the different models on the two datasets. For both datasets, among the models shown in Figure 5, the residual network performs best and the plain network worst. Our simple singularity-elimination-through-bias-regularization scheme (BiasReg, cyan) significantly improves performance over the plain network. Importantly, the standard l2-norm regularization of the biases (BiasL2Reg (µ = 0, σ = 0), magenta) does not improve performance over the plain network. These results are consistent with the singularity elimination hypothesis. There is still a significant performance gap between our BiasReg network and the residual network, despite the fact that both break degeneracies. This can be partly attributed to the fact that the residual network breaks the degeneracies more effectively than the BiasReg network (Figure 5c). Secondly, even in models that completely eliminate the singularities, the learning speed still depends on the behavior of the gradient norms, and the residual network fares better than the BiasReg network in this respect as well. At the beginning of training, the gradient norms with respect to the layer activities do not diminish in the earlier layers of the residual network (Figure 6a, Epoch 0), demonstrating that it effectively solves the vanishing gradients problem (Hochreiter, 1991; Bengio et al., 1994). On the other hand, both in the plain network and in the BiasReg network, the gradient norms decay quickly as one descends from the top of the network. Moreover, as training progresses (Figure 6a, Epochs 1 and 2), the gradient norms are larger for the residual network than for the plain or the BiasReg network. Even for the maliciously initialized residual network, gradients do not decay quickly at the beginning of training, and the gradient norms behave similarly to those of the residual network during training (Figure 6a; ResMalInit), suggesting that skip connections boost the gradient norms near the ghosts of the singularities and reduce the learning slow-down that would otherwise take place near them. Adding a single batch normalization layer (Ioffe & Szegedy, 2015) in the middle of the BiasReg network alleviates the vanishing gradients problem for this network and brings its performance closer to that of the residual network (Figure 6a-b; BiasReg+BN).

2.7 NON-IDENTITY SKIP CONNECTIONS

If the singularity elimination hypothesis is correct, there should be nothing special about identity skip connections: skip connections other than the identity should also lead to training improvements, provided they eliminate singularities. For the permutation symmetry breaking of the hidden units, ideally the skip connection vector for each unit should disambiguate that unit maximally from all other units in its layer. This is because, as shown by the analysis in Wei et al. (2008) (Figure 2), even partial overlaps between hidden units significantly slow down learning (near-singularity plateaus). Mathematically, the maximal-disambiguation requirement corresponds to an orthogonality condition on the skip connectivity matrix (any full-rank matrix breaks the permutation symmetry, but only orthogonal matrices maximally disambiguate the units). Adding orthogonal vectors to different hidden units is also useful for breaking potential (exact or approximate) linear dependencies between them. We therefore tested random dense orthogonal matrices as skip connectivity matrices.

Random dense orthogonal matrices performed slightly better than identity skip connections on both the CIFAR-10 and CIFAR-100 datasets (Figure 7a, black vs. blue). This is because, even with skip connections, units can be deactivated for some inputs because of the ReLU nonlinearity (recall that we do not allow skip connections from the input layer). When this happens to a single unit at layer l, that unit is effectively eliminated for that subset of inputs, hence eliminating the skip connection to the corresponding unit at layer l+1 if the skip connectivity is the identity. This causes a potential elimination singularity for that unit. With dense skip connections, however, this possibility is reduced, since all units in the previous layer are used. Moreover, when two distinct units at layer l are deactivated together, identity skips cannot disambiguate the corresponding units at the next layer, causing a potential overlap singularity. With dense orthogonal skips, on the other hand, because all units at layer l are used, even if some of them are deactivated, the units at layer l+1 can still be disambiguated by the remaining active units. Figure 7b confirms for the CIFAR-100 dataset that throughout most of training, the hidden units of the network with dense orthogonal skip connections have a lower probability of zero responses than those of the network with identity skip connections.
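A random dense orthogonal matrix can be generated, for example, from the QR decomposition of a Gaussian matrix; the second function below sketches one step of the orthogonality-reduction procedure described next (illustrative code, assuming an even number of hidden units):

```python
import numpy as np

def random_orthogonal(n, rng):
    """Random dense orthogonal matrix via QR of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))   # sign fix makes the distribution uniform (Haar)

def halve_distinct_columns(M):
    """Copy the first half of the columns over the second half, halving the
    number of distinct orthonormal vectors (assumes an even column count)."""
    M = M.copy()
    n = M.shape[1]
    M[:, n // 2:] = M[:, :n // 2]
    return M
```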
Next, we gradually decreased the degree of "orthogonality" of the skip connectivity matrix to see how orthogonality affects performance. Starting from a random dense orthogonal matrix, we divided the matrix into two halves and copied the first half over the second half. Starting from n orthonormal vectors, this reduces the number of distinct orthonormal vectors to n/2. We continued in this way until the columns of the matrix were repeats of a single unit vector. We predict that as the number of orthonormal vectors in the skip connectivity matrix decreases, performance should deteriorate, because both the permutation-symmetry-breaking capacity and the linear-dependence-breaking capacity of the skip connectivity matrix are reduced. Figure 7 shows the results for n = 128 hidden units. Darker colors correspond to "more orthogonal" matrices (e.g. "128" means all 128 skip vectors are orthonormal to each other; "1" means all 128 vectors are identical). The blue line is the identity skip connectivity. More orthogonal skip connectivity matrices yield better performance, consistent with our hypothesis.

The less orthogonal skip matrices also suffer from the vanishing gradients problem, so their poorer performance could be partly attributable to vanishing gradients. To control for this effect, we also designed skip connectivity matrices with eigenvalues on the unit circle (hence with eigenvalue spectra equivalent to that of an orthogonal matrix), but with varying degrees of orthogonality (see Supplementary Note 6 for details). More specifically, the columns (or rows) of an orthogonal matrix are orthonormal to each other, hence the covariance matrix of these vectors is the identity matrix. We designed matrices where this covariance matrix was allowed to have non-zero off-diagonal values, reflecting the fact that the vectors are no longer orthogonal. By controlling the magnitude of the correlations between the vectors, we manipulated their degree of orthogonality. We achieved this by setting the eigenvalue spectrum of the covariance matrix to $\lambda_i = \exp(-\tau(i-1))$, where $\lambda_i$ denotes the i-th eigenvalue of the covariance matrix and $\tau$ is the parameter that controls the degree of orthogonality: $\tau = 0$ corresponds to the identity covariance matrix, hence to an orthonormal set of vectors, whereas larger values of $\tau$ correspond to gradually more correlated vectors. This orthogonality manipulation was done while fixing the eigenvalue spectrum of the skip connectivity matrix to be on the unit circle; hence, the effects of this manipulation cannot be attributed to any change in the eigenvalue spectrum, but only to the degree of orthogonality of the skip vectors. The results of this experiment are shown in Figure 8. More orthogonal skip connectivity matrices still perform better than less orthogonal ones (Figure 8c-d), even when their eigenvalue spectrum is fixed and the vanishing gradients problem does not arise (Figure 8b), suggesting that the results of the earlier experiment (Figure 7) cannot be explained solely by the vanishing gradients problem.

3 DISCUSSION

In this paper, we proposed a novel explanation for the benefits of skip connections in terms of the elimination of singularities. Our results suggest that the elimination of singularities contributes at least partly to the success of skip connections. However, we emphasize that singularity elimination is not the only factor explaining the benefits of skip connections: even in completely non-degenerate models, other independent factors, such as the behavior of gradient norms, would affect training performance.
Indeed, we presented evidence suggesting that skip connections are also quite effective at dealing with the problem of vanishing gradients, and not every form of singularity elimination can be expected to be equally good at dealing with the additional problems that beset the training of deep networks.

Alternative explanations: Several of our experiments rule out vanishing gradients as the sole explanation for training difficulties in deep networks and strongly suggest an independent role for the singularities arising from the non-identifiability of the model. (i) In Figure 4, all nets have the exact same plain architecture and similarly vanishing gradients at the beginning of training, yet they have diverging performances correlated with measures of distance from the singular manifolds. (ii) Vanishing gradients cannot explain the difference between identity skips and dense orthogonal skips in Figure 7, because both eliminate vanishing gradients, yet dense orthogonal skips perform better. (iii) In Figure 8, spectrum-equalized non-orthogonal skips often have larger gradient norms, yet worse performance than orthogonal skips. (iv) Vanishing gradients cannot even explain the BiasReg results in Figure 5: the BiasReg and plain nets have almost identical (and vanishing) gradients early on in training (Figure 6a), yet the former performs better, as predicted by the symmetry-breaking hypothesis. (v) Similar results hold for two-layer shallow networks, where the problem of vanishing gradients does not arise (Supplementary Note 7). In particular, shallow residual nets are less degenerate and have better accuracy than shallow plain nets; moreover, gradient norms and accuracy are strongly correlated with distance from the overlap manifolds in these shallow nets.

Our malicious initialization experiment with residual nets (Figure 5) suggests that the benefits of skip connections cannot be explained solely in terms of well-conditioning or improved initialization either. This result reveals a fundamental weakness in purely linear explanations of the benefits of skip connections (Hardt & Ma, 2016; Li et al., 2016). Unlike in nonlinear nets, improved initialization entirely explains the benefits of skip connections in linear nets (Supplementary Note 5).

A recent paper (Balduzzi et al., 2017) suggested that the loss of spatial structure in the covariance of the gradients, a phenomenon called "shattered gradients", could be partly responsible for training difficulties in deep nonlinear networks. The authors argued that skip connections alleviate this problem by essentially making the model "more linear". It is easy to see that the shattered gradients problem is distinct from both the vanishing/exploding gradients problem and the degeneracy problems considered in this paper, since shattered gradients arise only in sufficiently non-linear deep networks (linear networks do not shatter gradients), whereas vanishing/exploding gradients, as well as the degeneracies considered here, arise in linear networks too. The relative contribution of each of these distinct problems to training difficulties in deep networks remains to be determined.

Symmetry-breaking in other architectures: We only reported results from experiments with fully-connected networks, but we note that limited receptive field sizes and weight sharing between units in a single feature channel in convolutional neural networks also reduce the permutation symmetry in a given layer.
The symmetry is not entirely eliminated: individual units no longer have permutation symmetry in this case, but feature channels do, and these are far fewer in number than individual units. Similarly, a recent extension of the residual architecture called ResNeXt (Xie et al., 2016) uses parallel, segregated processing streams inside the "bottleneck" blocks, which can again be seen as a way of reducing the permutation symmetry inside the block.

Our method of singularity reduction through bias regularization (BiasReg; Figure 5) can be thought of as indirectly putting a prior over the unit activities. More complicated joint priors over hidden-unit responses, favoring decorrelated (Cogswell et al., 2015) or clustered (Liao et al., 2016) responses, have been proposed before. Although the primary motivation for these regularization schemes was to improve the generalizability or interpretability of the learned representations, they can potentially be understood from a singularity-elimination perspective as well. For example, a prior that favors decorrelated responses can facilitate the breaking of permutation symmetries and linear dependencies between hidden units.

Our results lead to an apparent paradox: over-parametrization and redundancy in large neural network models have been argued to make optimization easier, yet our results seem to suggest the opposite. However, there is no contradiction here; any apparent contradiction stems from ambiguities in the meanings of the terms "over-parametrization" and "redundancy". The intuition behind the benefits of over-parametrization for optimization is an increase in the effective capacity of the model: over-parametrization in this sense leads to a large number of approximately equally good ways of fitting the training data. The degeneracies considered in this paper, on the other hand, reduce the effective capacity of the model, leading to optimization difficulties. Our results suggest that it could be useful for neural network researchers to pay closer attention to the degeneracies inherent in their models. For better optimization, as a general design principle, we recommend reducing such degeneracies as much as possible. Once the training performance starts to saturate, however, degeneracies may help the model achieve better generalization performance. Exploring this trade-off between the harmful and beneficial effects of degeneracies is an interesting direction for future work.

Acknowledgments: AEO and XP were supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.

SUPPLEMENTARY MATERIALS

SUPPLEMENTARY NOTE 1: SINGULARITY OF THE HESSIAN IN NON-LINEAR MULTILAYER NETWORKS

Because the cost function can be expressed as a sum over training examples, it is enough to consider the cost for a single example: $E = \frac{1}{2}\|y - x_L\|^2 \equiv \frac{1}{2} e^\top e$, where the $x_l$ are defined recursively as $x_l = f(W_{l-1} x_{l-1})$ for $l = 1, \ldots, L$. We denote the inputs to the units at layer $l$ by the vector $h_l$: $h_l = W_{l-1} x_{l-1}$. We ignore the biases for simplicity.
The derivative of the cost function with respect to a single weight $W_{l,ij}$ between layers $l$ and $l+1$ is given by
$$\frac{\partial E}{\partial W_{l,ij}} = -\begin{bmatrix} 0 \\ \vdots \\ f'(h_{l+1,i})\,x_{l,j} \\ \vdots \\ 0 \end{bmatrix}^{\top} W_{l+1}^\top \mathrm{diag}(f'_{l+2})\, W_{l+2}^\top \mathrm{diag}(f'_{l+3}) \cdots W_{L-1}^\top \mathrm{diag}(f'_L)\, e \tag{1}$$
Now, consider a different connection, between the same output unit $i$ at layer $l+1$ and a different input unit $j'$ at layer $l$. The crucial thing to note is that if the units $j$ and $j'$ have the same set of incoming weights, then the derivative of the cost function with respect to $W_{l,ij}$ becomes identical to its derivative with respect to $W_{l,ij'}$: $\partial E/\partial W_{l,ij} = \partial E/\partial W_{l,ij'}$. This is because, in this condition, $x_{l,j'} = x_{l,j}$ for all possible inputs, and all the remaining terms in Equation 1 are independent of the input index $j$. Thus, the columns (or rows) of the Hessian corresponding to the connections $W_{l,ij}$ and $W_{l,ij'}$ become identical, making the Hessian degenerate. This is a re-statement of the simple observation that when the units $j$ and $j'$ have the same set of incoming weights, the parameters $W_{l,ij}$ and $W_{l,ij'}$ become non-identifiable (only their sum is identifiable). Thus, this corresponds to an overlap singularity.

A similar argument shows that when a set of units at layer $l$, say the units indexed $j$, $j'$, $j''$, become linearly dependent, the columns of the Hessian corresponding to the weights $W_{l,ij}$, $W_{l,ij'}$, and $W_{l,ij''}$ become linearly dependent as well, thereby making the Hessian singular. Again, this is just a re-statement of the fact that these weights are no longer individually identifiable in this case (only a linear combination of them is identifiable). This corresponds to a linear dependence singularity. In non-linear networks, except in certain degenerate cases where the units saturate together, the units may never be exactly linearly dependent, but they can be approximately linearly dependent, which makes the Hessian close to singular. Moreover, it is easy to see from Equation 1 that when the presynaptic unit $x_{l,j}$ is always zero, i.e. when that unit is effectively killed, the column (or row) of the Hessian corresponding to the parameter $W_{l,ij}$ becomes the zero vector for any $i$, and thus the Hessian becomes singular. This is a re-statement of the simple observation that when the unit $x_{l,j}$ is always zero, its outgoing connections, $W_{l,ij}$, are no longer identifiable. This corresponds to an elimination singularity.

In the residual case, the only thing that changes in Equation 1 is that the factors $W_k^\top \mathrm{diag}(f'_{k+1})$ on the right-hand side become $W_k^\top \mathrm{diag}(f'_{k+1}) + I$, where $I$ is an identity matrix of the appropriate size. The overlap singularities are eliminated, because $x_{l,j'}$ and $x_{l,j}$ cannot be the same for all possible inputs in the residual case (even when the adjustable incoming weights of these units are identical). Similarly, the elimination singularities are eliminated, because $x_{l,j}$ cannot be identically zero for all possible inputs (even when the adjustable incoming weights of this unit are all zero), assuming that the corresponding unit at the previous layer, $x_{l-1,j}$, is not always zero, which in turn is guaranteed with an identity skip connection if $x_{l-2,j}$ is not always zero, and so on, all the way down to the first hidden layer. Any linear dependence between $x_{l,j}$, $x_{l,j'}$, and $x_{l,j''}$ is likewise eliminated by adding linearly independent inputs to them, assuming again that the corresponding units in the previous layer are linearly independent.
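The overlap-singularity argument above can be checked numerically in a few lines. The following PyTorch sketch (our own toy example, not from the paper) builds a tiny network with two exactly overlapping hidden units and verifies that the Hessian of the loss with respect to their outgoing weights has a zero eigenvalue:

```python
import torch

# Tiny 1-3-1 ReLU net; duplicate the incoming weights of hidden units 0 and 1
# and verify that the Hessian w.r.t. the outgoing weights is singular.
torch.manual_seed(0)
X, y = torch.randn(64, 1), torch.randn(64, 1)
J = torch.randn(3, 1)
J[1] = J[0]                                   # overlapping incoming weights

def loss(w):                                  # w: outgoing weights, shape (3,)
    h = torch.relu(X @ J.T)                   # (64, 3) hidden activities
    return 0.5 * ((h @ w.view(3, 1) - y) ** 2).mean()

H = torch.autograd.functional.hessian(loss, torch.randn(3))
print(torch.linalg.eigvalsh(H))               # smallest eigenvalue is ~0
```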
SUPPLEMENTARY NOTE 2: SIMULATION DETAILS

In Figure 3, for the skip connections between non-adjacent layers in the hyper-residual networks, i.e. the $Q_k$, we used matrices of the type labeled "32" in Figure 7, i.e. matrices consisting of four copies of a set of 32 orthonormal vectors. We found that these matrices performed slightly better than orthogonal matrices. We augmented the training data in both CIFAR-10 and CIFAR-100 by adding reflected versions of each training image, i.e. their mirror images. This yields a total of 100000 training images for both datasets. The test data were not augmented, consisting of 10000 images in both cases. We used the standard splits of the data into training and test sets. For the BiasReg network of Figures 5-6, random hyperparameter search returned the following values for the target bias distributions: µ = 0.51, σ = 0.96 for CIFAR-10 and µ = 0.91, σ = 0.03 for CIFAR-100.

The toy model shown in Figure 2b-c consists of the simulation of Equations 3.7 and 3.9 in Wei et al. (2008). The toy model shown in Figure 2e is a simulation of the learning dynamics in a network with 3 input, 3 hidden, and 3 output units, parametrized in terms of the norms and unit-vector directions of $J_a - J_b - J_c$, $J_a + J_b - J_c$, $J_c$, and the output weights. A teacher model with random parameters is first chosen, and a large set of "training data" is generated from the teacher model. Then the gradient flow fields with respect to the two parameters $m = \|J_a + J_b - J_c\|$ and $\|J_c\|$ are plotted under the assumption that the remaining parameters are already at their optima (a similar assumption was made in the analysis of Wei et al. (2008)). We empirically confirmed that the flow field is generic.

SUPPLEMENTARY NOTE 3: ESTIMATING THE EIGENVALUE SPECTRAL DENSITY OF THE HESSIAN IN DEEP NETWORKS

We use Skilling's moment-matching method (Skilling, 1989) to estimate the eigenvalue spectra of the Hessian. We first estimate the first few non-central moments of the density by computing $m_k = \frac{1}{N} r^\top H^k r$, where $r$ is a random vector drawn from the standard multivariate Gaussian with zero mean and identity covariance, $H$ is the Hessian, and $N$ is the dimensionality of the parameter space. Because the standard multivariate Gaussian is rotationally symmetric and the Hessian is a symmetric matrix, it is easy to show that $m_k$ gives an unbiased estimate of the $k$-th moment of the spectral density:
$$m_k = \frac{1}{N} r^\top H^k r = \frac{1}{N}\sum_{i=1}^N \tilde{r}_i^2\, \lambda_i^k \;\to\; \int p(\lambda)\,\lambda^k\, d\lambda \quad \text{as } N \to \infty \tag{2}$$
where the $\lambda_i$ are the eigenvalues of the Hessian and $p(\lambda)$ is the spectral density of the Hessian as $N \to \infty$. In Equation 2, we make use of the fact that the $\tilde{r}_i^2$ are random variables with expected value 1. Despite appearances, the products in $m_k$ do not require computing the Hessian explicitly; they can instead be computed efficiently as follows:
$$v_0 = r, \qquad v_k = H v_{k-1}, \qquad k = 1, \ldots, K \tag{3}$$
where the Hessian-times-vector computation can be performed without computing the Hessian explicitly through Pearlmutter's R-operator (Pearlmutter, 1994). In terms of the vectors $v_k$, the estimates of the moments are given by
$$m_{2k} = \frac{1}{N} v_k^\top v_k, \qquad m_{2k+1} = \frac{1}{N} v_k^\top v_{k+1} \tag{4}$$
For the results shown in Figure 3, we use 20-layer fully-connected feedforward networks, and the number of parameters is N = 709652. For the remaining simulations, we use 30-layer fully-connected networks, and the number of parameters is N = 874772.
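In a modern autodiff framework, the Hessian-vector products in Equations 3-4 can be obtained by double backpropagation, which plays the role of Pearlmutter's R-operator. A hedged PyTorch sketch for the first four moments follows (a single random draw of r; averaging over several draws reduces the estimator's variance):

```python
import torch

def hvp(g, params, v):
    """Hessian-vector product H v via double backprop; g is the flattened
    gradient built with create_graph=True."""
    out = torch.autograd.grad(g @ v, params, retain_graph=True)
    return torch.cat([o.reshape(-1) for o in out]).detach()

def first_four_hessian_moments(loss, params):
    """m_k = r^T H^k r / N (Eqs. 2-4): with v_0 = r and v_k = H v_{k-1},
    m_{2k} = v_k^T v_k / N and m_{2k+1} = v_k^T v_{k+1} / N."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    g = torch.cat([gr.reshape(-1) for gr in grads])
    N = g.numel()
    v0 = torch.randn(N)
    v1 = hvp(g, params, v0)                   # H r
    v2 = hvp(g, params, v1)                   # H^2 r
    return [(v0 @ v1 / N).item(), (v1 @ v1 / N).item(),
            (v1 @ v2 / N).item(), (v2 @ v2 / N).item()]
```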
For the results shown in Figure 3, we use 20-layer fully-connected feedforward networks and the number of parameters is $N = 709652$. For the remaining simulations, we use 30-layer fully-connected networks and the number of parameters is $N = 874772$. We estimate the first four moments of the Hessian and fit the estimated moments with a parametric density model. The parametric density model we use is a mixture of a narrow Gaussian distribution (to capture the bulk of the density) and a skew-normal distribution (to capture the tails):

$$q(\lambda) = w\,\mathcal{SN}(\lambda;\, \xi, \omega, \alpha) + (1-w)\,\mathcal{N}(\lambda;\, 0, \sigma = 0.001) \quad (5)$$

with 4 parameters in total: the mixture weight $w$, and the location $\xi$, scale $\omega$ and shape $\alpha$ parameters of the skew-normal distribution. We fix the parameters of the Gaussian component to $\mu = 0$ and $\sigma = 0.001$. Since the densities are heavy-tailed, the moments are dominated by the tail behavior of the model, hence the fits are not very sensitive to the precise choice of the parameters of the Gaussian component. The moments of our model can be computed in closed form. We had difficulty fitting the parameters of the model with gradient-based methods, hence we used a simple grid search method instead. The ranges searched over for each parameter were as follows. $w$: logarithmically spaced between $10^{-9}$ and $10^{-3}$; $\alpha$: linearly spaced between $-50$ and $50$; $\xi$: linearly spaced between $-10$ and $10$; $\omega$: logarithmically spaced between $10^{-1}$ and $10^{3}$. 100 parameters were evaluated along each parameter dimension, for a total of $10^8$ parameter configurations evaluated. The estimated moments ranged over several orders of magnitude. To make sure that the optimization gave roughly equal weight to fitting each moment, we minimized a normalized objective function:

$$L(w, \alpha, \xi, \omega) = \sum_{k=1}^{4} \frac{|\hat{m}_k(w, \alpha, \xi, \omega) - m_k|}{|m_k|} \quad (6)$$

where $\hat{m}_k(w, \alpha, \xi, \omega)$ is the model-derived estimate of the $k$-th moment.
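A coarse sketch of this grid-search fit is given below (our reconstruction under stated assumptions, not the authors' code: the skew-normal moments are computed in closed form, the target moments are placeholders, and the grids are much coarser than the 100 points per dimension used in the text).

```python
import numpy as np
from math import comb

sigma0 = 0.001                                               # fixed Gaussian component
gauss_mom = np.array([0.0, sigma0**2, 0.0, 3 * sigma0**4])   # m_1..m_4 of N(0, sigma0)

def sn_moments(alpha, xi, omega):
    """Closed-form non-central moments m_1..m_4 of a skew-normal(xi, omega, alpha)."""
    d = alpha / np.sqrt(1 + alpha**2)
    b = np.sqrt(2 / np.pi)
    z = [1.0, b * d, 1.0, b * d * (3 - d**2), 3.0]           # E[Z^0..Z^4], Z standard
    return np.array([sum(comb(k, i) * xi**(k - i) * omega**i * z[i]
                         for i in range(k + 1)) for k in (1, 2, 3, 4)])

target = np.array([0.01, 0.5, 2.0, 40.0])   # placeholder Skilling estimates m_1..m_4

best, best_loss = None, np.inf
for w in np.logspace(-9, -3, 12):
    for alpha in np.linspace(-50, 50, 12):
        for xi in np.linspace(-10, 10, 12):
            for omega in np.logspace(-1, 3, 12):
                m_hat = w * sn_moments(alpha, xi, omega) + (1 - w) * gauss_mom
                loss = np.sum(np.abs(m_hat - target) / np.abs(target))   # Equation 6
                if loss < best_loss:
                    best, best_loss = (w, alpha, xi, omega), loss
print(best, best_loss)
```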
SUPPLEMENTARY NOTE 4: VALIDATION OF THE RESULTS WITH SMALLER NETWORKS

Here, we validate our main results for smaller, numerically tractable networks. The networks in this section are 10-layer fully-connected feedforward networks trained on CIFAR-100. The input dimensionality is reduced from 3072 to 128 through PCA. In what follows, we calculate the fraction of degenerate eigenvalues by counting the number of eigenvalues inside a small window of size 0.2 around 0, and the fraction of negative eigenvalues by counting the number of eigenvalues to the left of this window.

We first compare residual networks with plain networks (Figure S1). The networks here have 16 hidden units in each layer, yielding a total of 4852 parameters. This is small enough that we can calculate all eigenvalues of the Hessian numerically. We observe that residual networks have better training and test performance (Figure S1a-b); they are less degenerate (Figure S1d) and have more negative eigenvalues than plain networks (Figure S1c). These results are consistent with the results reported in Figure 3 for deeper and larger networks.

Figure S1: Validation of the results with 10-layer plain and residual networks trained on CIFAR-100. (a-b) Training and test accuracy. (c-d) Fraction of negative and degenerate eigenvalues throughout training. The results are averages over 4 independent runs ±1 standard errors.

Next, we validate the results reported in Figure 4 by running 400 independent plain networks and comparing the best-performing 40 with the worst-performing 40 among them (Figure S2). Again, the networks here have 16 hidden units in each layer, with a total of 4852 parameters. We observe that the best networks are less degenerate (Figure S2d) and have more negative eigenvalues than the worst networks (Figure S2c). Moreover, the hidden units of the best networks have less overlap (Figure S2f) and, at least initially during training, have slightly larger weight norms than the worst-performing networks (Figure S2e). Again, these results are all consistent with those reported in Figure 4 for deeper and larger networks.

Figure S2: Validation of the results with 400 10-layer plain networks with 16 hidden units in each layer (4852 parameters total) trained on CIFAR-100. We compare the best 40 networks with the worst 40 networks, as in Figure 4. (a-b) Training and test accuracy. (c-d) Fraction of negative and degenerate eigenvalues throughout training. Better performing networks are less degenerate and have more negative eigenvalues. (e) Mean norms of the incoming weight vectors of the hidden units. (f) Mean overlaps of the hidden units, as measured by the mean correlation between their incoming weight vectors. The results are averages over the 40 best or worst runs ±1 standard errors.

Finally, using numerically tractable plain networks, we also tested whether we could reliably estimate the fractions of degenerate and negative eigenvalues with our mixture model. Just as we do for the larger networks, we first fit the mixture model to the first four moments of the spectral density estimated with the method of Skilling (1989). We then estimate the fraction of degenerate and negative eigenvalues from the fitted mixture model and compare these estimates with those obtained from the numerically calculated actual eigenvalues. Because the larger networks were found to be highly degenerate, we restrict the analysis here to conditions where the fraction of degenerate eigenvalues was at least 99.8%. We used 10-layer plain networks with 32 hidden units in each layer (with a total of 14292 parameters) for this analysis. We observe that, at least for these small networks, the mixture model usually underestimates the fraction of degenerate eigenvalues and overestimates the fraction of negative eigenvalues. However, there is a highly significant positive correlation between the actual and estimated fractions (Figure S3).

Figure S3: For 10-layer plain networks with 32 hidden units in each layer (14292 parameters total), estimates obtained from the mixture model slightly underestimate the fraction of degenerate eigenvalues, and overestimate the fraction of negative eigenvalues; however, there is a highly significant linear relationship between the actual values and the estimates. (a) Actual vs. estimated fraction of degenerate eigenvalues. (b) Actual vs. estimated fraction of negative eigenvalues for the same networks. The dashed line shows the identity line. Dots and errorbars represent means and standard errors of estimates in different bins; the solid lines and the shaded regions represent the linear regression fits and the 95% confidence intervals.
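The window-based counting used throughout this note amounts to the following small helper (a sketch; the synthetic eigenvalues stand in for a numerically computed Hessian spectrum):

```python
import numpy as np

def eig_fractions(eigs, window=0.2):
    """Percent of degenerate (inside the window around 0) and negative eigenvalues."""
    eigs = np.asarray(eigs)
    degenerate = np.mean(np.abs(eigs) < window / 2)
    negative = np.mean(eigs <= -window / 2)          # strictly left of the window
    return 100 * degenerate, 100 * negative

rng = np.random.default_rng(0)
eigs = np.concatenate([1e-3 * rng.standard_normal(900),   # near-zero bulk
                       rng.uniform(0.2, 3.0, 95),         # positive tail
                       rng.uniform(-2.0, -0.2, 5)])       # negative tail
print(eig_fractions(eigs))                                # -> (90.0, 0.5)
```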
SUPPLEMENTARY NOTE 5: DYNAMICS OF LEARNING IN LINEAR NETWORKS WITH SKIP CONNECTIONS

To get a better analytic understanding of the effects of skip connections on the learning dynamics, we turn to linear networks. In an $L$-layer linear plain network, the input-output mapping is given by (again ignoring the biases for simplicity):

$$x_L = W_{L-1} W_{L-2} \cdots W_1 x_1 \quad (7)$$

where $x_1$ and $x_L$ are the input and output vectors, respectively. In linear residual networks with identity skip connections between adjacent layers, the input-output mapping becomes:

$$x_L = (W_{L-1} + I)(W_{L-2} + I) \cdots (W_1 + I)\, x_1 \quad (8)$$

Finally, in hyper-residual linear networks where all skip connection matrices are assumed to be the identity, the input-output mapping is given by:

$$x_L = \big(W_{L-1} + (L-1)I\big)\big(W_{L-2} + (L-2)I\big) \cdots \big(W_1 + I\big)\, x_1 \quad (9)$$

In the derivations to follow, we do not have to assume that the connectivity matrices are square matrices. If they are rectangular matrices, the identity matrix $I$ should be interpreted as a rectangular identity matrix of the appropriate size. This corresponds to zero-padding the layers when they are not the same size, as is usually done in practice.

Three-layer networks: Dynamics of learning in plain linear networks with no skip connections was analyzed in Saxe et al. (2013). For a three-layer network ($L = 3$), the learning dynamics can be expressed by the following differential equations (Saxe et al., 2013):

$$\tau \frac{d}{dt} a^\alpha = (s^\alpha - a^\alpha \cdot b^\alpha)\, b^\alpha - \sum_{\gamma \neq \alpha} (a^\alpha \cdot b^\gamma)\, b^\gamma \quad (10)$$

$$\tau \frac{d}{dt} b^\alpha = (s^\alpha - a^\alpha \cdot b^\alpha)\, a^\alpha - \sum_{\gamma \neq \alpha} (a^\gamma \cdot b^\alpha)\, a^\gamma \quad (11)$$

Here $a^\alpha$ and $b^\alpha$ are $n$-dimensional column vectors (where $n$ is the number of hidden units) connecting the hidden layer to the $\alpha$-th input and output modes, respectively, of the input-output correlation matrix, and $s^\alpha$ is the corresponding singular value (see Saxe et al. (2013) for further details). The first term on the right-hand side of Equations 10-11 facilitates cooperation between $a^\alpha$ and $b^\alpha$ corresponding to the same input-output mode $\alpha$, while the second term encourages competition between vectors corresponding to different modes. In the simplest scenario where there are only two input and output modes, the learning dynamics of Equations 10-11 reduces to:

$$\frac{d}{dt} a^1 = (s^1 - a^1 \cdot b^1)\, b^1 - (a^1 \cdot b^2)\, b^2 \quad (12)$$
$$\frac{d}{dt} a^2 = (s^2 - a^2 \cdot b^2)\, b^2 - (a^2 \cdot b^1)\, b^1 \quad (13)$$
$$\frac{d}{dt} b^1 = (s^1 - a^1 \cdot b^1)\, a^1 - (a^2 \cdot b^1)\, a^2 \quad (14)$$
$$\frac{d}{dt} b^2 = (s^2 - a^2 \cdot b^2)\, a^2 - (a^1 \cdot b^2)\, a^1 \quad (15)$$

How does adding skip connections between adjacent layers change the learning dynamics? Considering again a three-layer network ($L = 3$) with only two input and output modes, a straightforward extension of Equations 12-15 shows that the learning dynamics changes as follows:

$$\frac{d}{dt} a^1 = \big[s^1 - (a^1 + v^1)\cdot(b^1 + u^1)\big](b^1 + u^1) - \big[(a^1 + v^1)\cdot(b^2 + u^2)\big](b^2 + u^2) \quad (16)$$
$$\frac{d}{dt} a^2 = \big[s^2 - (a^2 + v^2)\cdot(b^2 + u^2)\big](b^2 + u^2) - \big[(a^2 + v^2)\cdot(b^1 + u^1)\big](b^1 + u^1) \quad (17)$$
$$\frac{d}{dt} b^1 = \big[s^1 - (a^1 + v^1)\cdot(b^1 + u^1)\big](a^1 + v^1) - \big[(a^2 + v^2)\cdot(b^1 + u^1)\big](a^2 + v^2) \quad (18)$$
$$\frac{d}{dt} b^2 = \big[s^2 - (a^2 + v^2)\cdot(b^2 + u^2)\big](a^2 + v^2) - \big[(a^1 + v^1)\cdot(b^2 + u^2)\big](a^1 + v^1) \quad (19)$$

where $u^1$ and $u^2$ are orthonormal vectors (similarly for $v^1$ and $v^2$); note that the cross terms in Equations 14-15 and 18-19 follow from Equation 11. The derivation proceeds essentially identically to the corresponding derivation for plain networks in Saxe et al. (2013).
The only differences are: (i) we substitute the plain weight matrices $W_l$ with their residual counterparts $W_l + I$, and (ii) when changing the basis from the canonical basis for the weight matrices $W_1$, $W_2$ to the input and output modes of the input-output correlation matrix, $U$ and $V$, we note that:

$$W_2 + I = U\bar{W}_2 + UU^\top = U(\bar{W}_2 + U^\top) \quad (20)$$
$$W_1 + I = \bar{W}_1 V^\top + VV^\top = (\bar{W}_1 + V)V^\top \quad (21)$$

where $U$ and $V$ are orthogonal matrices, $\bar{W}_2 \equiv U^\top W_2$ and $\bar{W}_1 \equiv W_1 V$ denote the weight matrices expressed in these bases, and the vectors $a^\alpha$, $b^\alpha$, $u^\alpha$ and $v^\alpha$ in Equations 16-19 correspond to the $\alpha$-th columns of the matrices $\bar{W}_1$, $\bar{W}_2^\top$, $U$ and $V$, respectively.

Figure S4 shows, for two different initializations, the evolution of the variables $a^1$ and $a^2$ in plain and residual networks with two input-output modes and two hidden units. When the variables are initialized to small random values, the dynamics in the plain network initially evolves slowly (Figure S4a, blue), whereas it is much faster in the residual network (Figure S4a, red). This effect is attributable to two factors. First, the added orthonormal vectors $u^\alpha$ and $v^\alpha$ increase the initial velocity of the variables in the residual network. Second, even when we equalize the initial norms of the vectors $a^\alpha$ and $a^\alpha + v^\alpha$ (and those of the vectors $b^\alpha$ and $b^\alpha + u^\alpha$) in the plain and the residual networks, respectively, we still observe an advantage for the residual network (Figure S4b), because the cooperative and competitive terms are orthogonal to each other in the residual network (or close to orthogonal, depending on the initialization of $a^\alpha$ and $b^\alpha$; see the right-hand side of Equations 16-19), whereas in the plain network they are not necessarily orthogonal and hence can cancel each other (Equations 12-15), thus slowing down convergence.

Figure S4: Evolution of $a^1$ and $a^2$ in linear plain and residual networks (evolution of $b^1$ and $b^2$ proceeds similarly). The weights converge faster in residual networks. Simulation details are as follows: the number of hidden units is 2 (the two solid lines for each color represent the weights associated with the two hidden nodes, e.g. $a^1_1$ and $a^1_2$ on the left), and the singular values are $s^1 = 3.0$, $s^2 = 1.5$. For the residual network, $u^1 = v^1 = [1/\sqrt{2}, 1/\sqrt{2}]^\top$ and $u^2 = v^2 = [1/\sqrt{2}, -1/\sqrt{2}]^\top$. In (a), the weights of both plain and residual networks are initialized to random values drawn from a Gaussian with zero mean and standard deviation of 0.0001, and the learning rate was set to 0.1. In (b), the weights of the plain network are initialized as follows: the vectors $a^1$ and $a^2$ are initialized to $[1/\sqrt{2}, 1/\sqrt{2}]^\top$ and the vectors $b^1$ and $b^2$ are initialized to $[1/\sqrt{2}, -1/\sqrt{2}]^\top$; the weights of the residual network are all initialized to zero, thus equalizing the initial norms of the vectors $a^\alpha$ and $a^\alpha + v^\alpha$ (and those of the vectors $b^\alpha$ and $b^\alpha + u^\alpha$) between the plain and residual networks. The residual network still converges faster than the plain network. In (b), the learning rate was set to 0.01 to make the different convergence rates of the two networks more visible.
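The two-mode dynamics can be reproduced with a few lines of forward-Euler integration (a sketch of the simulation behind Figure S4a under its stated settings, not the authors' code; the residual case simply replaces $a^i$ by $a^i + v^i$ and $b^i$ by $b^i + u^i$ in Equations 12-15):

```python
import numpy as np

def run(residual, steps=4000, lr=0.1, s=(3.0, 1.5), seed=0):
    rng = np.random.default_rng(seed)
    a = [rng.normal(0, 1e-4, 2) for _ in range(2)]   # small random initialization
    b = [rng.normal(0, 1e-4, 2) for _ in range(2)]
    r = 1 / np.sqrt(2)
    u = [np.array([r, r]), np.array([r, -r])] if residual else [np.zeros(2)] * 2
    traj = []
    for _ in range(steps):
        A = [a[i] + u[i] for i in range(2)]          # v^i = u^i, as in Figure S4
        B = [b[i] + u[i] for i in range(2)]
        da = [(s[i] - A[i] @ B[i]) * B[i] - (A[i] @ B[1 - i]) * B[1 - i] for i in range(2)]
        db = [(s[i] - A[i] @ B[i]) * A[i] - (A[1 - i] @ B[i]) * A[1 - i] for i in range(2)]
        for i in range(2):
            a[i] = a[i] + lr * da[i]
            b[i] = b[i] + lr * db[i]
        traj.append([A[i] @ B[i] for i in range(2)])
    return np.array(traj)

for residual in (False, True):
    t = run(residual)
    ok = np.all(np.abs(t - [3.0, 1.5]) < 0.05 * np.array([3.0, 1.5]), axis=1)
    print("residual" if residual else "plain   ",
          int(np.argmax(ok)) if ok.any() else "no convergence")  # steps to 5% of target
```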
Singularity of the Hessian in linear three-layer networks: The dynamics in Equations 10-11 can be interpreted as gradient descent on the following energy function:

$$E = \frac{1}{2\tau} \sum_\alpha (s^\alpha - a^\alpha \cdot b^\alpha)^2 + \frac{1}{2\tau} \sum_{\alpha \neq \beta} (a^\alpha \cdot b^\beta)^2 \quad (22)$$

This energy function is invariant to a (simultaneous) permutation of the elements of the vectors $a^\alpha$ and $b^\alpha$ for all $\alpha$. This causes degenerate manifolds in the landscape. Specifically, for the permutation symmetry of hidden units, these manifolds are the hyperplanes $a^\alpha_i = a^\alpha_j \;\forall\alpha$, for each pair of hidden units $i, j$ (similarly, the hyperplanes $b^\alpha_i = b^\alpha_j \;\forall\alpha$) that make the model non-identifiable. Formally, these correspond to the singularities of the Hessian or the Fisher information matrix. Indeed, we shall quickly check below that when $a^\alpha_i = a^\alpha_j \;\forall\alpha$ for any pair of hidden units $i, j$, the Hessian becomes singular (overlap singularities). The Hessian also has additional singularities at the hyperplanes $a^\alpha_i = 0 \;\forall\alpha$ for any $i$ and at $b^\alpha_i = 0 \;\forall\alpha$ for any $i$ (elimination singularities).

Starting from the energy function in Equation 22 and taking the derivative with respect to a single input-to-hidden layer weight $a^\alpha_i$:

$$\frac{\partial E}{\partial a^\alpha_i} = -(s^\alpha - a^\alpha \cdot b^\alpha)\, b^\alpha_i + \sum_{\beta \neq \alpha} (a^\alpha \cdot b^\beta)\, b^\beta_i \quad (23)$$

the second derivatives are as follows:

$$\frac{\partial^2 E}{\partial (a^\alpha_i)^2} = (b^\alpha_i)^2 + \sum_{\beta \neq \alpha} (b^\beta_i)^2 = \sum_\beta (b^\beta_i)^2 \quad (24)$$

$$\frac{\partial^2 E}{\partial a^\alpha_i \,\partial a^\alpha_j} = b^\alpha_j b^\alpha_i + \sum_{\beta \neq \alpha} b^\beta_j b^\beta_i = \sum_\beta b^\beta_i b^\beta_j \quad (25)$$

Note that the second derivatives are independent of the mode index $\alpha$, reflecting the fact that the energy function is invariant to a permutation of the mode indices. Furthermore, when $b^\beta_i = b^\beta_j$ for all $\beta$, the columns in the Hessian corresponding to $a^\alpha_i$ and $a^\alpha_j$ become identical, causing an additional degeneracy reflecting the non-identifiability of $a^\alpha_i$ and $a^\alpha_j$. A similar derivation establishes that $a^\beta_i = a^\beta_j$ for all $\beta$ also leads to a degeneracy in the Hessian, this time reflecting the non-identifiability of $b^\alpha_i$ and $b^\alpha_j$. These correspond to the overlap singularities. In addition, it is easy to see from Equations 24-25 that when $b^\alpha_i = 0 \;\forall\alpha$, the right-hand sides of both equations become identically zero, reflecting the non-identifiability of $a^\alpha_i$ for all $\alpha$. A similar derivation shows that when $a^\alpha_i = 0 \;\forall\alpha$, the columns of the Hessian corresponding to $b^\alpha_i$ become identically zero for all $\alpha$, this time reflecting the non-identifiability of $b^\alpha_i$ for all $\alpha$. These correspond to the elimination singularities.

When we add skip connections between adjacent layers, i.e. in the residual architecture, the energy function changes as follows:

$$E = \frac{1}{2} \sum_\alpha \big(s^\alpha - (a^\alpha + v^\alpha) \cdot (b^\alpha + u^\alpha)\big)^2 + \frac{1}{2} \sum_{\alpha \neq \beta} \big((a^\alpha + v^\alpha) \cdot (b^\beta + u^\beta)\big)^2 \quad (26)$$

and straightforward algebra yields the following second derivatives:

$$\frac{\partial^2 E}{\partial (a^\alpha_i)^2} = \sum_\beta (b^\beta_i + u^\beta_i)^2 \quad (27)$$

$$\frac{\partial^2 E}{\partial a^\alpha_i \,\partial a^\alpha_j} = \sum_\beta (b^\beta_i + u^\beta_i)(b^\beta_j + u^\beta_j) \quad (28)$$

Unlike in the plain network, setting $b^\beta_i = b^\beta_j$ for all $\beta$, or setting $b^\alpha_i = 0 \;\forall\alpha$, does not lead to a degeneracy here, thanks to the orthogonal skip vectors $u^\beta$. However, this just shifts the locations of the singularities. In particular, the residual network suffers from the same overlap and elimination singularities as the plain network when we make the following change of variables: $b^\beta \rightarrow b^\beta - u^\beta$ and $a^\beta \rightarrow a^\beta - v^\beta$.
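Equations 24-25 and 27-28 can be checked numerically in a couple of lines (a sketch with assumed dimensions): the Hessian block over the weights $a^\alpha_i$ for one mode is $B^\top B$ with $B_{\beta i} = b^\beta_i$, which loses rank exactly when two columns of $B$ coincide, while adding orthogonal skip vectors restores full rank generically.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))        # B[beta, i] = b_i^beta: 4 modes, 4 hidden units
B[:, 1] = B[:, 0]                      # units 0 and 1 overlap in their b-weights
H_block = B.T @ B                      # entry (i, j) = sum_beta b_i^beta b_j^beta (Eq. 25)
print(np.linalg.eigvalsh(H_block)[0])  # exact zero eigenvalue: overlap singularity

U = np.linalg.qr(rng.standard_normal((4, 4)))[0]     # orthogonal skip vectors u^beta
print(np.linalg.eigvalsh((B + U).T @ (B + U))[0])    # bounded away from zero (Eq. 28)
```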
Networks with more than three layers: As shown in Saxe et al. (2013), in linear networks with more than a single hidden layer, assuming that there are orthogonal matrices $R_l$ and $R_{l+1}$ for each layer $l$ that diagonalize the initial weight matrix of the corresponding layer (i.e. $R_{l+1}^\top W_l(0) R_l = D_l$ is a diagonal matrix), the dynamics of different singular modes decouple from each other and each mode $\alpha$ evolves according to gradient descent dynamics in an energy landscape described by (Saxe et al., 2013):

$$E_{\text{plain}} = \frac{1}{2\tau}\Big(s^\alpha - \prod_{l=1}^{N_l - 1} a^\alpha_l\Big)^2 \quad (29)$$

where $a^\alpha_l$ can be interpreted as the strength of mode $\alpha$ at layer $l$ and $N_l$ is the total number of layers. In residual networks, assuming further that the orthogonal matrices $R_l$ satisfy $R_{l+1}^\top R_l = I$, the energy function changes to:

$$E_{\text{res}} = \frac{1}{2\tau}\Big(s^\alpha - \prod_{l=1}^{N_l - 1} (a^\alpha_l + 1)\Big)^2 \quad (30)$$

and in hyper-residual networks, it is:

$$E_{\text{hyperres}} = \frac{1}{2\tau}\Big(s^\alpha - \prod_{l=1}^{N_l - 1} (a^\alpha_l + l)\Big)^2 \quad (31)$$

Figure S5a illustrates the effect of skip connections on the phase portrait of a three-layer network. The two axes, $a$ and $b$, represent the mode strength variables for $l = 1$ and $l = 2$, respectively: i.e. $a \equiv a^\alpha_1$ and $b \equiv a^\alpha_2$. The plain network has a saddle point at $(0, 0)$ (Figure S5a, left). The dynamics around this point is slow, hence starting from small random values causes initially very slow learning. The network funnels the dynamics through the unstable manifold $a = b$ to the stable hyperbolic solution corresponding to $ab = s$. Identity skip connections between adjacent layers in the residual architecture move the saddle point to $(-1, -1)$ (Figure S5a, middle). This speeds up the dynamics around the origin, but not as much as in the hyper-residual architecture, where the saddle point is moved further away from the origin and the main diagonal, to $(-1, -2)$ (Figure S5a, right). We found these effects to be more pronounced in deeper networks. Figure S5b shows the dynamics of learning in 10-layer linear networks, demonstrating a clear advantage for the residual architecture over the plain architecture, and for the hyper-residual architecture over the residual architecture.

Figure S5: (a) Phase portraits for three-layer plain, residual and hyper-residual linear networks. (b) Evolution of $u = \prod_{l=1}^{N_l-1} a_l$ for 10-layer plain, residual and hyper-residual linear networks. In the plain network, $u$ did not converge to its asymptotic value $s$ within the simulated time window.

Singularity of the Hessian in reduced linear multilayer networks with skip connections: The derivative of the cost function of a linear multilayer residual network (Equation 30) with respect to the mode strength variable at layer $i$, $a_i$, is given by (suppressing the mode index $\alpha$, taking $\tau = 1$, and writing $u = \prod_l (a_l + 1)$ for the end-to-end mode strength):

$$\frac{\partial E}{\partial a_i} = -(s - u) \prod_{l \neq i} (a_l + 1) \quad (32)$$

and the second derivatives are:

$$\frac{\partial^2 E}{\partial a_i^2} = \Big[\prod_{l \neq i} (a_l + 1)\Big]^2 \quad (33)$$

$$\frac{\partial^2 E}{\partial a_i \,\partial a_k} = \Big[2 \prod_l (a_l + 1) - s\Big] \prod_{l \neq i,k} (a_l + 1) \quad (34)$$

It is easy to check that the columns (or rows) corresponding to $a_i$ and $a_j$ in the Hessian become identical when $a_i = a_j$, making the Hessian degenerate. The hyper-residual architecture does not eliminate these degeneracies, but shifts them to different locations in the parameter space by adding distinct constants to $a_i$ and $a_j$ (and to all other variables).

SUPPLEMENTARY NOTE 6: DESIGNING SKIP CONNECTIVITY MATRICES WITH VARYING DEGREES OF ORTHOGONALITY AND WITH EIGENVALUES ON THE UNIT CIRCLE

We generated the covariance matrix of the eigenvectors by $S = Q\Lambda Q^\top$, where $Q$ is a random orthogonal matrix and $\Lambda$ is the diagonal matrix of eigenvalues, $\Lambda_{ii} = \exp(-\tau(i-1))$, as explained in the main text. We find the correlation matrix through $R = D^{-1/2} S D^{-1/2}$, where $D$ is the diagonal matrix of the variances: i.e. $D_{ii} = S_{ii}$. We take the Cholesky decomposition of the correlation matrix, $R = TT^\top$. Then the designed skip connectivity matrix is given by $\Sigma = TULU^{-1}T^{-1}$, where $L$ and $U$ are the matrices of eigenvalues and eigenvectors of another randomly generated orthogonal matrix $O$: i.e. $O = ULU^\top$. With this construction, $\Sigma$ has the same eigenvalue spectrum as $O$; however, the eigenvectors of $\Sigma$ are linear combinations of the eigenvectors of $O$ such that their correlation matrix is given by $R$. Thus, the eigenvectors of $\Sigma$ are not orthogonal to each other unless $\tau = 0$. Larger values of $\tau$ yield more correlated, hence less orthogonal, eigenvectors.
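The construction reads almost line for line as code (a sketch with a hypothetical helper name; note that $ULU^{-1}$ reassembles $O$, so the designed matrix is the similarity transform $\Sigma = TOT^{-1}$):

```python
import numpy as np

def designed_skip(n, tau, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.linalg.qr(rng.standard_normal((n, n)))[0]   # random orthogonal matrix
    lam = np.exp(-tau * np.arange(n))                  # Lambda_ii = exp(-tau (i - 1))
    S = Q @ np.diag(lam) @ Q.T                         # covariance of the eigenvectors
    d = np.sqrt(np.diag(S))
    R = S / np.outer(d, d)                             # R = D^-1/2 S D^-1/2
    T = np.linalg.cholesky(R)                          # R = T T'
    O = np.linalg.qr(rng.standard_normal((n, n)))[0]   # another random orthogonal matrix
    return T @ O @ np.linalg.inv(T)                    # Sigma = T U L U^-1 T^-1 = T O T^-1

Sigma = designed_skip(128, tau=0.02)
print(np.abs(np.linalg.eigvals(Sigma)))                # all ~1: spectrum on the unit circle
```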
SUPPLEMENTARY NOTE 7: VALIDATION OF THE RESULTS WITH SHALLOW NETWORKS

To further demonstrate the generality of our results and the independence of the problem of singularities from the vanishing gradients problem in optimization, we performed an experiment with shallow plain and residual networks with only two hidden layers and 16 units in each hidden layer. Because we do not allow skip connections from the input layer, a network with two hidden layers is the shallowest network we can use to compare the plain and residual architectures. Figure S6 shows the results of this experiment. The residual network performs slightly better both on the training and test data (Figure S6a-b); it is less degenerate (Figure S6d) and has more negative eigenvalues (Figure S6c); it has larger gradients (Figure S6e), and note that the gradients in the plain network do not vanish even at the beginning of training; and its hidden units have less overlap than those of the plain network (Figure S6f). Moreover, the gradient norms closely track the mean overlap between the hidden units and the degeneracy of the network (Figure S6d-f) throughout training. These results suggest that the degeneracies caused by the overlaps of hidden units slow down learning, consistent with our symmetry-breaking hypothesis and with the results from larger networks.

Figure S6: Main results hold for two-layer shallow nets trained on CIFAR-100. (a-b) Training and test accuracies. Residual nets perform slightly better. (c-d) Fraction of negative and degenerate eigenvalues. Residual nets are less degenerate. (e) Mean gradient norms with respect to the two layer activations throughout training. (f) Mean overlap for the second hidden layer units, measured as the mean correlation between the incoming weights of the hidden units. Results in (a-e) are averages over 16 independent runs; error bars are small, hence not shown for clarity. In (f), error bars represent standard errors.

Figure S7: Replication of the results reported in Figure 4 for (a) the Street View House Numbers (SVHN) dataset and (b) the STL-10 dataset.
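For completeness, the unit-level statistics reported in Figures 4, S2 and S6 (mean norm and mean overlap of the hidden units in a layer) can be computed from that layer's weight matrix as follows (a sketch under assumed conventions, with one incoming weight vector per row):

```python
import numpy as np

def mean_norm_and_overlap(W):
    """Mean incoming-weight norm and mean pairwise correlation of the units in W."""
    norms = np.linalg.norm(W, axis=1)                 # one incoming vector per row
    C = np.corrcoef(W)                                # correlations between unit vectors
    overlap = C[~np.eye(len(W), dtype=bool)].mean()   # average off-diagonal entry
    return norms.mean(), overlap

W = np.random.default_rng(0).standard_normal((16, 16)) * 0.25
print(mean_norm_and_overlap(W))
```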
1. What are the main contributions of the paper regarding deep neural networks?
2. What are the key findings of the authors regarding singularities in deep neural networks?
3. How do skip connections help in reducing singularities and improving learning?
4. Are there any suggestions for future work related to tailoring skip connection matrices for better performance?
5. Can the authors provide procedures to create highly trainable networks using their proposed method?
Review
The authors show that two types of singularities impede learning in deep neural networks: elimination singularities (where a unit is effectively shut off by a loss of input or output weights, or by an overly strong negative bias), and overlap singularities, where two or more units have very similar input or output weights. They then demonstrate that skip connections can reduce the prevalence of these singularities, and thus speed up learning. The analysis is thorough: the authors explore alternative methods of reducing the singularities, explore the skip connection properties that more strongly reduce the singularities, and make observations consistent with their overarching claims. I have no major criticisms. One suggestion for future work would be to provide a procedure for users to tailor their skip connection matrices to maximize learning speed and efficacy. The authors could then use this procedure to make highly trainable networks, and show that on test (not training) data, the resultant network leads to high performance.
ICLR
Title
Skip Connections Eliminate Singularities

Abstract
Skip connections made the training of very deep networks possible and have become an indispensable component in a variety of neural architectures. A completely satisfactory explanation for their success remains elusive. Here, we present a novel explanation for the benefits of skip connections in training very deep networks. The difficulty of training deep networks is partly due to the singularities caused by the non-identifiability of the model. Several such singularities have been identified in previous works: (i) overlap singularities caused by the permutation symmetry of nodes in a given layer, (ii) elimination singularities corresponding to the elimination, i.e. consistent deactivation, of nodes, (iii) singularities generated by the linear dependence of the nodes. These singularities cause degenerate manifolds in the loss landscape that slow down learning. We argue that skip connections eliminate these singularities by breaking the permutation symmetry of nodes, by reducing the possibility of node elimination and by making the nodes less linearly dependent. Moreover, for typical initializations, skip connections move the network away from the "ghosts" of these singularities and sculpt the landscape around them to alleviate the learning slow-down. These hypotheses are supported by evidence from simplified models, as well as from experiments with deep networks trained on real-world datasets.

1 INTRODUCTION
Skip connections are extra connections between nodes in different layers of a neural network that skip one or more layers of nonlinear processing. The introduction of skip (or residual) connections has substantially improved the training of very deep neural networks (He et al., 2015; 2016; Huang et al., 2016; Srivastava et al., 2015). Despite informal intuitions put forward to motivate skip connections, a clear understanding of how these connections improve training has been lacking. Such understanding is invaluable both in its own right and for the possibilities it might offer for further improvements in training very deep neural networks. In this paper, we attempt to shed light on this question. We argue that skip connections improve the training of deep networks partly by eliminating the singularities inherent in the loss landscapes of deep networks. These singularities are caused by the non-identifiability of subsets of parameters when nodes in the network either get eliminated (elimination singularities), collapse into each other (overlap singularities) (Wei et al., 2008), or become linearly dependent (linear dependence singularities). Saad & Solla (1995); Amari et al. (2006); Wei et al. (2008) identified the elimination and overlap singularities and showed that they significantly slow down learning in shallow networks; Saxe et al. (2013) showed that linear dependence between nodes arises generically in randomly initialized deep linear networks and becomes more severe with depth. We show that skip connections eliminate these singularities and provide evidence suggesting that they improve training partly by ameliorating the learning slow-down caused by the singularities.

2 RESULTS
2.1 SINGULARITIES IN FULLY-CONNECTED LAYERS AND HOW SKIP CONNECTIONS BREAK THEM
In this work, we focus on three types of singularity that arise in fully-connected layers: elimination and overlap singularities (Amari et al., 2006; Wei et al., 2008), and linear dependence singularities (Saxe et al., 2013).
Figure 1: Singularities in a fully connected layer and how skip connections break them. (a) In elimination singularities, zero incoming weights, $J = 0$, eliminate units and make outgoing weights, $w$, non-identifiable (red). Skip connections (blue) ensure units are active at least sometimes, so the outgoing weights are identifiable (green). The reverse holds for zero outgoing weights, $w = 0$: skip connections recover identifiability for $J$. (b) In overlap singularities, overlapping incoming weights, $J_a = J_b$, make outgoing weights non-identifiable; skip connections again break the degeneracy. (c) In linear dependence singularities, a subset of the hidden units become linearly dependent, making their outgoing weights non-identifiable; skip connections break the linear dependence.

The linear dependence singularities can arise exactly only in linear networks, whereas the elimination and overlap singularities can arise in non-linear networks as well. These singularities are all related to the non-identifiability of the model. The Hessian of the loss function becomes singular at these singularities (Supplementary Note 1), hence they are sometimes also called degenerate or higher-order saddles (Anandkumar & Ge, 2016). Elimination singularities arise when a hidden unit is effectively killed, e.g. when its incoming (or outgoing) weights become zero (Figure 1a). This makes the outgoing (or incoming) connections of the unit non-identifiable. Overlap singularities are caused by the permutation symmetry of the hidden units at a given layer, and they arise when two units become identical, e.g. when their incoming weights become identical (Figure 1b). In this case, the outgoing connections of the units are no longer identifiable individually (only their sum is identifiable). Linear dependence singularities arise when a subset of the hidden units in a layer become linearly dependent (Figure 1c). Again, the outgoing connections of these units are no longer identifiable individually (only a linear combination of them is identifiable).

How do skip connections eliminate these singularities? Skip connections between adjacent layers break the elimination singularities by ensuring that the units are active at least for some inputs, even when their adjustable incoming or outgoing connections become zero (Figure 1a, right). They eliminate the overlap singularities by breaking the permutation symmetry of the hidden units at a given layer (Figure 1b, right). Thus, even when the adjustable incoming weights of two units become identical, the units do not collapse into each other, since their distinct skip connections still disambiguate them. They also eliminate the linear dependence singularities by adding linearly independent (in fact, orthogonal in most cases) inputs to the units (Figure 1c, right).

2.2 WHY ARE SINGULARITIES HARMFUL FOR LEARNING?
The effect of elimination and overlap singularities on gradient-based learning has been analyzed previously for shallow networks (Amari et al., 2006; Wei et al., 2008). Figure 2a shows the simplified two-hidden-unit model analyzed in Wei et al. (2008) and its reduction to a two-dimensional system in terms of the overlap and elimination variables, $h$ and $z$.
Both types of singularity cause degenerate manifolds in the loss landscape, represented by the lines $h = 0$ and $z = \pm 1$ in Figure 2b, corresponding to the overlap and elimination singularities respectively. The elimination manifolds divide the overlap manifolds into stable and unstable segments. According to the analysis presented in Wei et al. (2008), these manifolds give rise to two types of plateaus in the learning dynamics: on-singularity plateaus, which are caused by the random walk behavior of stochastic gradient descent (SGD) along a stable segment of the overlap manifolds (thick segment on the $h = 0$ line in Figure 2b) until it escapes the stable segment, and (more relevant in practical cases) near-singularity plateaus, which manifest themselves as a general slowing of the dynamics near the overlap manifolds, even when the initial location is not within the basin of attraction of the stable segment. Although this analysis only holds for two hidden units, for higher dimensional cases it suggests that overlaps between hidden units significantly slow down learning along the overlap directions. These overlap directions become more numerous as the number of hidden units increases, thus reducing the effective dimensionality of the model. We provide empirical evidence for this claim below.

As mentioned earlier, linear dependence singularities arise exactly only in linear networks. However, we expect them to hold approximately, and thus have consequences for learning, in the non-linear case as well. Figure 2d-e shows an example in a toy single-layer nonlinear network: learning along a linear dependence manifold, represented by $m$ here, is much slower than learning along other directions, e.g. the norm of the incoming weight vector $J_c$ in the example shown here. Saxe et al. (2013) demonstrated that this linear dependence problem arises generically, and becomes worse with depth, in randomly initialized deep linear networks. Because learning is significantly slowed down along linear dependence directions compared to other directions, these singularities effectively reduce the dimensionality of the model, similarly to the overlap manifolds.

2.3 PLAIN NETWORKS ARE MORE DEGENERATE THAN NETWORKS WITH SKIP CONNECTIONS
To investigate the relationship between degeneracy, training difficulty and skip connections in deep networks, we conducted several experiments with deep fully-connected networks. We compared three different architectures. (i) The plain architecture is a fully-connected feedforward network with no skip connections, described by the equation:

$$x_{l+1} = f(W_l x_l + b_{l+1}), \quad l = 0, \ldots, L-1$$

where $f$ is the ReLU nonlinearity and $x_0$ denotes the input layer. (ii) The residual architecture introduces identity skip connections between adjacent layers (note that we do not allow skip connections from the input layer):

$$x_1 = f(W_0 x_0 + b_1), \qquad x_{l+1} = f(W_l x_l + b_{l+1}) + x_l, \quad l = 1, \ldots, L-1$$

(iii) The hyper-residual architecture adds skip connections between each layer and all layers above it:

$$x_1 = f(W_0 x_0 + b_1), \quad x_2 = f(W_1 x_1 + b_2) + x_1, \quad x_{l+1} = f(W_l x_l + b_{l+1}) + x_l + \frac{1}{l-1}\sum_{k=1}^{l-1} Q_k x_k, \quad l = 2, \ldots, L-1$$

The skip connectivity from the immediately preceding layer is always the identity matrix, whereas the remaining skip connections $Q_k$ are fixed, but allowed to be different from the identity (see Supplementary Note 2 for further details; a forward-pass sketch of the three architectures is given below).
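The following NumPy sketch summarizes the three forward passes (our illustration with assumed shapes, not the authors' implementation; biases and the softmax layer are omitted for brevity):

```python
import numpy as np

relu = lambda x: np.maximum(x, 0)

def forward(x, W, mode="plain", Q=None):
    """W: list of weight matrices; Q: fixed skip matrices for the hyper-residual case."""
    xs = [relu(W[0] @ x)]                              # no skip connections from the input
    for l in range(1, len(W)):
        h = relu(W[l] @ xs[-1])
        if mode == "residual":
            h = h + xs[-1]
        elif mode == "hyper":
            skip = sum(Q[k] @ xs[k] for k in range(l - 1))   # x_1 .. x_{l-1}
            h = h + xs[-1] + (skip / (l - 1) if l > 1 else 0)
        xs.append(h)
    return xs[-1]

rng = np.random.default_rng(0)
W = [rng.standard_normal((128, 128)) * np.sqrt(1 / 128) for _ in range(20)]  # Glorot-like scale
Q = [np.eye(128) for _ in range(20)]
y = forward(rng.standard_normal(128), W, mode="hyper", Q=Q)
```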
This architecture is inspired by the DenseNet architecture (Huang et al., 2016); in both architectures, each layer projects skip connections to layers above it. However, in the DenseNet architecture, the skip connectivity matrices are learned, whereas in the hyper-residual architecture considered here, they are fixed.

In the experiments of this subsection, the networks all had $L = 20$ hidden layers (followed by a softmax layer at the top) and $n = 128$ hidden units (ReLU) in each hidden layer. Hence, the networks had the same total number of parameters. The biases were initialized to 0 and the weights were initialized with the Glorot normal initialization scheme (Glorot & Bengio, 2010). The networks were trained on the CIFAR-100 dataset (with coarse labels) using the Adam optimizer (Kingma & Ba, 2014) with learning rate 0.0005 and a batch size of 500. Because we are mainly interested in understanding how singularities, and their removal, change the shape of the loss landscape and consequently affect the optimization difficulty, we primarily monitor the training accuracy rather than the test accuracy in the results reported below.

To measure degeneracy, we estimated the eigenvalue density of the Hessian during training for the three different network architectures. The probability of small eigenvalues in the eigenvalue density reflects the dimensionality of the degenerate parameter space. To estimate this eigenvalue density in our ~1M-dimensional parameter spaces, we first estimated the first four moments of the spectral density using the method of Skilling (Skilling, 1989) and fit the estimated moments with a flexible mixture density model (see Supplementary Note 3 for details) consisting of a narrow Gaussian component to capture the bulk of the spectral density, and a skew-Gaussian density to capture the tails (see Figure 3d for example fits). From the fitted mixture density, we estimated the fraction of degenerate eigenvalues and the fraction of negative eigenvalues during training. We validated our main results, as well as our mixture model for the spectral density, with smaller networks with ~14K parameters, where we could calculate all eigenvalues of the Hessian numerically (Supplementary Note 4). For these smaller networks, the mixture model slightly underestimated the fraction of degenerate eigenvalues and overestimated the fraction of negative eigenvalues; however, there was a highly significant linear relationship between the actual and estimated fractions.

Figure 3b shows the evolution of the fraction of degenerate eigenvalues during training. A large value at a particular point during optimization indicates a more degenerate model. By this measure, the hyper-residual architecture is the least degenerate and the plain architecture is the most degenerate. We observe the opposite pattern for the fraction of negative eigenvalues (Figure 3c). The differences between the architectures are more prominent early on in training, and there is an indication of a crossover later during training, with less degenerate models early on becoming slightly more degenerate later on as the training performance starts to saturate (Figure 3b). Importantly, the hyper-residual architecture has the highest training speed and the plain architecture has the lowest training speed (Figure 3a), consistent with our hypothesis that the degeneracy of a model increases the training difficulty and skip connections reduce the degeneracy.
2.4 TRAINING ACCURACY IS RELATED TO DISTANCE FROM DEGENERATE MANIFOLDS
To establish a more direct relationship between the elimination, overlap and linear dependence singularities discussed earlier on the one hand, and model degeneracy and training difficulty on the other, we exploited the natural variability in training the same model caused by the stochasticity of stochastic gradient descent (SGD) and random initialization. Specifically, we trained 100 plain networks (30 hidden layers, 128 neurons per layer) on CIFAR-100 using different random initializations and random mini-batch selection. Training performance varied widely across runs. We compared the best 10 and the worst 10 runs (measured by mean accuracy over 100 training epochs, Figure 4a). The worst networks were more degenerate (Figure 4b); they were significantly closer to elimination singularities, as measured by the average l2-norm of the incoming weights of their hidden units (Figure 4c); they were significantly closer to overlap singularities (Figure 4d), as measured by the mean correlation between the incoming weights of their hidden units; and their hidden units were significantly more linearly dependent (Figure 4e), as measured by the mean variance explained by the top three eigenmodes of the covariance matrices of the hidden units in the same layer.

2.5 BENEFITS OF SKIP CONNECTIONS AREN'T EXPLAINED BY GOOD INITIALIZATION ALONE
To investigate if the benefits of skip connections can be explained in terms of favorable initialization of the parameters, we introduced a malicious initialization scheme for the residual network by subtracting the identity matrix from the initial weight matrices, $W_l$. If the benefits of skip connections can be explained primarily by favorable initialization, this malicious initialization would be expected to cancel the effects of skip connections at initialization and hence significantly deteriorate the performance. However, the malicious initialization only had a small adverse effect on the performance of the residual network (Figure 5; ResMalInit), suggesting that the benefits of skip connections cannot be explained by favorable initialization alone. This result reveals a fundamental weakness in previous explanations of the benefits of skip connections based purely on linear models (Hardt & Ma, 2016; Li et al., 2016). In Supplementary Note 5 we show that skip connections do not eliminate the singularities in deep linear networks, but only shift the landscape so that typical initializations are farther from the singularities. Thus, in linear networks, any benefits of skip connections are due entirely to better initialization. In contrast, skip connections genuinely eliminate the singularities in nonlinear networks (Supplementary Note 1). The fact that malicious initialization of the residual network does reduce its performance suggests that "ghosts" of these singularities still exist in the loss landscape of nonlinear networks, but the performance reduction is only slight, suggesting that skip connections alter the landscape around these ghosts to alleviate the learning slow-down that would otherwise take place near them.

2.6 ALTERNATIVE WAYS OF ELIMINATING THE SINGULARITIES
If the success of skip connections can be attributed, at least partly, to eliminating singularities, then alternative ways of eliminating them should also improve training.
We tested this hypothesis by introducing a particularly simple way of eliminating singularities: for each layer, we drew random target biases from a Gaussian distribution, $\mathcal{N}(\mu, \sigma)$, and put an l2-norm penalty on learned biases deviating from those targets. This breaks the permutation symmetry between units and eliminates the overlap singularities. In addition, positive $\mu$ values decrease the average threshold of the units and make the elimination of units less likely (but not impossible), hence reducing the elimination singularities. Decreased thresholds can also increase the dimensionality of the responses in a given layer by reducing the fraction of times different units are identically zero, thereby making them less linearly dependent. Note that setting $\mu = 0$ and $\sigma = 0$ corresponds to the standard l2-norm regularization of the biases, which does not eliminate any of the overlap or elimination singularities. Hence, we expect the performance to be worse in this case than in cases with properly eliminated singularities. On the other hand, although in general larger values of $\mu$ and $\sigma$ correspond to greater elimination of singularities, the network also has to perform well in the classification task, and very large $\mu, \sigma$ values might be inconsistent with the latter requirement. Therefore, we expect the performance to be optimal for intermediate values of $\mu$ and $\sigma$. In the experiments reported below, we optimized the hyperparameters $\mu$, $\sigma$, and $\lambda$, i.e. the mean and the standard deviation of the target bias distribution and the strength of the bias regularization term, through random search (Bergstra & Bengio, 2012).

We trained 30-layer fully-connected feedforward networks on the CIFAR-10 and CIFAR-100 datasets. Figure 5a-b shows the training accuracy of different models on the two datasets. For both datasets, among the models shown in Figure 5, the residual network performs the best and the plain network the worst. Our simple singularity elimination through bias regularization scheme (BiasReg, cyan) significantly improves performance over the plain network. Importantly, the standard l2-norm regularization on the biases (BiasL2Reg ($\mu = 0$, $\sigma = 0$), magenta) does not improve performance over the plain network. These results are consistent with the singularity elimination hypothesis.
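In code, the BiasReg penalty amounts to the following (a sketch of the assumed form of the regularizer, not the authors' implementation; `lam` and the target distribution parameters are the hyperparameters tuned by random search, e.g. $\mu = 0.91$, $\sigma = 0.03$ for CIFAR-100):

```python
import numpy as np

def bias_reg_penalty(biases, targets, lam):
    """l2 penalty pulling each layer's learned biases toward its fixed random targets."""
    return lam * sum(np.sum((b - c) ** 2) for b, c in zip(biases, targets))

rng = np.random.default_rng(0)
targets = [rng.normal(0.91, 0.03, 128) for _ in range(30)]  # drawn once, then held fixed
biases = [np.zeros(128) for _ in range(30)]                 # the learned parameters
print(bias_reg_penalty(biases, targets, lam=1e-3))
```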
There is still a significant performance gap between our BiasReg network and the residual network, despite the fact that both break degeneracies. This can be partly attributed to the fact that the residual network breaks the degeneracies more effectively than the BiasReg network (Figure 5c). Secondly, even in models that completely eliminate the singularities, the learning speed would still depend on the behavior of the gradient norms, and the residual network fares better than the BiasReg network in this respect as well. At the beginning of training, the gradient norms with respect to the layer activities do not diminish in earlier layers of the residual network (Figure 6a, Epoch 0), demonstrating that it effectively solves the vanishing gradients problem (Hochreiter, 1991; Bengio et al., 1994). On the other hand, both in the plain network and in the BiasReg network, the gradient norms decay quickly as one descends from the top of the network. Moreover, as training progresses (Figure 6a, Epochs 1 and 2), the gradient norms are larger for the residual network than for the plain or the BiasReg network. Even for the maliciously initialized residual network, gradients do not decay quickly at the beginning of training, and the gradient norms behave similarly to those of the residual network during training (Figure 6a; ResMalInit), suggesting that skip connections boost the gradient norms near the ghosts of the singularities and reduce the learning slow-down that would otherwise take place near them. Adding a single batch normalization layer (Ioffe & Szegedy, 2015) in the middle of the BiasReg network alleviates the vanishing gradients problem for this network and brings its performance closer to that of the residual network (Figure 6a-b; BiasReg+BN).

2.7 NON-IDENTITY SKIP CONNECTIONS
If the singularity elimination hypothesis is correct, there should be nothing special about identity skip connections. Skip connections other than identity should lead to training improvements if they eliminate singularities. For the permutation symmetry breaking of the hidden units, ideally the skip connection vector for each unit should disambiguate that unit maximally from all other units in that layer. This is because, as shown by the analysis in Wei et al. (2008) (Figure 2), even partial overlaps between hidden units significantly slow down learning (near-singularity plateaus). Mathematically, the maximal disambiguation requirement corresponds to an orthogonality condition on the skip connectivity matrix (any full-rank matrix breaks the permutation symmetry, but only orthogonal matrices maximally disambiguate the units). Adding orthogonal vectors to different hidden units is also useful for breaking potential (exact or approximate) linear dependencies between them. We therefore tested random dense orthogonal matrices as skip connectivity matrices.

Random dense orthogonal matrices performed slightly better than identity skip connections in both the CIFAR-10 and CIFAR-100 datasets (Figure 7a, black vs. blue). This is because, even with skip connections, units can be deactivated for some inputs because of the ReLU nonlinearity (recall that we do not allow skip connections from the input layer). When this happens to a single unit at layer $l$, that unit is effectively eliminated for that subset of inputs, hence eliminating the skip connection to the corresponding unit at layer $l+1$ if the skip connectivity is the identity. This causes a potential elimination singularity for that particular unit. With dense skip connections, however, this possibility is reduced, since all units in the previous layer are used. Moreover, when two distinct units at layer $l$ are deactivated together, the identity skips cannot disambiguate the corresponding units at the next layer, causing a potential overlap singularity. On the other hand, with dense orthogonal skips, because all units at layer $l$ are used, even if some of them are deactivated, the units at layer $l+1$ can still be disambiguated with the remaining active units. Figure 7b confirms for the CIFAR-100 dataset that, throughout most of the training, the hidden units of the network with dense orthogonal skip connections have a lower probability of zero responses than those of the network with identity skip connections.
Next, we gradually decreased the degree of "orthogonality" of the skip connectivity matrix to see how the orthogonality of the matrix affects performance. Starting from a random dense orthogonal matrix, we first divided the matrix into two halves and copied the first half onto the second half. Starting from $n$ orthonormal vectors, this reduces the number of distinct orthonormal vectors to $n/2$. We continued in this way until the columns of the matrix were repeats of a single unit vector. We predict that as the number of orthonormal vectors in the skip connectivity matrix is decreased, the performance should deteriorate, because both the permutation-symmetry-breaking capacity and the linear-dependence-breaking capacity of the skip connectivity matrix are reduced. Figure 7 shows the results for $n = 128$ hidden units. Darker colors correspond to "more orthogonal" matrices (e.g. "128" means all 128 skip vectors are orthonormal to each other, "1" means all 128 vectors are identical). The blue line is the identity skip connectivity. More orthogonal skip connectivity matrices yield better performance, consistent with our hypothesis.

The less orthogonal skip matrices also suffer from the vanishing gradients problem, so their failure could be partly attributed to the vanishing gradients problem. To control for this effect, we also designed skip connectivity matrices with eigenvalues on the unit circle (hence with eigenvalue spectra equivalent to an orthogonal matrix), but with varying degrees of orthogonality (see Supplementary Note 6 for details). More specifically, the columns (or rows) of an orthogonal matrix are orthonormal to each other, hence the covariance matrix of these vectors is the identity matrix. We designed matrices where this covariance matrix was allowed to have non-zero off-diagonal values, reflecting the fact that the vectors are no longer orthogonal. By controlling the magnitude of the correlations between the vectors, we manipulated the degree of orthogonality of the vectors. We achieved this by setting the eigenvalue spectrum of the covariance matrix to be given by $\lambda_i = \exp(-\tau(i-1))$, where $\lambda_i$ denotes the $i$-th eigenvalue of the covariance matrix and $\tau$ is the parameter that controls the degree of orthogonality: $\tau = 0$ corresponds to the identity covariance matrix, hence to an orthonormal set of vectors, whereas larger values of $\tau$ correspond to gradually more correlated vectors. This orthogonality manipulation was done while fixing the eigenvalue spectrum of the skip connectivity matrix to be on the unit circle. Hence, the effects of this manipulation cannot be attributed to any change in the eigenvalue spectrum, but only to the degree of orthogonality of the skip vectors.

The results of this experiment are shown in Figure 8. More orthogonal skip connectivity matrices still perform better than less orthogonal ones (Figure 8c-d), even when their eigenvalue spectrum is fixed and the vanishing gradients problem does not arise (Figure 8b), suggesting that the results of the earlier experiment (Figure 7) cannot be explained solely by the vanishing gradients problem.
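The family of skip matrices used in Figure 7 can be sketched as follows (a hypothetical helper; successively halving the set of distinct columns is equivalent to tiling the first `n_distinct` orthonormal columns):

```python
import numpy as np

def degraded_orthogonal(n, n_distinct, seed=0):
    """n x n matrix whose columns are copies of n_distinct orthonormal vectors."""
    M = np.linalg.qr(np.random.default_rng(seed).standard_normal((n, n)))[0]
    return np.tile(M[:, :n_distinct], (1, n // n_distinct))

for n_distinct in (128, 32, 1):                      # the "128", "32" and "1" matrices
    print(n_distinct, np.linalg.matrix_rank(degraded_orthogonal(128, n_distinct)))
```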
Indeed, we presented evidence suggesting that skip connections are also quite effective at dealing with the problem of vanishing gradients and not every form of singularity elimination can be expected to be equally good at dealing with such additional problems that beset the training of deep networks. Alternative explanations: Several of our experiments rule out vanishing gradients as the sole explanation for training difficulties in deep networks and strongly suggest an independent role for the singularities arising from the non-identifiability of the model. (i) In Figure 4, all nets have the exact same plain architecture and similarly vanishing gradients at the beginning of training, yet they have diverging performances correlated with measures of distance from singular manifolds. (ii) Vanishing gradients cannot explain the difference between identity skips and dense orthogonal skips in Figure 7, because both eliminate vanishing gradients, yet dense orthogonal skips perform better. (iii) In Figure 8, spectrum-equalized non-orthogonal skips often have larger gradient norms, yet worse performance than orthogonal skips. (iv) Vanishing gradients cannot even explain the BiasReg results in Figure 5. The BiasReg and the plain net have almost identical (and vanishing) gradients early on in training (Figure 6a), yet the former has better performance as predicted by the symmetry-breaking hypothesis. (v) Similar results hold for two-layer shallow networks where the problem of vanishing gradients does not arise (Supplementary Note 7). In particular, shallow residual nets are less degenerate and have better accuracy than shallow plain nets; moreover, gradient norms and accuracy are strongly correlated with distance from the overlap manifolds in these shallow nets. Our malicious initialization experiment with residual nets (Figure 5) suggests that the benefits of skip connections cannot be explained solely in terms of well-conditioning or improved initialization either. This result reveals a fundamental weakness in purely linear explanations of the benefits of skip connections (Hardt & Ma, 2016; Li et al., 2016). Unlike in nonlinear nets, improved initialization entirely explains the benefits of skip connections in linear nets (Supplementary Note 5). A recent paper (Balduzzi et al., 2017) suggested that the loss of spatial structure in the covariance of the gradients, a phenomenon called “shattered gradients”, could be partly responsible for training difficulties in deep nonlinear networks. They argued that skip connections alleviate this problem by essentially making the model “more linear”. It is easy to see that the shattered gradients problem is distinct from both the vanishing/exploding gradients problem and the degeneracy problems considered in this paper, since shattered gradients arise only in sufficiently non-linear deep networks (linear networks do not shatter gradients), whereas vanishing/exploding gradients, as well as the degeneracies considered here, arise in linear networks too. The relative contribution of each of these distinct problems to training difficulties in deep networks remains to be determined. Symmetry-breaking in other architectures: We only reported results from experiments with fullyconnected networks, but we note that limited receptive field sizes and weight sharing between units in a single feature channel in convolutional neural networks also reduce the permutation symmetry in a given layer. 
The symmetry is not entirely eliminated since although individual units do not have permutation symmetry in this case, feature channels do, but they are far fewer in number than the number of individual units. Similarly, a recent extension of the residual architecture called ResNeXt (Xie et al., 2016) uses parallel, segregated processing streams inside the “bottleneck” blocks, which can again be seen as a way of reducing the permutation symmetry inside the block. Our method of singularity reduction through bias regularization (BiasReg; Figure 5) can be thought of as indirectly putting a prior over the unit activities. More complicated joint priors over hidden unit responses that favor decorrelated (Cogswell et al., 2015) or clustered (Liao et al., 2016) responses have been proposed before. Although the primary motivation for these regularization schemes was to improve the generalizability or interpretability of the learned representations, they can potentially be understood from a singularity elimination perspective as well. For example, a prior that favors decorrelated responses can facilitate the breaking of permutation symmetries and linear dependencies between hidden units. Our results lead to an apparent paradox: over-parametrization and redundancy in large neural network models have been argued to make optimization easier. Yet, our results seem to suggest the opposite. However, there is no contradiction here. Any apparent contradiction is due to potential ambiguities in the meanings of the terms “over-parametrization” and “redundancy”. The intuition behind the benefits of over-parametrization for optimization is an increase in the effective capacity of the model: over-parametrization in this sense leads to a large number of approximately equally good ways of fitting the training data. On the other hand, the degeneracies considered in this paper reduce the effective capacity of the model, leading to optimization difficulties. Our results suggest that it could be useful for neural network researchers to pay closer attention to the degeneracies inherent in their models. For better optimization, as a general design principle, we recommend reducing such degeneracies in a model as much as possible. Once the training performance starts to saturate, however, degeneracies may help the model achieve a better generalization performance. Exploring this trade-off between the harmful and beneficial effects of degeneracies is an interesting direction for future work. Acknowledgments: AEO and XP were supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government. SUPPLEMENTARY MATERIALS SUPPLEMENTARY NOTE 1: SINGULARITY OF THE HESSIAN IN NON-LINEAR MULTILAYER NETWORKS Because the cost function can be expressed as a sum over training examples, it is enough to consider the cost for a single example: E = 12 ||y − xL|| 2 ≡ 12e >e, where xl are defined recursively as xl = f(Wl−1xl−1) for l = 1, . . . , L. We denote the inputs to units at layer l by the vector hl: hl = Wl−1xl−1. We ignore the biases for simplicity. 
The derivative of the cost function with respect to a single weight $W_{l,ij}$ between layers $l$ and $l+1$ is given by:

$$\frac{\partial E}{\partial W_{l,ij}} = -\begin{bmatrix} 0 & \cdots & f'(h_{l+1,i})\, x_{l,j} & \cdots & 0 \end{bmatrix} W_{l+1}^\top \mathrm{diag}(f'_{l+2})\, W_{l+2}^\top \mathrm{diag}(f'_{l+3}) \cdots W_{L-1}^\top \mathrm{diag}(f'_L)\, e \tag{1}$$

Now, consider a different connection between the same output unit $i$ at layer $l+1$ and a different input unit $j'$ at layer $l$. The crucial thing to note is that if the units $j$ and $j'$ have the same set of incoming weights, then the derivative of the cost function with respect to $W_{l,ij}$ becomes identical to its derivative with respect to $W_{l,ij'}$: $\partial E / \partial W_{l,ij} = \partial E / \partial W_{l,ij'}$. This is because in this condition $x_{l,j'} = x_{l,j}$ for all possible inputs, and all the remaining terms in Equation 1 are independent of the input index $j$. Thus, the columns (or rows) corresponding to the connections $W_{l,ij}$ and $W_{l,ij'}$ in the Hessian become identical, making the Hessian degenerate. This is a re-statement of the simple observation that when the units $j$ and $j'$ have the same set of incoming weights, the parameters $W_{l,ij}$ and $W_{l,ij'}$ become non-identifiable (only their sum is identifiable). Thus, this corresponds to an overlap singularity.

A similar argument shows that when a set of units at layer $l$, say units indexed $j$, $j'$, $j''$, become linearly dependent, the columns of the Hessian corresponding to the weights $W_{l,ij}$, $W_{l,ij'}$ and $W_{l,ij''}$ become linearly dependent as well, thereby making the Hessian singular. Again, this is just a re-statement of the fact that these weights are no longer individually identifiable in this case (only a linear combination of them is identifiable). This corresponds to a linear dependence singularity. In non-linear networks, except in certain degenerate cases where the units saturate together, they may never be exactly linearly dependent, but they can be approximately linearly dependent, which makes the Hessian close to singular.

Moreover, it is easy to see from Equation 1 that, when the presynaptic unit $x_{l,j}$ is always zero, i.e. when that unit is effectively killed, the column (or row) of the Hessian corresponding to the parameter $W_{l,ij}$ becomes the zero vector for any $i$, and thus the Hessian becomes singular. This is a re-statement of the simple observation that when the unit $x_{l,j}$ is always zero, its outgoing connections, $W_{l,ij}$, are no longer identifiable. This corresponds to an elimination singularity.

In the residual case, the only thing that changes in Equation 1 is that the factors $W_k^\top \mathrm{diag}(f'_{k+1})$ on the right-hand side become $W_k^\top \mathrm{diag}(f'_{k+1}) + I$, where $I$ is an identity matrix of the appropriate size. The overlap singularities are eliminated, because $x_{l,j'}$ and $x_{l,j}$ cannot be the same for all possible inputs in the residual case (even when the adjustable incoming weights of these units are identical). Similarly, elimination singularities are also eliminated, because $x_{l,j}$ cannot be identically zero for all possible inputs (even when the adjustable incoming weights of this unit are all zero), assuming that the corresponding unit at the previous layer $x_{l-1,j}$ is not always zero, which, in turn, is guaranteed with an identity skip connection if $x_{l-2,j}$ is not always zero, and so on, all the way down to the first hidden layer. Any linear dependence between $x_{l,j}$, $x_{l,j'}$ and $x_{l,j''}$ is also eliminated by adding linearly independent inputs to them, assuming again that the corresponding units in the previous layer are linearly independent.
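As a quick numerical illustration of the overlap argument (our sketch, not the paper's code: sizes and data are arbitrary, the readout is linear, and gradients are taken by central finite differences rather than backpropagation), duplicating a hidden unit's incoming weights makes the gradients with respect to the two units' outgoing weights coincide:

```python
import numpy as np

# Toy one-hidden-layer ReLU net with linear readout: E = 0.5 ||y - W1 f(W0 x)||^2.
# If rows 0 and 1 of W0 coincide (units 0 and 1 overlap), then dE/dW1[i, 0]
# equals dE/dW1[i, 1] for every output unit i, so the Hessian has identical
# columns and is degenerate.
rng = np.random.default_rng(0)
f = lambda h: np.maximum(h, 0.0)              # ReLU

W0 = rng.normal(size=(4, 5))
W0[1] = W0[0]                                 # overlapping incoming weights
W1 = rng.normal(size=(3, 4))
x, y = rng.normal(size=5), rng.normal(size=3)

def loss(W1):
    e = y - W1 @ f(W0 @ x)
    return 0.5 * e @ e

def grad_entry(i, j, eps=1e-6):               # central finite difference
    Wp, Wm = W1.copy(), W1.copy()
    Wp[i, j] += eps
    Wm[i, j] -= eps
    return (loss(Wp) - loss(Wm)) / (2 * eps)

for i in range(3):
    assert np.isclose(grad_entry(i, 0), grad_entry(i, 1))
print("dE/dW1[:, 0] and dE/dW1[:, 1] coincide when rows 0 and 1 of W0 do")
```

With distinct rows of W0 the assertion fails generically, which is exactly the identifiability that skip connections restore.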
SUPPLEMENTARY NOTE 2: SIMULATION DETAILS

In Figure 3, for the skip connections between non-adjacent layers in the hyper-residual networks, i.e. the $Q_k$, we used matrices of the type labeled "32" in Figure 7, i.e. matrices consisting of four copies of a set of 32 orthonormal vectors. We found that these matrices performed slightly better than orthogonal matrices. We augmented the training data in both CIFAR-10 and CIFAR-100 by adding reflected versions of each training image, i.e. their mirror images. This yields a total of 100000 training images for both datasets. The test data were not augmented, consisting of 10000 images in both cases. We used the standard splits of the data into training and test sets. For the BiasReg network of Figures 5-6, random hyperparameter search returned the following values for the target bias distributions: $\mu = 0.51$, $\sigma = 0.96$ for CIFAR-10 and $\mu = 0.91$, $\sigma = 0.03$ for CIFAR-100.

The toy model shown in Figure 2b-c consists of the simulation of Equations 3.7 and 3.9 in Wei et al. (2008). The toy model shown in Figure 2e is the simulation of learning dynamics in a network with 3 input, 3 hidden and 3 output units, parametrized in terms of the norms and unit-vector directions of $J_a - J_b - J_c$, $J_a + J_b - J_c$, $J_c$, and the output weights. A teacher model with random parameters is first chosen and a large set of "training data" is generated from the teacher model. Then the gradient flow fields with respect to the two parameters $m = \|J_a + J_b - J_c\|$ and $\|J_c\|$ are plotted with the assumption that the remaining parameters are already at their optima (a similar assumption was made in the analysis of Wei et al. (2008)). We empirically confirmed that the flow field is generic.

SUPPLEMENTARY NOTE 3: ESTIMATING THE EIGENVALUE SPECTRAL DENSITY OF THE HESSIAN IN DEEP NETWORKS

We use Skilling's moment matching method (Skilling, 1989) to estimate the eigenvalue spectra of the Hessian. We first estimate the first few non-central moments of the density by computing $m_k = \frac{1}{N} r^\top H^k r$, where $r$ is a random vector drawn from the standard multivariate Gaussian with zero mean and identity covariance, $H$ is the Hessian, and $N$ is the dimensionality of the parameter space. Because the standard multivariate Gaussian is rotationally symmetric and the Hessian is a symmetric matrix, it is easy to show that $m_k$ gives an unbiased estimate of the $k$-th moment of the spectral density:

$$m_k = \frac{1}{N} r^\top H^k r = \frac{1}{N} \sum_{i=1}^{N} \tilde{r}_i^2 \lambda_i^k \to \int p(\lambda)\, \lambda^k\, d\lambda \quad \text{as } N \to \infty \tag{2}$$

where the $\lambda_i$ are the eigenvalues of the Hessian and $p(\lambda)$ is the spectral density of the Hessian as $N \to \infty$. In Equation 2, we make use of the fact that the $\tilde{r}_i^2$ are random variables with expected value 1. Despite appearances, the products in $m_k$ do not require the computation of the Hessian explicitly and can instead be computed efficiently as follows:

$$v_0 = r, \qquad v_k = H v_{k-1}, \quad k = 1, \dots, K \tag{3}$$

where the Hessian-times-vector computation can be performed without computing the Hessian explicitly through Pearlmutter's R-operator (Pearlmutter, 1994). In terms of the vectors $v_k$, the estimates of the moments are given by the following:

$$m_{2k} = \frac{1}{N} v_k^\top v_k, \qquad m_{2k+1} = \frac{1}{N} v_k^\top v_{k+1} \tag{4}$$

For the results shown in Figure 3, we use 20-layer fully-connected feedforward networks and the number of parameters is $N = 709652$. For the remaining simulations, we use 30-layer fully-connected networks and the number of parameters is $N = 874772$. We estimate the first four moments of the Hessian and fit the estimated moments with a parametric density model.
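A minimal sketch of the moment recursion in Equations 3-4 (illustrative only: a small random symmetric matrix stands in for the Hessian so the estimates can be checked against exact eigenvalues; in a real network the products $Hv$ would come from Pearlmutter's R-operator):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
A = rng.normal(size=(N, N)) / np.sqrt(N)
H = (A + A.T) / 2.0                        # symmetric stand-in for the Hessian
hvp = lambda v: H @ v                      # Hessian-vector product oracle

K = 2                                      # yields moments m_1 ... m_4
v = [rng.normal(size=N)]                   # v_0 = r
for k in range(1, K + 1):
    v.append(hvp(v[-1]))                   # v_k = H v_{k-1}   (Equation 3)

m = {}
for k in range(K + 1):
    m[2 * k] = v[k] @ v[k] / N             # m_{2k}   = v_k^T v_k / N
    if k < K:
        m[2 * k + 1] = v[k] @ v[k + 1] / N # m_{2k+1} = v_k^T v_{k+1} / N

lam = np.linalg.eigvalsh(H)                # exact spectrum, for comparison
for k in range(1, 5):
    print(k, m[k], np.mean(lam ** k))      # single-vector estimate vs exact
```

A single random vector gives an unbiased but noisy estimate; in practice one can average over several draws of $r$.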
The parametric density model we use is a mixture of a narrow Gaussian distribution (to capture the bulk of the density) and a skew-normal distribution (to capture the tails):

$$q(\lambda) = w\, \mathcal{SN}(\lambda;\, \xi, \omega, \alpha) + (1 - w)\, \mathcal{N}(\lambda;\, 0, \sigma = 0.001) \tag{5}$$

with 4 parameters in total: the mixture weight $w$, and the location $\xi$, scale $\omega$ and shape $\alpha$ parameters of the skew-normal distribution. We fix the parameters of the Gaussian component to $\mu = 0$ and $\sigma = 0.001$. Since the densities are heavy-tailed, the moments are dominated by the tail behavior of the model, hence the fits are not very sensitive to the precise choice of the parameters of the Gaussian component. The moments of our model can be computed in closed form. We had difficulty fitting the parameters of the model with gradient-based methods, hence we used a simple grid search method instead. The ranges searched over for each parameter were as follows. $w$: logarithmically spaced between $10^{-9}$ and $10^{-3}$; $\alpha$: linearly spaced between $-50$ and $50$; $\xi$: linearly spaced between $-10$ and $10$; $\omega$: logarithmically spaced between $10^{-1}$ and $10^{3}$. 100 values were evaluated along each parameter dimension, for a total of $10^8$ parameter configurations. The estimated moments ranged over several orders of magnitude. To make sure that the optimization gave roughly equal weight to fitting each moment, we minimized a normalized objective function:

$$\mathcal{L}(w, \alpha, \xi, \omega) = \sum_{k=1}^{4} \frac{|\hat{m}_k(w, \alpha, \xi, \omega) - m_k|}{|m_k|} \tag{6}$$

where $\hat{m}_k(w, \alpha, \xi, \omega)$ is the model-derived estimate of the $k$-th moment.
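As an illustration of this fit, the sketch below matches the model moments of Equation 5 to hypothetical targets with the normalized objective of Equation 6, on a much coarser grid than the 100-points-per-dimension search described above; SciPy's generic `moment` method stands in for the closed-form moment expressions:

```python
import numpy as np
from scipy.stats import skewnorm, norm

# Hypothetical moment targets m_1..m_4; in the paper these come from the
# Skilling estimates of Equations 2-4.
m = np.array([0.01, 2.0, 0.5, 30.0])

def model_moments(w, alpha, xi, omega):
    # Non-central moments of the mixture q(lambda) in Equation 5.
    return np.array([w * skewnorm.moment(k, alpha, loc=xi, scale=omega)
                     + (1 - w) * norm.moment(k, loc=0.0, scale=0.001)
                     for k in range(1, 5)])

def objective(w, alpha, xi, omega):
    # Normalized moment-matching loss of Equation 6.
    return np.sum(np.abs(model_moments(w, alpha, xi, omega) - m) / np.abs(m))

best = min(((w, a, xi, om)
            for w in np.logspace(-9, -3, 5)
            for a in np.linspace(-50, 50, 5)
            for xi in np.linspace(-10, 10, 5)
            for om in np.logspace(-1, 3, 5)),
           key=lambda p: objective(*p))
print("best (w, alpha, xi, omega):", best)
```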
SUPPLEMENTARY NOTE 4: VALIDATION OF THE RESULTS WITH SMALLER NETWORKS

Here, we validate our main results for smaller, numerically tractable networks. The networks in this section are 10-layer fully-connected feedforward networks. The networks are trained on CIFAR-100. The input dimensionality is reduced from 3072 to 128 through PCA. In what follows, we calculate the fraction of degenerate eigenvalues by counting the number of eigenvalues inside a small window of size 0.2 around 0, and the fraction of negative eigenvalues by the number of eigenvalues to the left of this window.

We first compare residual networks with plain networks (Figure S1). The networks here have 16 hidden units in each layer, yielding a total of 4852 parameters. This is small enough that we can calculate all eigenvalues of the Hessian numerically. We observe that residual networks have better training and test performance (Figure S1a-b); they are less degenerate (Figure S1d) and have more negative eigenvalues than plain networks (Figure S1c). These results are consistent with the results reported in Figure 3 for deeper and larger networks.

Figure S1: Validation of the results with 10-layer plain and residual networks trained on CIFAR-100. (a-b) Training and test accuracy. (c-d) Fraction of negative and degenerate eigenvalues throughout training. The results are averages over 4 independent runs ±1 standard errors.

Next, we validate the results reported in Figure 4 by running 400 independent plain networks and comparing the best-performing 40 with the worst-performing 40 among them (Figure S2). Again, the networks here have 16 hidden units in each layer with a total of 4852 parameters. We observe that the best networks are less degenerate (Figure S2d) and have more negative eigenvalues than the worst networks (Figure S2c). Moreover, the hidden units of the best networks have less overlap (Figure S2f), and, at least initially during training, have slightly larger weight norms than the worst-performing networks (Figure S2e). Again, these results are all consistent with those reported in Figure 4 for deeper and larger networks.

Figure S2: Validation of the results with 400 10-layer plain networks with 16 hidden units in each layer (4852 parameters total) trained on CIFAR-100. We compare the best 40 networks with the worst 40 networks, as in Figure 4. (a-b) Training and test accuracy. (c-d) Fraction of negative and degenerate eigenvalues throughout training. Better performing networks are less degenerate and have more negative eigenvalues. (e) Mean norms of the incoming weight vectors of the hidden units. (f) Mean overlaps of the hidden units as measured by the mean correlation between their incoming weight vectors. The results are averages over the 40 best or worst runs ±1 standard errors.

Finally, using numerically tractable plain networks, we also tested whether we could reliably estimate the fractions of degenerate and negative eigenvalues with our mixture model. Just as we do for the larger networks, we first fit the mixture model to the first four moments of the spectral density estimated with the method of Skilling (1989). We then estimate the fraction of degenerate and negative eigenvalues from the fitted mixture model and compare these estimates with those obtained from the numerically calculated actual eigenvalues. Because the larger networks were found to be highly degenerate, we restrict the analysis here to conditions where the fraction of degenerate eigenvalues was at least 99.8%. We used 10-layer plain networks with 32 hidden units in each layer (with a total of 14292 parameters) for this analysis. We observe that, at least for these small networks, the mixture model usually underestimates the fraction of degenerate eigenvalues and overestimates the fraction of negative eigenvalues. However, there is a highly significant positive correlation between the actual and estimated fractions (Figure S3).

Figure S3: For 10-layer plain networks with 32 hidden units in each layer (14292 parameters total), estimates obtained from the mixture model slightly underestimate the fraction of degenerate eigenvalues, and overestimate the fraction of negative eigenvalues; however, there is a highly significant linear relationship between the actual values and the estimates. (a) Actual vs. estimated fraction of degenerate eigenvalues. (b) Actual vs. estimated fraction of negative eigenvalues for the same networks. Dashed line shows the identity line. Dots and errorbars represent means and standard errors of estimates in different bins; the solid lines and the shaded regions represent the linear regression fits and the 95% confidence intervals.
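A minimal sketch of this eigenvalue bookkeeping (the spectrum below is a synthetic stand-in; we read "window of size 0.2 around 0" as total width 0.2, i.e. $|\lambda| < 0.1$, which is an assumption):

```python
import numpy as np

# Fraction of "degenerate" eigenvalues inside a window of size 0.2 around
# zero, and of negative eigenvalues to the left of that window.
rng = np.random.default_rng(2)
eigvals = np.concatenate([rng.normal(0.0, 0.01, 9500),   # near-zero bulk
                          rng.normal(0.0, 5.0, 500)])    # heavy tails

half = 0.2 / 2                                           # window half-width
degenerate = np.mean(np.abs(eigvals) < half)
negative = np.mean(eigvals < -half)
print(f"degenerate: {100 * degenerate:.2f}%, negative: {100 * negative:.3f}%")
```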
SUPPLEMENTARY NOTE 5: DYNAMICS OF LEARNING IN LINEAR NETWORKS WITH SKIP CONNECTIONS

To get a better analytic understanding of the effects of skip connections on the learning dynamics, we turn to linear networks. In an $L$-layer linear plain network, the input-output mapping is given by (again ignoring the biases for simplicity):

$$x_L = W_{L-1} W_{L-2} \cdots W_1\, x_1 \tag{7}$$

where $x_1$ and $x_L$ are the input and output vectors, respectively. In linear residual networks with identity skip connections between adjacent layers, the input-output mapping becomes:

$$x_L = (W_{L-1} + I)(W_{L-2} + I) \cdots (W_1 + I)\, x_1 \tag{8}$$

Finally, in hyper-residual linear networks where all skip connection matrices are assumed to be the identity, the input-output mapping is given by:

$$x_L = \big(W_{L-1} + (L-1)I\big)\big(W_{L-2} + (L-2)I\big) \cdots \big(W_1 + I\big)\, x_1 \tag{9}$$

In the derivations to follow, we do not have to assume that the connectivity matrices are square matrices. If they are rectangular matrices, the identity matrix $I$ should be interpreted as a rectangular identity matrix of the appropriate size. This corresponds to zero-padding the layers when they are not the same size, as is usually done in practice.

Three-layer networks: Dynamics of learning in plain linear networks with no skip connections was analyzed in Saxe et al. (2013). For a three-layer network ($L = 3$), the learning dynamics can be expressed by the following differential equations (Saxe et al., 2013):

$$\tau \frac{d}{dt} a^\alpha = (s^\alpha - a^\alpha \cdot b^\alpha)\, b^\alpha - \sum_{\gamma \neq \alpha} (a^\alpha \cdot b^\gamma)\, b^\gamma \tag{10}$$

$$\tau \frac{d}{dt} b^\alpha = (s^\alpha - a^\alpha \cdot b^\alpha)\, a^\alpha - \sum_{\gamma \neq \alpha} (a^\gamma \cdot b^\alpha)\, a^\gamma \tag{11}$$

Here $a^\alpha$ and $b^\alpha$ are $n$-dimensional column vectors (where $n$ is the number of hidden units) connecting the hidden layer to the $\alpha$-th input and output modes, respectively, of the input-output correlation matrix, and $s^\alpha$ is the corresponding singular value (see Saxe et al. (2013) for further details). The first term on the right-hand side of Equations 10-11 facilitates cooperation between $a^\alpha$ and $b^\alpha$ corresponding to the same input-output mode $\alpha$, while the second term encourages competition between vectors corresponding to different modes. In the simplest scenario where there are only two input and output modes, the learning dynamics of Equations 10, 11 reduces to:

$$\frac{d}{dt} a^1 = (s^1 - a^1 \cdot b^1)\, b^1 - (a^1 \cdot b^2)\, b^2 \tag{12}$$

$$\frac{d}{dt} a^2 = (s^2 - a^2 \cdot b^2)\, b^2 - (a^2 \cdot b^1)\, b^1 \tag{13}$$

$$\frac{d}{dt} b^1 = (s^1 - a^1 \cdot b^1)\, a^1 - (a^1 \cdot b^2)\, a^2 \tag{14}$$

$$\frac{d}{dt} b^2 = (s^2 - a^2 \cdot b^2)\, a^2 - (a^2 \cdot b^1)\, a^1 \tag{15}$$

How does adding skip connections between adjacent layers change the learning dynamics? Considering again a three-layer network ($L = 3$) with only two input and output modes, a straightforward extension of Equations 12-15 shows that the learning dynamics changes as follows:

$$\frac{d}{dt} a^1 = \big[s^1 - (a^1 + v^1) \cdot (b^1 + u^1)\big](b^1 + u^1) - \big[(a^1 + v^1) \cdot (b^2 + u^2)\big](b^2 + u^2) \tag{16}$$

$$\frac{d}{dt} a^2 = \big[s^2 - (a^2 + v^2) \cdot (b^2 + u^2)\big](b^2 + u^2) - \big[(a^2 + v^2) \cdot (b^1 + u^1)\big](b^1 + u^1) \tag{17}$$

$$\frac{d}{dt} b^1 = \big[s^1 - (a^1 + v^1) \cdot (b^1 + u^1)\big](a^1 + v^1) - \big[(a^1 + v^1) \cdot (b^2 + u^2)\big](a^2 + v^2) \tag{18}$$

$$\frac{d}{dt} b^2 = \big[s^2 - (a^2 + v^2) \cdot (b^2 + u^2)\big](a^2 + v^2) - \big[(a^2 + v^2) \cdot (b^1 + u^1)\big](a^1 + v^1) \tag{19}$$

where $u^1$ and $u^2$ are orthonormal vectors (similarly for $v^1$ and $v^2$).
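The qualitative difference between the two systems can be reproduced with a few lines of forward-Euler integration (our sketch; the singular values, skip vectors and step size follow the Figure S4 settings described in the caption below, while the stopping rule and seed are arbitrary assumptions):

```python
import numpy as np

# Plain (Equations 12-15) vs. residual (Equations 16-19) two-mode dynamics.
s1, s2, dt, max_steps = 3.0, 1.5, 0.1, 2000
u1 = v1 = np.array([1.0, 1.0]) / np.sqrt(2.0)
u2 = v2 = np.array([1.0, -1.0]) / np.sqrt(2.0)

def steps_to_fit(residual, seed=3, tol=0.9):
    rng = np.random.default_rng(seed)
    a1, a2, b1, b2 = 1e-4 * rng.normal(size=(4, 2))   # small random init
    for t in range(max_steps):
        # Effective vectors: shifted by the fixed skip vectors if residual.
        A1, A2 = (a1 + v1, a2 + v2) if residual else (a1, a2)
        B1, B2 = (b1 + u1, b2 + u2) if residual else (b1, b2)
        if A1 @ B1 > tol * s1:                        # mode 1 nearly fit
            return t
        da1 = (s1 - A1 @ B1) * B1 - (A1 @ B2) * B2    # Eq 12 / 16
        da2 = (s2 - A2 @ B2) * B2 - (A2 @ B1) * B1    # Eq 13 / 17
        db1 = (s1 - A1 @ B1) * A1 - (A1 @ B2) * A2    # Eq 14 / 18
        db2 = (s2 - A2 @ B2) * A2 - (A2 @ B1) * A1    # Eq 15 / 19
        a1, a2 = a1 + dt * da1, a2 + dt * da2
        b1, b2 = b1 + dt * db1, b2 + dt * db2
    return max_steps

print("plain:   ", steps_to_fit(residual=False))  # typically many more steps
print("residual:", steps_to_fit(residual=True))   # escapes the plateau fast
```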
The derivation proceeds essentially identically to the corresponding derivation for plain networks in Saxe et al. (2013). The only differences are: (i) we substitute the plain weight matrices $W_l$ with their residual counterparts $W_l + I$, and (ii) when changing the basis from the canonical basis for the weight matrices $W_1$, $W_2$ to the input and output modes of the input-output correlation matrix, $U$ and $V$, we note that:

$$W_2 + I = U \overline{W}_2 + U U^\top = U(\overline{W}_2 + U^\top) \tag{20}$$

$$W_1 + I = \overline{W}_1 V^\top + V V^\top = (\overline{W}_1 + V)\, V^\top \tag{21}$$

where $U$ and $V$ are orthogonal matrices and the vectors $a^\alpha$, $b^\alpha$, $u^\alpha$ and $v^\alpha$ in Equations 16-19 correspond to the $\alpha$-th columns of the matrices $\overline{W}_1$, $\overline{W}_2^\top$, $U$ and $V$, respectively.

Figure S4 shows, for two different initializations, the evolution of the variables $a^1$ and $a^2$ in plain and residual networks with two input-output modes and two hidden units. When the variables are initialized to small random values, the dynamics in the plain network initially evolves slowly (Figure S4a, blue), whereas it is much faster in the residual network (Figure S4a, red). This effect is attributable to two factors. First, the added orthonormal vectors $u^\alpha$ and $v^\alpha$ increase the initial velocity of the variables in the residual network. Second, even when we equalize the initial norms of the vectors $a^\alpha$ and $a^\alpha + v^\alpha$ (and those of the vectors $b^\alpha$ and $b^\alpha + u^\alpha$) in the plain and the residual networks, respectively, we still observe an advantage for the residual network (Figure S4b), because the cooperative and competitive terms are orthogonal to each other in the residual network (or close to orthogonal, depending on the initialization of $a^\alpha$ and $b^\alpha$; see the right-hand sides of Equations 16-19), whereas in the plain network they are not necessarily orthogonal and hence can cancel each other (Equations 12-15), thus slowing down convergence.

Figure S4: Evolution of $a^1$ and $a^2$ in linear plain and residual networks (evolution of $b^1$ and $b^2$ proceeds similarly). The weights converge faster in residual networks. Simulation details are as follows: the number of hidden units is 2 (the two solid lines for each color represent the weights associated with the two hidden nodes, e.g. $a^1_1$ and $a^1_2$ on the left), and the singular values are $s^1 = 3.0$, $s^2 = 1.5$. For the residual network, $u^1 = v^1 = [1/\sqrt{2}, 1/\sqrt{2}]^\top$ and $u^2 = v^2 = [1/\sqrt{2}, -1/\sqrt{2}]^\top$. In (a), the weights of both plain and residual networks are initialized to random values drawn from a Gaussian with zero mean and standard deviation of 0.0001. The learning rate was set to 0.1. In (b), the weights of the plain network are initialized as follows: the vectors $a^1$ and $a^2$ are initialized to $[1/\sqrt{2}, 1/\sqrt{2}]^\top$ and the vectors $b^1$ and $b^2$ are initialized to $[1/\sqrt{2}, -1/\sqrt{2}]^\top$; the weights of the residual network are all initialized to zero, thus equalizing the initial norms of the vectors $a^\alpha$ and $a^\alpha + v^\alpha$ (and those of the vectors $b^\alpha$ and $b^\alpha + u^\alpha$) between the plain and residual networks. The residual network still converges faster than the plain network. In (b), the learning rate was set to 0.01 to make the different convergence rates of the two networks more visible.

Singularity of the Hessian in linear three-layer networks: The dynamics in Equations 10, 11 can be interpreted as gradient descent on the following energy function:

$$E = \frac{1}{2\tau} \sum_\alpha (s^\alpha - a^\alpha \cdot b^\alpha)^2 + \frac{1}{2\tau} \sum_{\alpha \neq \beta} (a^\alpha \cdot b^\beta)^2 \tag{22}$$

This energy function is invariant to a (simultaneous) permutation of the elements of the vectors $a^\alpha$ and $b^\alpha$ for all $\alpha$. This causes degenerate manifolds in the landscape.
Specifically, for the permutation symmetry of hidden units, these manifolds are the hyperplanes $a^\alpha_i = a^\alpha_j\ \forall \alpha$, for each pair of hidden units $i, j$ (similarly, the hyperplanes $b^\alpha_i = b^\alpha_j\ \forall \alpha$), which make the model non-identifiable. Formally, these correspond to the singularities of the Hessian or the Fisher information matrix. Indeed, we shall quickly check below that when $a^\alpha_i = a^\alpha_j\ \forall \alpha$ for any pair of hidden units $i, j$, the Hessian becomes singular (overlap singularities). The Hessian also has additional singularities at the hyperplanes $a^\alpha_i = 0\ \forall \alpha$ for any $i$ and at $b^\alpha_i = 0\ \forall \alpha$ for any $i$ (elimination singularities). Starting from the energy function in Equation 22 and taking the derivative with respect to a single input-to-hidden layer weight, $a^\alpha_i$:

$$\frac{\partial E}{\partial a^\alpha_i} = -(s^\alpha - a^\alpha \cdot b^\alpha)\, b^\alpha_i + \sum_{\beta \neq \alpha} (a^\alpha \cdot b^\beta)\, b^\beta_i \tag{23}$$

and the second derivatives are as follows:

$$\frac{\partial^2 E}{\partial (a^\alpha_i)^2} = (b^\alpha_i)^2 + \sum_{\beta \neq \alpha} (b^\beta_i)^2 = \sum_\beta (b^\beta_i)^2 \tag{24}$$

$$\frac{\partial^2 E}{\partial a^\alpha_i\, \partial a^\alpha_j} = b^\alpha_j b^\alpha_i + \sum_{\beta \neq \alpha} b^\beta_j b^\beta_i = \sum_\beta b^\beta_i b^\beta_j \tag{25}$$

Note that the second derivatives are independent of the mode index $\alpha$, reflecting the fact that the energy function is invariant to a permutation of the mode indices. Furthermore, when $b^\beta_i = b^\beta_j$ for all $\beta$, the columns in the Hessian corresponding to $a^\alpha_i$ and $a^\alpha_j$ become identical, causing an additional degeneracy reflecting the non-identifiability of $a^\alpha_i$ and $a^\alpha_j$. A similar derivation establishes that $a^\beta_i = a^\beta_j$ for all $\beta$ also leads to a degeneracy in the Hessian, this time reflecting the non-identifiability of $b^\alpha_i$ and $b^\alpha_j$. These correspond to the overlap singularities. In addition, it is easy to see from Equations 24, 25 that when $b^\alpha_i = 0\ \forall \alpha$, the right-hand sides of both equations become identically zero, reflecting the non-identifiability of $a^\alpha_i$ for all $\alpha$. A similar derivation shows that when $a^\alpha_i = 0\ \forall \alpha$, the columns of the Hessian corresponding to $b^\alpha_i$ become identically zero for all $\alpha$, this time reflecting the non-identifiability of $b^\alpha_i$ for all $\alpha$. These correspond to the elimination singularities.

When we add skip connections between adjacent layers, i.e. in the residual architecture, the energy function changes as follows:

$$E = \frac{1}{2} \sum_\alpha \big(s^\alpha - (a^\alpha + v^\alpha) \cdot (b^\alpha + u^\alpha)\big)^2 + \frac{1}{2} \sum_{\alpha \neq \beta} \big((a^\alpha + v^\alpha) \cdot (b^\beta + u^\beta)\big)^2 \tag{26}$$

and straightforward algebra yields the following second derivatives:

$$\frac{\partial^2 E}{\partial (a^\alpha_i)^2} = \sum_\beta (b^\beta_i + u^\beta_i)^2 \tag{27}$$

$$\frac{\partial^2 E}{\partial a^\alpha_i\, \partial a^\alpha_j} = \sum_\beta (b^\beta_i + u^\beta_i)(b^\beta_j + u^\beta_j) \tag{28}$$

Unlike in the plain network, setting $b^\beta_i = b^\beta_j$ for all $\beta$, or setting $b^\alpha_i = 0\ \forall \alpha$, does not lead to a degeneracy here, thanks to the orthogonal skip vectors $u^\beta$. However, this just shifts the locations of the singularities. In particular, the residual network suffers from the same overlap and elimination singularities as the plain network when we make the following change of variables: $b^\beta \to b^\beta - u^\beta$ and $a^\beta \to a^\beta - v^\beta$.
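A small numerical check of this block structure (sizes illustrative; the matrix below is the second-derivative block of Equations 24-25 for a fixed mode $\alpha$, which is the same for every $\alpha$):

```python
import numpy as np

# H[i, j] = sum_beta b_i^beta b_j^beta  (Equations 24-25)  =>  H = B^T B,
# where B[beta, i] = b_i^beta. Overlapping or eliminated units create
# identical or zero columns in H: the overlap/elimination singularities.
rng = np.random.default_rng(4)
B = rng.normal(size=(3, 3))                 # 3 modes, 3 hidden units

print(np.linalg.matrix_rank(B.T @ B))       # 3: generically full rank

B[:, 1] = B[:, 0]                           # overlap: b_1^beta = b_0^beta for all beta
print(np.linalg.matrix_rank(B.T @ B))       # 2: columns 0 and 1 coincide

B[:, 2] = 0.0                               # elimination: b_2^beta = 0 for all beta
print(np.linalg.matrix_rank(B.T @ B))       # 1: column 2 is identically zero
```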
Networks with more than three layers: As shown in Saxe et al. (2013), in linear networks with more than a single hidden layer, assuming that there are orthogonal matrices $R_l$ and $R_{l+1}$ for each layer $l$ that diagonalize the initial weight matrix of the corresponding layer (i.e. $R_{l+1}^\top W_l(0) R_l = D_l$ is a diagonal matrix), the dynamics of different singular modes decouple from each other, and each mode $\alpha$ evolves according to gradient descent dynamics in an energy landscape described by (Saxe et al., 2013):

$$E_{\mathrm{plain}} = \frac{1}{2\tau} \Big( s^\alpha - \prod_{l=1}^{N_l - 1} a^\alpha_l \Big)^2 \tag{29}$$

where $a^\alpha_l$ can be interpreted as the strength of mode $\alpha$ at layer $l$ and $N_l$ is the total number of layers. In residual networks, assuming further that the orthogonal matrices $R_l$ satisfy $R_{l+1}^\top R_l = I$, the energy function changes to:

$$E_{\mathrm{res}} = \frac{1}{2\tau} \Big( s^\alpha - \prod_{l=1}^{N_l - 1} (a^\alpha_l + 1) \Big)^2 \tag{30}$$

and in hyper-residual networks, it is:

$$E_{\mathrm{hyperres}} = \frac{1}{2\tau} \Big( s^\alpha - \prod_{l=1}^{N_l - 1} (a^\alpha_l + l) \Big)^2 \tag{31}$$

Figure S5: (a) Phase portraits for three-layer plain, residual and hyper-residual linear networks. (b) Evolution of $u = \prod_{l=1}^{N_l - 1} a_l$ for 10-layer plain, residual and hyper-residual linear networks. In the plain network, $u$ did not converge to its asymptotic value $s$ within the simulated time window.

Figure S5a illustrates the effect of skip connections on the phase portrait of a three-layer network. The two axes, $a$ and $b$, represent the mode strength variables for $l = 1$ and $l = 2$, respectively: i.e. $a \equiv a^\alpha_1$ and $b \equiv a^\alpha_2$. The plain network has a saddle point at $(0, 0)$ (Figure S5a; left). The dynamics around this point is slow, hence starting from small random values causes initially very slow learning. The network funnels the dynamics through the unstable manifold $a = b$ to the stable hyperbolic solution corresponding to $ab = s$. Identity skip connections between adjacent layers in the residual architecture move the saddle point to $(-1, -1)$ (Figure S5a; middle). This speeds up the dynamics around the origin, but not as much as in the hyper-residual architecture, where the saddle point is moved further away from the origin and the main diagonal, to $(-1, -2)$ (Figure S5a; right). We found these effects to be more pronounced in deeper networks. Figure S5b shows the dynamics of learning in 10-layer linear networks, demonstrating a clear advantage for the residual architecture over the plain architecture and for the hyper-residual architecture over the residual architecture.

Singularity of the Hessian in reduced linear multilayer networks with skip connections: The derivative of the cost function of a linear multilayer residual network (Equation 30) with respect to the mode strength variable at layer $i$, $a_i$, is given by (suppressing the mode index $\alpha$ and taking $\tau = 1$):

$$\frac{\partial E}{\partial a_i} = -(s - u) \prod_{l \neq i} (a_l + 1) \tag{32}$$

and the second derivatives are:

$$\frac{\partial^2 E}{\partial a_i^2} = \Big[ \prod_{l \neq i} (a_l + 1) \Big]^2 \tag{33}$$

$$\frac{\partial^2 E}{\partial a_i\, \partial a_k} = \Big[ 2 \prod_l (a_l + 1) - s \Big] \prod_{l \neq i,k} (a_l + 1) \tag{34}$$

It is easy to check that the columns (or rows) corresponding to $a_i$ and $a_j$ in the Hessian become identical when $a_i = a_j$, making the Hessian degenerate. The hyper-residual architecture does not eliminate these degeneracies, but shifts them to different locations in the parameter space by adding distinct constants to $a_i$ and $a_j$ (and to all other variables).

SUPPLEMENTARY NOTE 6: DESIGNING SKIP CONNECTIVITY MATRICES WITH VARYING DEGREES OF ORTHOGONALITY AND WITH EIGENVALUES ON THE UNIT CIRCLE

We generated the covariance matrix of the eigenvectors by $S = Q \Lambda Q^\top$, where $Q$ is a random orthogonal matrix and $\Lambda$ is the diagonal matrix of eigenvalues, $\Lambda_{ii} = \exp(-\tau(i-1))$, as explained in the main text. We find the correlation matrix through $R = D^{-1/2} S D^{-1/2}$, where $D$ is the diagonal matrix of the variances, i.e. $D_{ii} = S_{ii}$. We take the Cholesky decomposition of the correlation matrix, $R = T T^\top$. Then the designed skip connectivity matrix is given by $\Sigma = T U L U^{-1} T^{-1}$, where $L$ and $U$ are the matrices of eigenvalues and eigenvectors of another randomly generated orthogonal matrix $O$, i.e. $O = U L U^\top$. With this construction, $\Sigma$ has the same eigenvalue spectrum as $O$; however, the eigenvectors of $\Sigma$ are linear combinations of the eigenvectors of $O$ such that their correlation matrix is given by $R$.
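A sketch of this recipe (dimensions and $\tau$ are illustrative; since $U L U^{-1} = O$, the product $T U L U^{-1} T^{-1}$ can be formed as $T O T^{-1}$ without an explicit complex eigendecomposition of $O$):

```python
import numpy as np
from scipy.stats import ortho_group

n, tau = 8, 0.5
Q = ortho_group.rvs(n, random_state=5)           # random orthogonal Q
Lam = np.diag(np.exp(-tau * np.arange(n)))       # Lambda_ii = exp(-tau (i-1))
S = Q @ Lam @ Q.T                                # covariance of eigenvectors

d = np.sqrt(np.diag(S))
R = S / np.outer(d, d)                           # correlation matrix R
T = np.linalg.cholesky(R)                        # R = T T^T

O = ortho_group.rvs(n, random_state=6)           # another orthogonal matrix
Sigma = T @ O @ np.linalg.inv(T)                 # = T U L U^{-1} T^{-1}

# Sigma inherits O's eigenvalue spectrum, which lies on the unit circle:
print(np.round(np.abs(np.linalg.eigvals(Sigma)), 6))   # all ~1.0
```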
Thus, the eigenvectors of $\Sigma$ are not orthogonal to each other unless $\tau = 0$. Larger values of $\tau$ yield more correlated, hence less orthogonal, eigenvectors.

SUPPLEMENTARY NOTE 7: VALIDATION OF THE RESULTS WITH SHALLOW NETWORKS

To further demonstrate the generality of our results and the independence of the problem of singularities from the vanishing gradients problem in optimization, we performed an experiment with shallow plain and residual networks with only two hidden layers and 16 units in each hidden layer. Because we do not allow skip connections from the input layer, a network with two hidden layers is the shallowest network we can use to compare the plain and residual architectures. Figure S6 shows the results of this experiment. The residual network performs slightly better both on the training and test data (Figure S6a-b); it is less degenerate (Figure S6d) and has more negative eigenvalues (Figure S6c); it has larger gradients (Figure S6e; note that the gradients in the plain network do not vanish even at the beginning of training); and its hidden units have less overlap than those of the plain network (Figure S6f). Moreover, the gradient norms closely track the mean overlap between the hidden units and the degeneracy of the network (Figure S6d-f) throughout training. These results suggest that the degeneracies caused by the overlaps of hidden units slow down learning, consistent with our symmetry-breaking hypothesis and with the results from larger networks.

Figure S6: Main results hold for two-layer shallow nets trained on CIFAR-100. (a-b) Training and test accuracies. Residual nets perform slightly better. (c-d) Fraction of negative and degenerate eigenvalues. Residual nets are less degenerate. (e) Mean gradient norms with respect to the two layer activations throughout training. (f) Mean overlap for the second hidden layer units, measured as the mean correlation between the incoming weights of the hidden units. Results in (a-e) are averages over 16 independent runs; error bars are small, hence not shown for clarity. In (f), error bars represent standard errors.

Figure S7: Replication of the results reported in Figure 4 for (a) the Street View House Numbers (SVHN) dataset and (b) the STL-10 dataset.
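For reference, a sketch of the overlap measure quoted in the captions above (the mean pairwise correlation between the incoming weight vectors of a layer's hidden units; `W` is a stand-in weight matrix whose rows are the units' incoming weight vectors):

```python
import numpy as np

rng = np.random.default_rng(6)
W = rng.normal(size=(16, 128))                       # 16 hidden units

Wc = W - W.mean(axis=1, keepdims=True)               # center each vector
Wn = Wc / np.linalg.norm(Wc, axis=1, keepdims=True)  # normalize each vector
C = Wn @ Wn.T                                        # pairwise correlations
iu = np.triu_indices_from(C, k=1)                    # drop self-correlations
print("mean overlap:", C[iu].mean())                 # near 0 for random W
```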
1. What are the main contributions of the paper regarding skip connections in deep networks?
2. What are the perceived difficulties in training deep networks that the paper addresses?
3. How do skip connections help alleviate these difficulties, and what are the experimental results supporting this claim?
4. How does the reviewer assess the significance and timeliness of the paper's topic?
5. Are there any concerns or suggestions for improvement regarding the presentation of the paper's arguments and experiments?
Review
The paper examines the use of skip connections (including residual layers) in deep networks as a way of alleviating two perceived difficulties in training: 1) when a neuron does not contain any information, and 2) when two neurons in a layer compute the same function. Both of these cases lead to singularities in the Hessian matrix, and this work includes a number of experiments showing the effect of skip connections on the Hessian during training. This is a significant and timely topic. While I may not be the best one to judge the originality of this work, I appreciated how the authors presented clear and concise arguments with experiments to back up their claims.
3 DISCUSSION

In this paper, we proposed a novel explanation for the benefits of skip connections in terms of the elimination of singularities. Our results suggest that the elimination of singularities contributes at least partly to the success of skip connections. However, we emphasize that singularity elimination is not the only factor explaining the benefits of skip connections: even in completely non-degenerate models, independent factors such as the behavior of gradient norms affect training performance. Indeed, we presented evidence suggesting that skip connections are also quite effective at dealing with the problem of vanishing gradients, and not every form of singularity elimination can be expected to be equally good at dealing with such additional problems that beset the training of deep networks.

Alternative explanations: Several of our experiments rule out vanishing gradients as the sole explanation for training difficulties in deep networks and strongly suggest an independent role for the singularities arising from the non-identifiability of the model. (i) In Figure 4, all nets have the exact same plain architecture and similarly vanishing gradients at the beginning of training, yet they have diverging performances correlated with measures of distance from singular manifolds. (ii) Vanishing gradients cannot explain the difference between identity skips and dense orthogonal skips in Figure 7, because both eliminate vanishing gradients, yet dense orthogonal skips perform better. (iii) In Figure 8, spectrum-equalized non-orthogonal skips often have larger gradient norms, yet worse performance than orthogonal skips. (iv) Vanishing gradients cannot even explain the BiasReg results in Figure 5: the BiasReg net and the plain net have almost identical (and vanishing) gradients early in training (Figure 6a), yet the former performs better, as predicted by the symmetry-breaking hypothesis. (v) Similar results hold for two-layer shallow networks, where the problem of vanishing gradients does not arise (Supplementary Note 7). In particular, shallow residual nets are less degenerate and have better accuracy than shallow plain nets; moreover, gradient norms and accuracy are strongly correlated with distance from the overlap manifolds in these shallow nets.

Our malicious initialization experiment with residual nets (Figure 5) suggests that the benefits of skip connections cannot be explained solely in terms of well-conditioning or improved initialization either. This result reveals a fundamental weakness in purely linear explanations of the benefits of skip connections (Hardt & Ma, 2016; Li et al., 2016): unlike in nonlinear nets, improved initialization entirely explains the benefits of skip connections in linear nets (Supplementary Note 5).

A recent paper (Balduzzi et al., 2017) suggested that the loss of spatial structure in the covariance of the gradients, a phenomenon called "shattered gradients", could be partly responsible for training difficulties in deep nonlinear networks. They argued that skip connections alleviate this problem by essentially making the model "more linear". The shattered gradients problem is distinct from both the vanishing/exploding gradients problem and the degeneracy problems considered in this paper, since shattered gradients arise only in sufficiently non-linear deep networks (linear networks do not shatter gradients), whereas vanishing/exploding gradients, as well as the degeneracies considered here, arise in linear networks too. The relative contribution of each of these distinct problems to training difficulties in deep networks remains to be determined.

Symmetry-breaking in other architectures: We only reported results from experiments with fully-connected networks, but we note that limited receptive field sizes and weight sharing between units in a single feature channel in convolutional neural networks also reduce the permutation symmetry in a given layer.
The symmetry is not entirely eliminated: although individual units no longer have permutation symmetry in this case, feature channels do, but channels are far fewer in number than individual units. Similarly, a recent extension of the residual architecture called ResNeXt (Xie et al., 2016) uses parallel, segregated processing streams inside the "bottleneck" blocks, which can again be seen as a way of reducing the permutation symmetry inside the block.

Our method of singularity reduction through bias regularization (BiasReg; Figure 5) can be thought of as indirectly putting a prior over the unit activities. More complicated joint priors over hidden unit responses that favor decorrelated (Cogswell et al., 2015) or clustered (Liao et al., 2016) responses have been proposed before. Although the primary motivation for these regularization schemes was to improve the generalizability or interpretability of the learned representations, they can potentially be understood from a singularity elimination perspective as well: for example, a prior that favors decorrelated responses can facilitate the breaking of permutation symmetries and linear dependencies between hidden units.

Our results lead to an apparent paradox: over-parametrization and redundancy in large neural network models have been argued to make optimization easier, yet our results seem to suggest the opposite. There is no contradiction here; any apparent contradiction is due to ambiguities in the meanings of the terms "over-parametrization" and "redundancy". The intuition behind the benefits of over-parametrization for optimization is an increase in the effective capacity of the model: over-parametrization in this sense leads to a large number of approximately equally good ways of fitting the training data. The degeneracies considered in this paper, on the other hand, reduce the effective capacity of the model, leading to optimization difficulties.

Our results suggest that it could be useful for neural network researchers to pay closer attention to the degeneracies inherent in their models. For better optimization, as a general design principle, we recommend reducing such degeneracies in a model as much as possible. Once the training performance starts to saturate, however, degeneracies may help the model achieve better generalization performance. Exploring this trade-off between the harmful and beneficial effects of degeneracies is an interesting direction for future work.

Acknowledgments: AEO and XP were supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.

SUPPLEMENTARY MATERIALS

SUPPLEMENTARY NOTE 1: SINGULARITY OF THE HESSIAN IN NON-LINEAR MULTILAYER NETWORKS

Because the cost function can be expressed as a sum over training examples, it is enough to consider the cost for a single example:

\[ E = \frac{1}{2}\,\lVert y - x_L \rVert^2 \equiv \frac{1}{2}\, e^\top e \]

where the x_l are defined recursively as x_l = f(W_{l−1} x_{l−1}) for l = 1, ..., L. We denote the inputs to the units at layer l by the vector h_l = W_{l−1} x_{l−1}. We ignore the biases for simplicity.
The derivative of the cost function with respect to a single weight W_{l,ij} between layers l and l + 1 is given by:

\[ \frac{\partial E}{\partial W_{l,ij}} = -\begin{bmatrix} 0 & \cdots & f'(h_{l+1,i})\, x_{l,j} & \cdots & 0 \end{bmatrix} W_{l+1}^\top \operatorname{diag}(f'_{l+2})\, W_{l+2}^\top \operatorname{diag}(f'_{l+3}) \cdots W_{L-1}^\top \operatorname{diag}(f'_{L})\, e \tag{1} \]

Now, consider a different connection between the same output unit i at layer l + 1 and a different input unit j′ at layer l. The crucial observation is that if the units j and j′ have the same set of incoming weights, then the derivative of the cost function with respect to W_{l,ij} becomes identical to its derivative with respect to W_{l,ij′}: ∂E/∂W_{l,ij} = ∂E/∂W_{l,ij′}. This is because in this condition x_{l,j′} = x_{l,j} for all possible inputs, and all the remaining terms in Equation 1 are independent of the input index j. Thus, the columns (or rows) corresponding to the connections W_{l,ij} and W_{l,ij′} in the Hessian become identical, making the Hessian degenerate. This is a re-statement of the simple observation that when the units j and j′ have the same set of incoming weights, the parameters W_{l,ij} and W_{l,ij′} become non-identifiable (only their sum is identifiable). Thus, this corresponds to an overlap singularity.

A similar argument shows that when a set of units at layer l, say units indexed j, j′, j′′, become linearly dependent, the columns of the Hessian corresponding to the weights W_{l,ij}, W_{l,ij′} and W_{l,ij′′} become linearly dependent as well, thereby making the Hessian singular. Again, this is just a re-statement of the fact that these weights are no longer individually identifiable in this case (only a linear combination of them is identifiable). This corresponds to a linear dependence singularity. In non-linear networks, except in certain degenerate cases where the units saturate together, they may never be exactly linearly dependent, but they can be approximately linearly dependent, which makes the Hessian close to singular.

Moreover, it is easy to see from Equation 1 that when the presynaptic unit x_{l,j} is always zero, i.e. when that unit is effectively killed, the column (or row) of the Hessian corresponding to the parameter W_{l,ij} becomes the zero vector for any i, and thus the Hessian becomes singular. This is a re-statement of the simple observation that when the unit x_{l,j} is always zero, its outgoing connections W_{l,ij} are no longer identifiable. This corresponds to an elimination singularity.

In the residual case, the only thing that changes in Equation 1 is that the factors W_k^\top \operatorname{diag}(f'_{k+1}) on the right-hand side become W_k^\top \operatorname{diag}(f'_{k+1}) + I, where I is an identity matrix of the appropriate size. The overlap singularities are eliminated, because x_{l,j′} and x_{l,j} cannot be the same for all possible inputs in the residual case (even when the adjustable incoming weights of these units are identical). Similarly, elimination singularities are also eliminated, because x_{l,j} cannot be identically zero for all possible inputs (even when the adjustable incoming weights of this unit are all zero), assuming that the corresponding unit at the previous layer x_{l−1,j} is not always zero, which, in turn, is guaranteed by an identity skip connection if x_{l−2,j} is not always zero, and so on, all the way down to the first hidden layer. Any linear dependence between x_{l,j}, x_{l,j′} and x_{l,j′′} is also eliminated by adding linearly independent inputs to them, assuming again that the corresponding units in the previous layer are linearly independent.
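The flat direction behind the overlap singularity is easy to exhibit numerically. The following sketch is our illustration, not code from the paper: it duplicates the incoming weights of two hidden units in a small network and verifies that the loss is exactly constant along the direction that trades off their outgoing weights, i.e. the non-identifiable combination described above.

```python
import torch

torch.manual_seed(0)
n_in, n_hid, n_out = 4, 5, 3
W0 = torch.randn(n_hid, n_in)
W0[1] = W0[0]                          # overlap: units 0 and 1 share incoming weights
W1 = torch.randn(n_out, n_hid)
X, Y = torch.randn(32, n_in), torch.randn(32, n_out)

def loss(w0, w1):
    # Squared-error loss of the two-layer net y_hat = w1 f(w0 x).
    return 0.5 * ((Y - torch.tanh(X @ w0.T) @ w1.T) ** 2).sum()

base = loss(W0, W1)
# Trade off the outgoing weights of the two overlapping units: only their
# sum is identifiable, so the loss is exactly flat along this direction.
d = torch.randn(n_out)
for t in (0.1, 1.0, 10.0):
    W1p = W1.clone()
    W1p[:, 0] += t * d
    W1p[:, 1] -= t * d
    print(t, (loss(W0, W1p) - base).item())  # ~0 up to float precision
```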
SUPPLEMENTARY NOTE 2: SIMULATION DETAILS

In Figure 3, for the skip connections between non-adjacent layers in the hyper-residual networks, i.e. Q_k, we used matrices of the type labeled "32" in Figure 7, i.e. matrices consisting of four copies of a set of 32 orthonormal vectors. We found that these matrices performed slightly better than orthogonal matrices.

We augmented the training data in both CIFAR-10 and CIFAR-100 by adding reflected versions of each training image, i.e. their mirror images. This yields a total of 100,000 training images for both datasets. The test data were not augmented, consisting of 10,000 images in both cases. We used the standard splits of the data into training and test sets. For the BiasReg network of Figures 5-6, random hyperparameter search returned the following values for the target bias distributions: µ = 0.51, σ = 0.96 for CIFAR-10 and µ = 0.91, σ = 0.03 for CIFAR-100.

The toy model shown in Figure 2b-c is a simulation of Equations 3.7 and 3.9 in Wei et al. (2008). The toy model shown in Figure 2e is a simulation of the learning dynamics in a network with 3 input, 3 hidden and 3 output units, parametrized in terms of the norms and unit-vector directions of J_a − J_b − J_c, J_a + J_b − J_c, J_c, and the output weights. A teacher model with random parameters is first chosen and a large set of "training data" is generated from the teacher model. Then the gradient flow fields with respect to the two parameters m = ||J_a + J_b − J_c|| and ||J_c|| are plotted under the assumption that the remaining parameters are already at their optima (a similar assumption was made in the analysis of Wei et al. (2008)). We empirically confirmed that the flow field is generic.

SUPPLEMENTARY NOTE 3: ESTIMATING THE EIGENVALUE SPECTRAL DENSITY OF THE HESSIAN IN DEEP NETWORKS

We use Skilling's moment matching method (Skilling, 1989) to estimate the eigenvalue spectra of the Hessian. We first estimate the first few non-central moments of the density by computing m_k = (1/N) r^\top H^k r, where r is a random vector drawn from the standard multivariate Gaussian with zero mean and identity covariance, H is the Hessian, and N is the dimensionality of the parameter space. Because the standard multivariate Gaussian is rotationally symmetric and the Hessian is a symmetric matrix, it is easy to show that m_k gives an unbiased estimate of the k-th moment of the spectral density:

\[ m_k = \frac{1}{N}\, r^\top H^k r = \frac{1}{N} \sum_{i=1}^{N} \tilde{r}_i^2\, \lambda_i^k \;\rightarrow\; \int p(\lambda)\, \lambda^k\, d\lambda \quad \text{as } N \rightarrow \infty \tag{2} \]

where the λ_i are the eigenvalues of the Hessian and p(λ) is the spectral density of the Hessian as N → ∞. In Equation 2, we make use of the fact that the \tilde{r}_i^2 are random variables with expected value 1. Despite appearances, the products in m_k do not require computing the Hessian explicitly; they can instead be computed efficiently as follows:

\[ v_0 = r, \qquad v_k = H v_{k-1}, \quad k = 1, \ldots, K \tag{3} \]

where the Hessian-times-vector computation can be performed without computing the Hessian explicitly through Pearlmutter's R-operator (Pearlmutter, 1994). In terms of the vectors v_k, the estimates of the moments are given by:

\[ m_{2k} = \frac{1}{N}\, v_k^\top v_k, \qquad m_{2k+1} = \frac{1}{N}\, v_k^\top v_{k+1} \tag{4} \]

For the results shown in Figure 3, we use 20-layer fully-connected feedforward networks and the number of parameters is N = 709,652. For the remaining simulations, we use 30-layer fully-connected networks and the number of parameters is N = 874,772. We estimate the first four moments of the Hessian and fit the estimated moments with a parametric density model.
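With automatic differentiation, Equations 2-4 take only a few lines. The following sketch is a PyTorch illustration under our own conventions, not the authors' code: it estimates the moments from a single Gaussian probe using Hessian-vector products computed by double backpropagation, which is equivalent to Pearlmutter's R-operator; in practice one would average over several probes.

```python
import torch

def spectral_moments(loss, params, K=4):
    """Estimate moments m_k = (1/N) r^T H^k r of the Hessian spectral
    density using only Hessian-vector products; H is never formed."""
    g = torch.autograd.grad(loss, params, create_graph=True)
    flat_g = torch.cat([x.reshape(-1) for x in g])
    N = flat_g.numel()

    def hvp(v):  # H v via double backprop (Pearlmutter's trick)
        hv = torch.autograd.grad(flat_g @ v, params, retain_graph=True)
        return torch.cat([x.reshape(-1) for x in hv]).detach()

    vs = [torch.randn(N)]             # v_0 = r, a Gaussian probe
    for _ in range((K + 1) // 2):
        vs.append(hvp(vs[-1]))        # v_k = H v_{k-1}
    m = {}
    for k in range(len(vs)):
        if 2 * k <= K:
            m[2 * k] = (vs[k] @ vs[k]).item() / N
        if 2 * k + 1 <= K and k + 1 < len(vs):
            m[2 * k + 1] = (vs[k] @ vs[k + 1]).item() / N
    return m  # e.g. spectral_moments(loss, list(model.parameters()))
```

The resulting m_1, ..., m_4 then serve as the targets for the density fit described next.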
The parametric density model we use is a mixture of a narrow Gaussian distribution (to capture the bulk of the density) and a skew-normal distribution (to capture the tails):

\[ q(\lambda) = w\, \mathcal{SN}(\lambda;\, \xi, \omega, \alpha) + (1 - w)\, \mathcal{N}(\lambda;\, 0, \sigma = 0.001) \tag{5} \]

with 4 parameters in total: the mixture weight w, and the location ξ, scale ω and shape α parameters of the skew-normal distribution. We fix the parameters of the Gaussian component to µ = 0 and σ = 0.001. Since the densities are heavy-tailed, the moments are dominated by the tail behavior of the model, hence the fits are not very sensitive to the precise choice of the parameters of the Gaussian component. The moments of our model can be computed in closed form. We had difficulty fitting the parameters of the model with gradient-based methods, hence we used a simple grid search instead. The ranges searched over for each parameter were as follows. w: logarithmically spaced between 10^{−9} and 10^{−3}; α: linearly spaced between −50 and 50; ξ: linearly spaced between −10 and 10; ω: logarithmically spaced between 10^{−1} and 10^{3}. 100 values were evaluated along each parameter dimension, for a total of 10^8 parameter configurations. The estimated moments ranged over several orders of magnitude. To make sure that the optimization gave roughly equal weight to fitting each moment, we minimized a normalized objective function:

\[ L(w, \alpha, \xi, \omega) = \sum_{k=1}^{4} \frac{\lvert \hat{m}_k(w, \alpha, \xi, \omega) - m_k \rvert}{\lvert m_k \rvert} \tag{6} \]

where \hat{m}_k(w, α, ξ, ω) is the model-derived estimate of the k-th moment.

SUPPLEMENTARY NOTE 4: VALIDATION OF THE RESULTS WITH SMALLER NETWORKS

Here, we validate our main results for smaller, numerically tractable networks. The networks in this section are 10-layer fully-connected feedforward networks trained on CIFAR-100. The input dimensionality is reduced from 3072 to 128 through PCA. In what follows, we calculate the fraction of degenerate eigenvalues by counting the number of eigenvalues inside a small window of size 0.2 around 0, and the fraction of negative eigenvalues by counting the number of eigenvalues to the left of this window.

We first compare residual networks with plain networks (Figure S1). The networks here have 16 hidden units in each layer, yielding a total of 4852 parameters. This is small enough that we can calculate all eigenvalues of the Hessian numerically. We observe that residual networks have better training and test performance (Figure S1a-b); they are less degenerate (Figure S1d) and have more negative eigenvalues than plain networks (Figure S1c). These results are consistent with the results reported in Figure 3 for deeper and larger networks.

Next, we validate the results reported in Figure 4 by running 400 independent plain networks and comparing the best-performing 40 with the worst-performing 40 among them (Figure S2). Again, the networks here have 16 hidden units in each layer with a total of 4852 parameters. We observe that the best networks are less degenerate (Figure S2d) and have more negative eigenvalues than the worst networks (Figure S2c). Moreover, the hidden units of the best networks have less overlap (Figure S2f) and, at least initially during training, have slightly larger weight norms than the worst-performing networks (Figure S2e). Again, these results are all consistent with those reported in Figure 4 for deeper and larger networks.

Finally, using numerically tractable plain networks, we also tested whether we could reliably estimate the fractions of degenerate and negative eigenvalues with our mixture model.
Just as we do for the larger networks, we first fit the mixture model to the first four moments of the spectral density estimated with the method of Skilling (1989). We then estimate the fraction of degenerate and negative eigenvalues from the fitted mixture model and compare these estimates with those obtained from the numerically calculated eigenvalues. Because the larger networks were found to be highly degenerate, we restrict the analysis here to conditions where the fraction of degenerate eigenvalues was at least 99.8%. We used 10-layer plain networks with 32 hidden units in each layer (with a total of 14,292 parameters) for this analysis. We observe that, at least for these small networks, the mixture model usually underestimates the fraction of degenerate eigenvalues and overestimates the fraction of negative eigenvalues. However, there is a highly significant positive correlation between the actual and estimated fractions (Figure S3).

[Figure S1: Validation of the results with 10-layer plain and residual networks trained on CIFAR-100. (a-b) Training and test accuracy. (c-d) Fraction of negative and degenerate eigenvalues throughout training. The results are averages over 4 independent runs ±1 standard errors.]

[Figure S2: Validation of the results with 400 10-layer plain networks with 16 hidden units in each layer (4852 parameters total) trained on CIFAR-100. We compare the best 40 networks with the worst 40 networks, as in Figure 4. (a-b) Training and test accuracy. (c-d) Fraction of negative and degenerate eigenvalues throughout training. Better performing networks are less degenerate and have more negative eigenvalues. (e) Mean norms of the incoming weight vectors of the hidden units. (f) Mean overlaps of the hidden units as measured by the mean correlation between their incoming weight vectors. The results are averages over the 40 best or worst runs ±1 standard errors.]

[Figure S3: For 10-layer plain networks with 32 hidden units in each layer (14,292 parameters total), estimates obtained from the mixture model slightly underestimate the fraction of degenerate eigenvalues and overestimate the fraction of negative eigenvalues; however, there is a highly significant linear relationship between the actual values and the estimates. (a) Actual vs. estimated fraction of degenerate eigenvalues. (b) Actual vs. estimated fraction of negative eigenvalues for the same networks. Dashed line shows the identity line. Dots and error bars represent means and standard errors of estimates in different bins; the solid lines and the shaded regions represent the linear regression fits and the 95% confidence intervals.]
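The counting convention above is straightforward to implement. A minimal sketch (ours, with the window size taken from the text):

```python
import numpy as np

def eig_fractions(hessian, window=0.2):
    """Fraction of degenerate eigenvalues (inside a window of size
    `window` around 0) and negative eigenvalues (to the left of it)."""
    eigs = np.linalg.eigvalsh(hessian)   # exact spectrum of a symmetric H
    half = window / 2.0
    degenerate = float(np.mean(np.abs(eigs) < half))
    negative = float(np.mean(eigs < -half))
    return degenerate, negative
```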
SUPPLEMENTARY NOTE 5: DYNAMICS OF LEARNING IN LINEAR NETWORKS WITH SKIP CONNECTIONS

To get a better analytic understanding of the effects of skip connections on the learning dynamics, we turn to linear networks. In an L-layer linear plain network, the input-output mapping is given by (again ignoring the biases for simplicity):

\[ x_L = W_{L-1} W_{L-2} \cdots W_1\, x_1 \tag{7} \]

where x_1 and x_L are the input and output vectors, respectively. In linear residual networks with identity skip connections between adjacent layers, the input-output mapping becomes:

\[ x_L = (W_{L-1} + I)(W_{L-2} + I) \cdots (W_1 + I)\, x_1 \tag{8} \]

Finally, in hyper-residual linear networks, where all skip connection matrices are assumed to be the identity, the input-output mapping is given by:

\[ x_L = \big(W_{L-1} + (L-1)I\big)\big(W_{L-2} + (L-2)I\big) \cdots \big(W_1 + I\big)\, x_1 \tag{9} \]

In the derivations that follow, we do not have to assume that the connectivity matrices are square. If they are rectangular, the identity matrix I should be interpreted as a rectangular identity matrix of the appropriate size; this corresponds to zero-padding the layers when they are not the same size, as is usually done in practice.

Three-layer networks: The dynamics of learning in plain linear networks with no skip connections was analyzed in Saxe et al. (2013). For a three-layer network (L = 3), the learning dynamics can be expressed by the following differential equations (Saxe et al., 2013):

\[ \tau \frac{d}{dt} a^\alpha = (s^\alpha - a^\alpha \cdot b^\alpha)\, b^\alpha - \sum_{\gamma \neq \alpha} (a^\alpha \cdot b^\gamma)\, b^\gamma \tag{10} \]

\[ \tau \frac{d}{dt} b^\alpha = (s^\alpha - a^\alpha \cdot b^\alpha)\, a^\alpha - \sum_{\gamma \neq \alpha} (a^\gamma \cdot b^\alpha)\, a^\gamma \tag{11} \]

Here a^α and b^α are n-dimensional column vectors (where n is the number of hidden units) connecting the hidden layer to the α-th input and output modes, respectively, of the input-output correlation matrix, and s^α is the corresponding singular value (see Saxe et al. (2013) for further details). The first term on the right-hand side of Equations 10-11 facilitates cooperation between a^α and b^α corresponding to the same input-output mode α, while the second term encourages competition between vectors corresponding to different modes. In the simplest scenario where there are only two input and output modes, the learning dynamics of Equations 10-11 reduces to:

\[ \frac{d}{dt} a^1 = (s^1 - a^1 \cdot b^1)\, b^1 - (a^1 \cdot b^2)\, b^2 \tag{12} \]
\[ \frac{d}{dt} a^2 = (s^2 - a^2 \cdot b^2)\, b^2 - (a^2 \cdot b^1)\, b^1 \tag{13} \]
\[ \frac{d}{dt} b^1 = (s^1 - a^1 \cdot b^1)\, a^1 - (a^1 \cdot b^2)\, a^2 \tag{14} \]
\[ \frac{d}{dt} b^2 = (s^2 - a^2 \cdot b^2)\, a^2 - (a^2 \cdot b^1)\, a^1 \tag{15} \]

How does adding skip connections between adjacent layers change the learning dynamics? Considering again a three-layer network (L = 3) with only two input and output modes, a straightforward extension of Equations 12-15 shows that the learning dynamics changes as follows:

\[ \frac{d}{dt} a^1 = \big[ s^1 - (a^1 + v^1) \cdot (b^1 + u^1) \big] (b^1 + u^1) - \big[ (a^1 + v^1) \cdot (b^2 + u^2) \big] (b^2 + u^2) \tag{16} \]
\[ \frac{d}{dt} a^2 = \big[ s^2 - (a^2 + v^2) \cdot (b^2 + u^2) \big] (b^2 + u^2) - \big[ (a^2 + v^2) \cdot (b^1 + u^1) \big] (b^1 + u^1) \tag{17} \]
\[ \frac{d}{dt} b^1 = \big[ s^1 - (a^1 + v^1) \cdot (b^1 + u^1) \big] (a^1 + v^1) - \big[ (a^1 + v^1) \cdot (b^2 + u^2) \big] (a^2 + v^2) \tag{18} \]
\[ \frac{d}{dt} b^2 = \big[ s^2 - (a^2 + v^2) \cdot (b^2 + u^2) \big] (a^2 + v^2) - \big[ (a^2 + v^2) \cdot (b^1 + u^1) \big] (a^1 + v^1) \tag{19} \]

where u^1 and u^2 are orthonormal vectors (similarly for v^1 and v^2). The derivation proceeds essentially identically to the corresponding derivation for plain networks in Saxe et al. (2013). The only differences are: (i) we substitute the plain weight matrices W_l with their residual counterparts W_l + I, and (ii) when changing the basis from the canonical basis for the weight matrices W_1, W_2 to the input and output modes of the input-output correlation matrix, U and V, we note that:

\[ W_2 + I = U \bar{W}_2 + U U^\top = U (\bar{W}_2 + U^\top) \tag{20} \]
\[ W_1 + I = \bar{W}_1 V^\top + V V^\top = (\bar{W}_1 + V) V^\top \tag{21} \]

where U and V are orthogonal matrices and the vectors a^α, b^α, u^α and v^α in Equations 16-19 correspond to the α-th columns of the matrices \bar{W}_1, \bar{W}_2^\top, U and V, respectively.
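Equations 12-19 are easy to integrate numerically. The sketch below is our illustration (parameter values taken from the Figure S4 caption): it runs the plain dynamics (u = v = 0) and the residual dynamics side by side; with small random initial weights, the residual version escapes the slow initial phase much faster.

```python
import numpy as np

def run(residual, steps=4000, lr=0.1, seed=0):
    """Euler integration of the two-mode dynamics (Eqs. 12-15 for the
    plain net, Eqs. 16-19 for the residual net) with two hidden units."""
    rng = np.random.default_rng(seed)
    s1, s2 = 3.0, 1.5
    a1, a2, b1, b2 = 1e-4 * rng.standard_normal((4, 2))
    z = np.zeros(2)
    u1 = v1 = np.array([1.0, 1.0]) / np.sqrt(2) if residual else z
    u2 = v2 = np.array([1.0, -1.0]) / np.sqrt(2) if residual else z
    for _ in range(steps):
        A1, A2, B1, B2 = a1 + v1, a2 + v2, b1 + u1, b2 + u2
        a1 = a1 + lr * ((s1 - A1 @ B1) * B1 - (A1 @ B2) * B2)
        a2 = a2 + lr * ((s2 - A2 @ B2) * B2 - (A2 @ B1) * B1)
        b1 = b1 + lr * ((s1 - A1 @ B1) * A1 - (A1 @ B2) * A2)
        b2 = b2 + lr * ((s2 - A2 @ B2) * A2 - (A2 @ B1) * A1)
    # Mode strengths should approach (s1, s2), faster when residual=True.
    return (a1 + v1) @ (b1 + u1), (a2 + v2) @ (b2 + u2)

print(run(residual=False), run(residual=True))
```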
Figure S4 shows, for two different initializations, the evolution of the variables a^1 and a^2 in plain and residual networks with two input-output modes and two hidden units. When the variables are initialized to small random values, the dynamics in the plain network initially evolves slowly (Figure S4a, blue), whereas it is much faster in the residual network (Figure S4a, red). This effect is attributable to two factors. First, the added orthonormal vectors u^α and v^α increase the initial velocity of the variables in the residual network. Second, even when we equalize the initial norms of the vectors a^α and a^α + v^α (and those of the vectors b^α and b^α + u^α) in the plain and the residual networks, respectively, we still observe an advantage for the residual network (Figure S4b). This is because the cooperative and competitive terms are orthogonal to each other in the residual network (or close to orthogonal, depending on the initialization of a^α and b^α; see the right-hand sides of Equations 16-19), whereas in the plain network they are not necessarily orthogonal and hence can cancel each other (Equations 12-15), thus slowing down convergence.

[Figure S4: Evolution of a^1 and a^2 in linear plain and residual networks (the evolution of b^1 and b^2 proceeds similarly). The weights converge faster in residual networks. Simulation details: the number of hidden units is 2 (the two solid lines for each color represent the weights associated with the two hidden nodes, e.g. a^1_1 and a^1_2 on the left); the singular values are s^1 = 3.0, s^2 = 1.5. For the residual network, u^1 = v^1 = [1/√2, 1/√2]^T and u^2 = v^2 = [1/√2, −1/√2]^T. In (a), the weights of both plain and residual networks are initialized to random values drawn from a Gaussian with zero mean and standard deviation 0.0001, and the learning rate was set to 0.1. In (b), the weights of the plain network are initialized as follows: the vectors a^1 and a^2 are initialized to [1/√2, 1/√2]^T and the vectors b^1 and b^2 are initialized to [1/√2, −1/√2]^T; the weights of the residual network are all initialized to zero, thus equalizing the initial norms of the vectors a^α and a^α + v^α (and those of the vectors b^α and b^α + u^α) between the plain and residual networks. The residual network still converges faster than the plain network. In (b), the learning rate was set to 0.01 to make the different convergence rates of the two networks more visible.]

Singularity of the Hessian in linear three-layer networks: The dynamics in Equations 10-11 can be interpreted as gradient descent on the following energy function:

\[ E = \frac{1}{2\tau} \sum_\alpha (s^\alpha - a^\alpha \cdot b^\alpha)^2 + \frac{1}{2\tau} \sum_{\alpha \neq \beta} (a^\alpha \cdot b^\beta)^2 \tag{22} \]

This energy function is invariant to a (simultaneous) permutation of the elements of the vectors a^α and b^α for all α, which causes degenerate manifolds in the landscape.
Specifically, for the permutation symmetry of hidden units, these manifolds are the hyperplanes a^α_i = a^α_j ∀α for each pair of hidden units i, j (similarly, the hyperplanes b^α_i = b^α_j ∀α) that make the model non-identifiable. Formally, these correspond to singularities of the Hessian or the Fisher information matrix. Indeed, we shall quickly check below that when a^α_i = a^α_j ∀α for any pair of hidden units i, j, the Hessian becomes singular (overlap singularities). The Hessian also has additional singularities at the hyperplanes a^α_i = 0 ∀α for any i and at b^α_i = 0 ∀α for any i (elimination singularities).

Starting from the energy function in Equation 22 and taking the derivative with respect to a single input-to-hidden layer weight a^α_i:

\[ \frac{\partial E}{\partial a^\alpha_i} = -(s^\alpha - a^\alpha \cdot b^\alpha)\, b^\alpha_i + \sum_{\beta \neq \alpha} (a^\alpha \cdot b^\beta)\, b^\beta_i \tag{23} \]

and the second derivatives are as follows:

\[ \frac{\partial^2 E}{\partial (a^\alpha_i)^2} = (b^\alpha_i)^2 + \sum_{\beta \neq \alpha} (b^\beta_i)^2 = \sum_\beta (b^\beta_i)^2 \tag{24} \]

\[ \frac{\partial^2 E}{\partial a^\alpha_i\, \partial a^\alpha_j} = b^\alpha_j b^\alpha_i + \sum_{\beta \neq \alpha} b^\beta_j b^\beta_i = \sum_\beta b^\beta_i b^\beta_j \tag{25} \]

Note that the second derivatives are independent of the mode index α, reflecting the fact that the energy function is invariant to a permutation of the mode indices. Furthermore, when b^β_i = b^β_j for all β, the columns in the Hessian corresponding to a^α_i and a^α_j become identical, causing an additional degeneracy that reflects the non-identifiability of a^α_i and a^α_j. A similar derivation establishes that a^β_i = a^β_j for all β also leads to a degeneracy in the Hessian, this time reflecting the non-identifiability of b^α_i and b^α_j. These correspond to the overlap singularities. In addition, it is easy to see from Equations 24-25 that when b^α_i = 0 ∀α, the right-hand sides of both equations become identically zero, reflecting the non-identifiability of a^α_i for all α. A similar derivation shows that when a^α_i = 0 ∀α, the columns of the Hessian corresponding to b^α_i become identically zero for all α, this time reflecting the non-identifiability of b^α_i for all α. These correspond to the elimination singularities.

When we add skip connections between adjacent layers, i.e. in the residual architecture, the energy function changes as follows:

\[ E = \frac{1}{2} \sum_\alpha \big( s^\alpha - (a^\alpha + v^\alpha) \cdot (b^\alpha + u^\alpha) \big)^2 + \frac{1}{2} \sum_{\alpha \neq \beta} \big( (a^\alpha + v^\alpha) \cdot (b^\beta + u^\beta) \big)^2 \tag{26} \]

and straightforward algebra yields the following second derivatives:

\[ \frac{\partial^2 E}{\partial (a^\alpha_i)^2} = \sum_\beta (b^\beta_i + u^\beta_i)^2 \tag{27} \]

\[ \frac{\partial^2 E}{\partial a^\alpha_i\, \partial a^\alpha_j} = \sum_\beta (b^\beta_i + u^\beta_i)(b^\beta_j + u^\beta_j) \tag{28} \]

Unlike in the plain network, setting b^β_i = b^β_j for all β, or setting b^α_i = 0 ∀α, does not lead to a degeneracy here, thanks to the orthogonal skip vectors u^β. However, this merely shifts the locations of the singularities: the residual network suffers from the same overlap and elimination singularities as the plain network under the change of variables b^β → b^β − u^β and a^β → a^β − v^β.

Networks with more than three layers: As shown in Saxe et al. (2013), in linear networks with more than a single hidden layer, assuming that there are orthogonal matrices R_l and R_{l+1} for each layer l that diagonalize the initial weight matrix of the corresponding layer (i.e. R_{l+1}^\top W_l(0) R_l = D_l is a diagonal matrix), the dynamics of different singular modes decouple from each other, and each
mode α evolves according to gradient descent dynamics in an energy landscape described by (Saxe et al., 2013):

\[ E_{\text{plain}} = \frac{1}{2\tau} \Big( s^\alpha - \prod_{l=1}^{N_l - 1} a^\alpha_l \Big)^2 \tag{29} \]

where a^α_l can be interpreted as the strength of mode α at layer l and N_l is the total number of layers. In residual networks, assuming further that the orthogonal matrices R_l satisfy R_{l+1}^\top R_l = I, the energy function changes to:

\[ E_{\text{res}} = \frac{1}{2\tau} \Big( s^\alpha - \prod_{l=1}^{N_l - 1} (a^\alpha_l + 1) \Big)^2 \tag{30} \]

and in hyper-residual networks, it is:

\[ E_{\text{hyperres}} = \frac{1}{2\tau} \Big( s^\alpha - \prod_{l=1}^{N_l - 1} (a^\alpha_l + l) \Big)^2 \tag{31} \]

Figure S5a illustrates the effect of skip connections on the phase portrait of a three-layer network. The two axes, a and b, represent the mode strength variables for l = 1 and l = 2, respectively: i.e. a ≡ a^α_1 and b ≡ a^α_2. The plain network has a saddle point at (0, 0) (Figure S5a, left). The dynamics around this point is slow, hence starting from small random values causes initially very slow learning. The network funnels the dynamics through the unstable manifold a = b to the stable hyperbolic solution corresponding to ab = s. Identity skip connections between adjacent layers in the residual architecture move the saddle point to (−1, −1) (Figure S5a, middle). This speeds up the dynamics around the origin, but not as much as in the hyper-residual architecture, where the saddle point is moved further away from the origin and the main diagonal, to (−1, −2) (Figure S5a, right). We found these effects to be more pronounced in deeper networks. Figure S5b shows the dynamics of learning in 10-layer linear networks, demonstrating a clear advantage for the residual architecture over the plain architecture, and for the hyper-residual architecture over the residual architecture.

[Figure S5: (a) Phase portraits for three-layer plain, residual and hyper-residual linear networks. (b) Evolution of u = ∏_{l=1}^{N_l−1} a_l for 10-layer plain, residual and hyper-residual linear networks. In the plain network, u did not converge to its asymptotic value s within the simulated time window.]

Singularity of the Hessian in reduced linear multilayer networks with skip connections: The derivative of the cost function of a linear multilayer residual network (Equation 30) with respect to the mode strength variable at layer i, a_i, is given by (suppressing the mode index α and taking τ = 1):

\[ \frac{\partial E}{\partial a_i} = -(s - u) \prod_{l \neq i} (a_l + 1) \tag{32} \]

and the second derivatives are:

\[ \frac{\partial^2 E}{\partial a_i^2} = \Big[ \prod_{l \neq i} (a_l + 1) \Big]^2 \tag{33} \]

\[ \frac{\partial^2 E}{\partial a_i\, \partial a_k} = \Big[ 2 \prod_l (a_l + 1) - s \Big] \prod_{l \neq i,k} (a_l + 1) \tag{34} \]

It is easy to check that the columns (or rows) corresponding to a_i and a_j in the Hessian become identical when a_i = a_j, making the Hessian degenerate. The hyper-residual architecture does not eliminate these degeneracies but shifts them to different locations in the parameter space by adding distinct constants to a_i and a_j (and to all other variables).

SUPPLEMENTARY NOTE 6: DESIGNING SKIP CONNECTIVITY MATRICES WITH VARYING DEGREES OF ORTHOGONALITY AND WITH EIGENVALUES ON THE UNIT CIRCLE

We generated the covariance matrix of the eigenvectors by S = QΛQ^T, where Q is a random orthogonal matrix and Λ is the diagonal matrix of eigenvalues, Λ_{ii} = exp(−τ(i−1)), as explained in the main text. We find the correlation matrix through R = D^{−1/2} S D^{−1/2}, where D is the diagonal matrix of the variances, i.e. D_{ii} = S_{ii}. We take the Cholesky decomposition of the correlation matrix, R = TT^T. The designed skip connectivity matrix is then given by Σ = TULU^{−1}T^{−1}, where L and U are the matrices of eigenvalues and eigenvectors of another randomly generated orthogonal matrix O, i.e. O = ULU^T. With this construction, Σ has the same eigenvalue spectrum as O, but the eigenvectors of Σ are linear combinations of the eigenvectors of O such that their correlation matrix is given by R. Thus, the eigenvectors of Σ are not orthogonal to each other unless τ = 0; larger values of τ yield more correlated, hence less orthogonal, eigenvectors.
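This construction is compact enough to state in code. The sketch below (ours) follows the recipe literally; note that T U L U^{−1} T^{−1} simplifies to T O T^{−1}, which keeps the computation real-valued.

```python
import numpy as np

def designed_skip(n, tau, seed=0):
    """Skip connectivity matrix with eigenvalues on the unit circle (those
    of a random orthogonal O) and eigenvector orthogonality set by tau;
    tau = 0 recovers an orthogonal matrix."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal Q
    lam = np.exp(-tau * np.arange(n))                  # Lambda_ii = exp(-tau (i-1))
    S = Q @ np.diag(lam) @ Q.T                         # covariance S = Q Lam Q^T
    d = np.sqrt(np.diag(S))
    R = S / np.outer(d, d)                             # correlation matrix
    T = np.linalg.cholesky(R)                          # R = T T^T
    O, _ = np.linalg.qr(rng.standard_normal((n, n)))   # another orthogonal O
    return T @ O @ np.linalg.inv(T)                    # = T U L U^{-1} T^{-1}
```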
SUPPLEMENTARY NOTE 7: VALIDATION OF THE RESULTS WITH SHALLOW NETWORKS

To further demonstrate the generality of our results and the independence of the singularity problem from the vanishing gradients problem, we performed an experiment with shallow plain and residual networks with only two hidden layers and 16 units in each hidden layer. Because we do not allow skip connections from the input layer, a network with two hidden layers is the shallowest network in which we can compare the plain and residual architectures. Figure S6 shows the results of this experiment. The residual network performs slightly better on both the training and test data (Figure S6a-b); it is less degenerate (Figure S6d) and has more negative eigenvalues (Figure S6c); it has larger gradients (Figure S6e) — note that the gradients in the plain network do not vanish even at the beginning of training — and its hidden units have less overlap than those of the plain network (Figure S6f). Moreover, the gradient norms closely track the mean overlap between the hidden units and the degeneracy of the network (Figure S6d-f) throughout training. These results suggest that the degeneracies caused by the overlaps of hidden units slow down learning, consistent with our symmetry-breaking hypothesis and with the results from larger networks.

[Figure S6: Main results hold for two-layer shallow nets trained on CIFAR-100. (a-b) Training and test accuracies; residual nets perform slightly better. (c-d) Fraction of negative and degenerate eigenvalues; residual nets are less degenerate. (e) Mean gradient norms with respect to the two layer activations throughout training. (f) Mean overlap for the second hidden layer units, measured as the mean correlation between the incoming weights of the hidden units. Results in (a-e) are averages over 16 independent runs; error bars are small, hence not shown for clarity. In (f), error bars represent standard errors.]

[Figure S7: Replication of the results reported in Figure 4 for (a) the Street View House Numbers (SVHN) dataset and (b) the STL-10 dataset.]
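Finally, the overlap measure reported in Figures S2, S6 and S7 is simple to compute from a layer's weight matrix. A minimal sketch (ours):

```python
import numpy as np

def mean_overlap(W):
    """Mean overlap of a layer's hidden units: the average pairwise
    correlation between their incoming weight vectors (the rows of W)."""
    C = np.corrcoef(W)                   # unit-by-unit correlation matrix
    iu = np.triu_indices_from(C, k=1)    # distinct pairs only
    return float(C[iu].mean())
```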
1. What is the main contribution of the paper, and how does it relate to skip connections? 2. What are the strengths of the paper's experimental discussion? 3. What are the weaknesses of the paper, particularly regarding its lack of theoretical justification? 4. How accurate are the estimations of tail probabilities of the eigenvalues, and what impact does this have on the paper's conclusions? 5. Are the results of the paper generalizable across multiple datasets, or do they only hold for the specific datasets used in the experiments?
Review
This paper proposes to explain the benefits of skip connections in terms of eliminating the singularities of the loss function. The discussion is largely based on a sequence of experiments, some of which are interesting and insightful, and the discussion here can be useful for other researchers. My main concern is that the result is purely empirical, with no concrete theoretical justification. What the experiments reveal is an empirical correlation between the eigenvalue index and training accuracy, which can be caused by many factors (and confounders) and does not necessarily establish a causal relation. Therefore, I found much of the discussion questionable. I would love to see more solid theoretical discussion to justify the hypothesis proposed in this paper. Do you have a sense of how accurate the estimation of the tail probabilities of the eigenvalues is? Because the whole paper is based on the approximation of the eigenvalue indexes, it is critical to examine whether the estimation is accurate enough to support the conclusions drawn in the paper. All the conclusions are based on one or two datasets. Could you consider testing the result on more datasets to verify that the results are generalizable?
ICLR
Title Play to Grade: Grading Interactive Coding Games as Classifying Markov Decision Process

Abstract Contemporary coding education often presents students with the task of developing programs that have user interaction and complex dynamic systems, such as mouse-based games. While pedagogically compelling, grading such student programs requires dynamic user inputs, and they are therefore difficult to grade with unit tests. In this paper we formalize the challenge of grading interactive programs as a task of classifying Markov Decision Processes (MDPs). Each student's program fully specifies an MDP in which the agent needs to operate and decide, under reasonable generalization, whether the dynamics and reward model of the input MDP conform to a set of latent MDPs. We demonstrate that by experiencing a handful of latent MDPs millions of times, we can use the agent to sample trajectories from the input MDP and use a classifier to determine membership. Our method drastically reduces the amount of data needed to train an automatic grading system for interactive code assignments and presents a challenge to state-of-the-art reinforcement learning generalization methods. Together with Code.org, we curated a dataset of 700k student submissions, one of the largest datasets of anonymized student submissions to a single assignment. This Code.org assignment had no previous solution for automatically providing correctness feedback to students, and as such this contribution could lead to meaningful improvement in educational experience.

1 INTRODUCTION

The rise of online coding education platforms accelerates the trend of democratizing high-quality computer science education for millions of students each year. Corbett (2001) suggests that providing feedback to students can have an enormous impact on efficiently and effectively helping students learn. Unfortunately, contemporary coding education has a clear limitation: students are able to get automatic feedback only up until they start writing interactive programs. When a student authors a program that requires user interaction, e.g. where a user interacts with the student's program using a mouse or by clicking on buttons, it becomes exceedingly difficult to grade automatically. Even for well-defined challenges, if the user has any creative discretion, or the problem involves any randomness, the task of automatically assessing the work is daunting. Yet creating more open-ended assignments for students can be particularly motivating and engaging, and can also help students practice key skills that will be needed in commercial projects.

Generating feedback on interactive programs from humans is more laborious than it might seem. Though the most common student solution to an assignment may be submitted many thousands of times, even in introductory computer science education the probability distribution of homework submissions follows the very heavy-tailed Zipf distribution – the statistical distribution of natural language. This makes grading exceptionally hard for contemporary AI (Wu et al., 2019) as well as for massive crowd-sourced human efforts (Code.org, 2014). While code as text has proved difficult to grade, actually running student code is a promising path forward (Yan et al., 2019). We formulate the grading-via-playing task as classifying whether an ungraded student program – a new Markov Decision Process (MDP) – belongs to a latent class of correct Markov Decision Processes (representing correct programming solutions to the assignment).
Given a discrete set of environments E = {e_n = (S_n, A, R_n, P_n) : n = 1, 2, 3, ...}, we can partition them into E* and E′. E* is the set of latent MDPs; it includes a handful of reference programs that a teacher has implemented or graded. E′ is the set of environments specified by student-submitted programs. We are building a classifier that determines whether e, a new input decision process, is behaviorally identical to the latent decision processes.

Prior work on providing feedback for code has focused on text-based syntactic analysis and automatically constructing solution spaces (Rivers & Koedinger, 2013; Ihantola et al., 2015). Such feedback orients around providing hints and is unable to determine an interactive program's correctness. Other intelligent tutoring systems have focused on math or other skills that do not require creating interactive programs (Ruan et al., 2019; 2020). Note that in principle one could analyze the raw code and seek to understand whether the code produces a dynamics and reward model that is isomorphic to the dynamics and reward generated by a correct program. However, there are many different ways to express the same correct program, and classifying such text might require a large amount of data. As a first approach, we avoid this by instead deploying a policy and observing the resulting program behavior, thereby generating execution traces of the student's implicitly specified MDP that can be used for classification.

Main contributions in this paper:
• We introduce the reinforcement learning challenge of Play to Grade.
• We propose a baseline algorithm where an agent learns to play a game and uses features such as total reward and anticipated reward to determine correctness.
• Our classifier obtains 93.1% accuracy on the 8359 most frequent programs, which cover 50% of the overall submissions, and achieves 89.0% accuracy on programs that were submitted fewer than 5 times. We gain a 14-19% absolute improvement over grading programs via code text.
• We will release a dataset of over 700k student submissions to support further research.

2 THE PLAY TO GRADE CHALLENGE

We formulate the challenge with constraints that are often found in the real world. Given an interactive coding assignment, the teacher often has a few reference implementations of the assignment, which they use to show students what a correct solution should look like. We also assume that the teacher can prepare a few incorrect implementations that represent their "best guesses" of what a wrong program could look like. To formalize this setting, we consider a set of programs, each of which fully specifies an environment and its dynamics: E = {e_n = (S_n, A, R_n, P_n) : n = 1, 2, 3, ...}. A subset of these environments are reference environments that are accessible during training, which we refer to as E*, and we also have a set of environments specified by student-submitted programs, E′. We can further specify a training set D = {(τ^i, y^i); y ∈ {0, 1}} where τ^i ∼ π(e^{(i)}) and e^{(i)} ∼ E*, and a test set D_test where e^{(i)} ∼ E′. The overall objective of this challenge is:

\[ \min L(\theta) = \min_\theta \min_\pi \; \mathbb{E}_{e \sim E} \Big[ \mathbb{E}_{\tau' \sim \pi(e)} \big[ L\big(p_\theta(\phi(\tau', \pi)),\, y\big) \big] \Big] \tag{1} \]

We want a policy that can generate trajectories τ that help a classifier easily distinguish between an input environment that is correctly implemented and one that is not. We also allow a feature mapping function φ that takes the trajectory and estimations from the agent as input and outputs features for the classifier.
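Operationally, Equation 1 implies a simple grading loop once the policy and classifier are trained. The sketch below is our illustration of that loop, not the paper's code; all of the API names (env.reset, policy.act, classifier.predict_proba, and so on) are hypothetical placeholders.

```python
import numpy as np

def grade(env, policy, featurize, classifier, k=8):
    """Roll the trained agent in the environment specified by a student
    program, featurize each trajectory, and average the classifier's
    probability that the program is correct over k trajectories."""
    probs = []
    for _ in range(k):
        traj, state, done = [], env.reset(), False
        while not done:
            action, value = policy.act(state)        # pi(a|s) and V(s)
            state, reward, done = env.step(action)
            traj.append((value, reward))
        probs.append(classifier.predict_proba(featurize(traj)))
    return float(np.mean(probs))                     # averaged over k samples
```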
We can imagine a naive classifier that labels as correct any environment that is playable (defined as being able to obtain rewards) by our agent. A trivial failure case for this classifier would be that if the agent is badly trained and fails to play successfully in a new environment (returning zero reward), we would not know whether zero reward indicates the wrongness of the program or the failure of our agent.

Generalization challenge To avoid the trivial failure case described above – where the game states observed are a result of our agent's failure to play the game, not of the correctness or wrongness of the program – it is crucial that the agent operates successfully under different correct environments. For any correct environment, with E_+ = {E*_+, E′_+}, the goal is for our agent to obtain high expected reward:

\[ \pi^\star = \arg\max_\pi \; \mathbb{E}_{e \sim E_+} \big[ \mathbb{E}_{\tau \sim \pi(e)} [ R(\tau) ] \big] \tag{2} \]

Additionally, we choose the state space to be the pixel-based screenshot of the game. This assumption imposes the least amount of modification on the thousands of games that teaching platforms have created for students over the years. This decision poses a great challenge to our agent. When students create a game, they might choose to express their creativity in a myriad of ways, including but not limited to using exciting background pictures, changing the shape, color, or moving speed of the game objects, etc. Some of these creative expressions only affect game aesthetics, but others will affect how the game is played (i.e., changing the speed of an object). The agent needs to succeed in these creative settings so that the classifier will not treat creativity as incorrectness.

2.1 BOUNCE GAME SIMULATOR

We pick the coding game Bounce to be the main game for this challenge. Bounce is a block-based educational game created to help students understand conditionals¹. We show actual game scenes in Figure 1 and the coding area in Figure 2a.

¹ https://studio.code.org/s/course3/stage/15/puzzle/10

This choice gives us three advantages. First, the popularity of this game on Code.org gives us an abundance of real student submissions over the years, allowing us to compare algorithms on real data. Second, a block-based program can easily be represented in a structured format, eliminating the need to write a domain-specific parser for the student's program. Last, in order to measure real progress on this challenge, we need gold labels for each submission. A block-based programming environment allows us to specify a list of legal and illegal commands under each condition, which provides perfect gold labels.

The Bounce exercise does not have a bounded solution space, similar to other exercises developed at Code.org. This means that the student can produce arbitrarily long programs, such as repeating the same command multiple times (Figure 3(b)) or changing themes whenever a condition is triggered (Figure 3(a)). These complications can result in very different game dynamics. We created a simulator that faithfully executes commands under each condition and returns a positive reward when the "Score point" block is activated and a negative reward when the "Score opponent point" block is activated. In deployment, such a simulator need not be created, because coding platforms have already created simulators to run and render student programs.

2.2 CODE.ORG BOUNCE DATASET

Code.org is an online computer science education platform that teaches beginner programming. They designed a drag-and-drop interface to teach K-12 students basic programming concepts.
Our dataset is compiled from 453,211 students. Each time a student runs their code, the submission is saved. In total, there are 711,274 submissions, of which 323,516 are unique programs.

Evaluation metric In an unbounded solution space, the distribution of student submissions has a heavy tail, as observed by Wu et al. (2019). We show that the distribution of submissions in our dataset conforms to a Zipf distribution. This suggests that we can partition the dataset into two sections, as seen in Figure 2b. Head + Body: the 8359 most frequently submitted programs, which cover 50.5% of the total submissions (359,266). This set contains 4,084 correct programs (48.9%) and 4,275 incorrect programs (51.1%). Tail: programs that are submitted fewer than 5 times. There are 315,157 unique programs in this set, and 290,953 of them (92.3%) were submitted only once. We sample 250 correct and 250 incorrect programs uniformly from this set for evaluation.

Reference programs Before looking at the student-submitted programs, we attempted to solve the assignment ourselves. Through our attempt, we formed an understanding of where a student might make a mistake and what different variations of correct programs could look like. Our process can easily be replicated by teachers. We came up with 8 correct reference programs and 10 incorrect reference programs. These can be regarded as our training data.

Gold annotations We generate the ground-truth gold annotations by defining legal and illegal commands under each condition. For example, having more than one "launch new ball" under "when run" is incorrect. Placing "score opponent point" under "when run" is also incorrect. Following this logic, we put down a list of legal and illegal commands for each condition. We note that we intentionally chose the Bounce program because it was amenable to generating gold annotations through the API that Code.org exposed to students. While our methods apply broadly, this gold annotation system will not scale to other assignments. The full annotation schema is in Appendix A.5.

3 RELATED WORK

Education feedback The quality of an online education platform depends on the feedback it can provide to its students. Low-quality or missing feedback can greatly reduce the motivation of students to continue engaging with an exercise (O'Rourke et al., 2014). Currently, platforms like Code.org that offer block-based programming use syntactic analysis to construct hints and feedback (Price & Barnes, 2017). The current state-of-the-art introduces a method for providing coding feedback that works for assignments of up to approximately 10 lines of code (Wu et al., 2019). The method does not easily generalize to more complicated programming languages. Our method sidesteps the complexity of static code analysis and instead focuses on analyzing the MDP specified by the game environment.

Generalization in RL We are deciding whether an input MDP belongs to a class of MDPs up to some generalization, where the generalization represents the creative expressions of the students. A benchmark has been developed to measure a trained agent's ability to generalize to procedurally generated, unseen settings of a game (Cobbe et al., 2019b;a). Unlike procedurally generated environments, where a procedure specifies the hyperparameters of the environment, our environment is completely determined by the student program. For example, a random theme change can happen whenever the ball hits the wall.
We test state-of-the-art algorithms that focus on image augmentation techniques on our environment (Lee et al., 2019; Laskin et al., 2020).

4 METHOD

4.1 POLICY LEARNING

Given an observation s_t, we first use a convolutional neural network (CNN), the same as the one used in Mnih et al. (2015), as a feature extractor over pixel observations. To accumulate multi-step information, such as velocity, we use a long short-term memory (LSTM) network. We construct a one-layer fully connected policy network and value network that take the last hidden state of the LSTM as input. We use Proximal Policy Optimization (PPO), a state-of-the-art on-policy RL algorithm, to learn the parameters of our model (Schulman et al., 2017). PPO utilizes actor-critic style training (Mnih et al., 2016) and learns a policy π(a_t|s_t) as well as a value function for the policy:

\[ V^\pi(s_t) = \mathbb{E}_{\tau \sim \pi_\theta} \Big[ \sum_{t'=0}^{T} \gamma^{t'} r_{t'} \,\Big|\, s_0 = s_t \Big] \]

For each episode of agent training, we randomly sample one environment from a fixed set of correct environments in our reference programs: e ∼ E*_+. The empirical size of E_+ (the number of unique correct programs) in our dataset is 107,240. We focus on two types of strategies: one assumes no domain knowledge, which is more realistic; the other assumes adequate representation of the possible combinations of visual appearances.

Baseline training: |E*_+| = 1. We train only on the environment specified by the standard program displayed in Figure 2a. This serves as the baseline.

Data augmentation training: |E*_+| = 1. This is the domain-agnostic strategy, where we include only the standard program (representing our lack of knowledge about possible visual differences in student-submitted games). We apply a state-of-the-art RL generalization training algorithm that augments our pixel-based observations (Laskin et al., 2020). We adopt the top-performing augmentations (cutout, cutout-color, color-jitter, gray-scale). These augmentations randomly change colors or apply partial occlusion to the visual observations so that the agent is more robust to visual disturbance, which translates to better generalization.

Mix-theme training: |E*_+| = 8. This is the domain-aware strategy, where we include 8 correct environments in our reference environment set, each representing a combination of the "hardcourt" or "retro" theme for the paddle, ball, and background. The screenshots of all 8 combinations can be seen in Figure 1. This does not account for dynamics where random theme changes occur during game play, but it does guarantee that the observation state s will always have been seen by the network.

4.2 CLASSIFIER LEARNING

We design a classifier that can take inputs from the environment as well as from the trained agent. The trajectory τ = (s_0, a_0, r_0, s_1, a_1, r_1, ...) includes both the state observations and the rewards returned by the environment. We build a feature map φ(τ, π) to produce the input for the classifier. We want to select features that are most representative of the dynamics and reward model of the MDP that describes the environment. The pixel-based states s_t carry the most information but are also the most unstructured; instead, we choose to focus on the total reward and the per-step reward trajectory.

Total reward A simple feature that can distinguish between different MDPs is the total reward R(τ) = Σ_t r_t. Intuitively, incorrect environments could result in an agent not being able to get any reward, or in extremely high negative or positive reward.
Anticipated reward A particular type of error in our setting is a "reward design" error. An example is displayed in Figure 3(c), where a point is scored when the ball hits the paddle. This type of mistake is much harder to catch with the total reward alone. By observing the relationship between V^π(s_t) and r_t, we can build an N-th order Markov model to predict r_t given the previous N-step history V^π(s_{t−N+1}), ..., V^π(s_t). If we train this model on the correct reference environments, then r̂ tells us what reward trajectory the agent anticipates in a correct environment. We can then compute the Hamming distance between the predicted reward trajectory r̂ and the observed reward trajectory r (a code sketch of this model is given before the results below):

\[ p(r_0, r_1, r_2, \ldots \mid v) = p(r_0) \prod_{t=N}^{T} p\big(r_t \mid V^\pi(s_{<t})\big) \approx p(r_0) \prod_{t=1}^{T} p\big(r_t \mid V^\pi(s_{t-N+1}), \ldots, V^\pi(s_t)\big) \]

\[ \hat{r} = \arg\max_{\hat{r}} p(\hat{r} \mid v), \qquad d(r, \hat{r}) = \mathrm{Hamming}(r, \hat{r}) / T \]

[Figure 4: V^π(s_t) indicates the model's anticipation of future reward along a trajectory, plotted against the observed r_t.]

Code-as-text As a baseline, we also explore a classifier that completely ignores the trained agent. We turn the program text into a count-based 189-feature vector (7 conditions × 27 commands), where each feature represents the number of times a command is repeated under a condition.

5 EXPERIMENT

5.1 TRAINING

Policy training We train the agent under each generalization strategy for 6M time steps in total. We use a 256-dimensional LSTM hidden state and train with a 128-step state history. We train each agent until it reaches maximal reward in its respective training environment.

Classifier training We use a fully connected network with one hidden layer of size 100 and tanh activation as the classifier. We use the Adam optimizer (Kingma & Ba, 2014) and train for 10,000 iterations until convergence. The classifier is trained on the features provided by φ(τ, π). We additionally set a heuristic threshold: if d(r, r̂) < 0.6, we classify the program as having a reward design error. To train the classifier, we sample 16 trajectories by running the agent on the 8 correct and 10 incorrect reference environments. We set the window size of the Markov trajectory prediction model to N = 5 and train a logistic regression model over pairs ((V^π(s_{t−4}), ..., V^π(s_t)), r_t) sampled from the correct reference environments. During evaluation, we vary the number of sampled trajectories (K), and when K > 1, we average the probabilities over the K trajectories.

5.2 GRADING PERFORMANCE

We evaluate the performance of our classifier over three sets of features, shown in Table 1. Since we can sample many trajectories from each MDP, we vary the number of samples (K) to show how the performance of the different classifiers changes. When we treat code as text, the representation is fixed, so K = 1 for that setting. We set a maximal number of steps of 1,000 for each trajectory; at the frame rate of our game (50), this corresponds to 20 seconds of game play. We terminate and reset the environment after 3 balls have been shot in. When the agent wins or loses 3 balls, we give an additional +100 or −100 to mark the winning or losing of the game. We evaluate over the most frequent 8,359 unique programs, which cover 50.1% of overall submissions. Since the tail contains many more unique programs, we sample 500 programs uniformly and evaluate our classifier's performance on them.
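As referenced in Section 4.2, here is a minimal sketch of the anticipated-reward model and its Hamming-distance feature. This is our illustration: the (values, rewards) trajectory format and helper names are assumptions, and scikit-learn's logistic regression stands in for whatever implementation the authors used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_reward_model(reference_trajs, n=5):
    """Fit p(r_t | V(s_{t-n+1}), ..., V(s_t)) on trajectories collected
    from the correct reference environments."""
    X, y = [], []
    for values, rewards in reference_trajs:
        for t in range(n - 1, len(rewards)):
            X.append(values[t - n + 1 : t + 1])   # n-step value history
            y.append(rewards[t])                  # discrete reward class
    return LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))

def reward_distance(model, values, rewards, n=5):
    """d(r, r_hat): normalized Hamming distance between the observed and
    anticipated reward sequences, compared in the paper against a 0.6
    threshold when flagging reward design errors."""
    X = np.stack([values[t - n + 1 : t + 1] for t in range(n - 1, len(rewards))])
    r_hat = model.predict(X)
    return float(np.mean(np.asarray(rewards[n - 1 :]) != r_hat))
```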
We can see that, using the trajectories sampled by the agent, we reach very high accuracy even with only 18 labeled reference MDPs for training and even when we sample very few trajectories. Overall, MDPs in the tail of the distribution are much harder to classify than MDPs from the head and body of the distribution. This is perhaps due to the distribution shift that occurs in long-tail classification problems. When we add reward anticipation as a feature of the classifier, we outperform using the total reward only. 5.3 GENERALIZATION PERFORMANCE One of our stated goals is for the trained agent π to obtain high expected reward in all correct environments E+, even though π is only trained on the reference environments E⋆+. We compare different training strategies that allow the agent to generalize to unseen dynamics. In our evaluation, we sample e from the head, body, and tail of the E+ distribution. Since we have only 51 environments labeled as correct in the head of the distribution, we evaluate the agents on all of them. For the body and tail portions of the distribution, we sample 100 correct environments each. The reward scheme is as follows: each ball in the goal earns +20 reward, and each ball that misses the paddle earns −10 reward. The game terminates after 5 balls have gone in or been missed, making the total reward range [−50, 100]. We show the results in Figure 5. Since every agent has been trained on the reference environment specified by the standard program, they all perform relatively well there. Some of the data augmentation strategies (except color jitter) actually help the agent achieve higher reward on the reference environment. However, when we sample correct environments from the body and tail of the distribution, every training strategy except “Mixed Theme” suffers a significant performance drop. 6 DISCUSSION Visual Features One crucial part of the trajectory is its visual features. We experimented with state-of-the-art video classifiers such as R3D (Tran et al., 2018) by sampling a couple of thousand videos from both the correct and the incorrect reference environments. This approach did not give us a classifier whose accuracy is beyond random chance. We suspect that video classification suffers from poor sample efficiency. Moreover, the difference that separates correct environments from incorrect ones concerns relational reasoning over objects more than identifying a few pixels in a single frame (e.g., “the ball goes through the goal, disappears, but never gets launched again” is an error, but this error is not the result of any single frame). Nested objective Our overall objective (Equation 1) is a nested objective in which the policy and the classifier work collaboratively to minimize the overall classification loss. In this paper, however, we took the approach of heuristically defining the optimal criterion for a policy: optimize for expected reward in all correct environments. This is because our policy has orders of magnitude more parameters (LSTM+CNN) than our classifier (a one-layer FCN); considering the huge parameter space to search through and the sparsity of the signal, joint optimization could be very difficult. However, there are bugs that need to be intentionally triggered, such as the classic sticky-action bug in game design, where, when the ball hits the paddle at a particular angle, it appears to be “stuck” on the paddle and cannot bounce off.
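For completeness, here is one plausible shape of the overall grading loop from Sections 5.1–5.2: sample K trajectories from a candidate environment, build the feature map φ (total reward plus d(r, r̂)), and average the classifier's probabilities over the K samples. The Gym-style env interface and the agent.act API are assumptions, and reward_design_distance refers to the sketch above; this is a reading of the text, not the authors' implementation.

```python
# Sketch of the grading loop: roll out the trained agent K times in a
# candidate environment, featurize each trajectory, and average the
# classifier's P(correct) over the K samples.
import numpy as np

def rollout(env, agent, max_steps=1000):
    # One episode, recording value estimates V(s_t) and rewards r_t.
    obs = env.reset()
    state, values, rewards = None, [], []
    for _ in range(max_steps):
        action, value, state = agent.act(obs, state)   # assumed agent API
        obs, reward, done, _ = env.step(action)
        values.append(value)
        rewards.append(reward)
        if done:
            break
    return np.asarray(values), np.asarray(rewards)

def grade(env, agent, clf, reward_model, k=8):
    probs, design_error = [], False
    for _ in range(k):
        values, rewards = rollout(env, agent)
        d = reward_design_distance(reward_model, values, rewards)
        design_error = design_error or d < 0.6        # heuristic threshold, Sec. 5.1
        phi = np.array([[rewards.sum(), d]])          # feature map phi(tau, pi)
        probs.append(clf.predict_proba(phi)[0, 1])    # P(correct) per trajectory
    return (not design_error) and float(np.mean(probs)) > 0.5
```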
This type of bug, though not present in our setting, requires collaboration between the policy and the classifier to uncover. 7 CONCLUSION We introduce the Play to Grade challenge, in which we formulate the problem of grading interactive coding assignments as classifying Markov Decision Processes (MDPs). We propose a simple solution that achieves 94.1% accuracy over 50% of student submissions. Our approach is not specific to the coding assignment of our choice and can scale feedback for real-world use.
1. What is the focus of the paper regarding deep reinforcement learning?
2. What are the strengths of the proposed approach, particularly in addressing real-world constraints?
3. What are the limitations of the work, such as scalability issues with the annotation system and handling long-tailed distributions?
4. How does the reviewer assess the paper's overall contribution and progress in automatically grading programming assignments?
5. Are there any suggestions or recommendations for future work to improve the approach and address open questions?
Review
Review
The authors identify a relatively novel problem to solve with deep reinforcement learning: automatically grading programs that require dynamic input from the user to judge implementation correctness. In doing so, they also introduce a new dataset and a baseline benchmark that can be used to encourage further work in the area.

Some highlights:
- The work tries to take into account various real-world constraints of the problem. The authors have realistic expectations for what a teacher would have: one or more correct implementations and one or more incorrect ones.
- They deal with the possibility that students may choose to be creative and customize the visual aesthetic of the game in a multitude of ways, and this must be addressed appropriately.
- They are methodical about defining the different parts of the evaluation distribution (head + body + tail) and the reasoning behind this.
- They use a few different methodologies for training based on changing the data distribution as well as the reward structure. This is a reasonable sort of ablation study.

Some limitations of the work:
- The work's only example game is 'Bounce' on Code.org. This is one of the most popular games and has an extremely large body of student submissions, which gives breadth to the training distribution.
- The programming language is 'block-based', which is extremely structured, distinctly more so than regular imperative programming languages.
- The annotation system used for Bounce will not scale to other problems.
- The authors acknowledge the challenge of handling long-tailed distributions in their classification problem. This is probably going to be a significant hurdle for adapting this technique to other programming languages/tasks where the tails are even longer.
- The task of grading is also something that should not produce many false negatives (grading a submission as incorrect when it was correct), since that is not good for students.

Some other comments:
- There appear to be a couple of grammar typos and one spelling-related typo in the paragraph immediately before the start of Section 2.2.
- Right above Section 2, consider changing the definition to E+ = E⋆+ ∪ E′+ (sorry, couldn't figure out how to do the + subscript without markdown treating the underscores as signals for italicizing).
- In the paragraph right before Section 5.2, it would seem that there were 18 trajectories (rather than the stated 16) if you ran on 8 correct and 10 incorrect reference environments.

Overall, considering the three types of comments above, I believe the paper is an interesting piece of work that makes some rather preliminary progress on a new methodology for automatically grading programming assignments that require dynamic input. However, it leaves some questions up in the air, namely how this approach would scale to different assignments. I would say 'Bounce' represents quite possibly the easiest programming assignment of this category, so it is not the most convincing. I would suggest doing the same sort of process for at least a couple of other games, even if they all use the same 'block programming' language from code.org. That would show how much by-hand engineering is required to address the long tails, the different visual aspects of the game, the length and complexity of episodes, etc. Until then, I will have to say that this is below the threshold for acceptance.
ICLR
1. What is the novelty of the paper's approach to distinguishing between good and bad student assignment submissions?
2. What are the concerns regarding the weaknesses of the paper, particularly in terms of unclear details and potential inappropriateness for the problem domain?
3. How does the reviewer assess the evaluation method used in the paper, and what are some suggested natural baselines to consider?
4. Are there any questions or suggestions for the authors regarding their training curriculum, use of sequences of frames, and appropriateness of the solution?
5. Are there any language and formatting issues in the paper that need attention?
Review
Review
The authors contribute an approach to automatically distinguish between good and bad student assignment submissions by modeling the submissions as MDPs. The authors hypothesize that satisfactory assignments modeled as MDPs will be more alike than they are to unsatisfactory assignments. This can therefore potentially be used as part of some kind of future automated feedback system. The authors demonstrate this approach on an assignment in which students recreate a simple pong-like environment. They are able to achieve high accuracy over the most common submissions.

The major strength of the paper is the novelty of the application domain. Reinforcement learning is not typically applied to the education domain in this way. I think there's certainly potential. In addition, the results do give a positive signal about this research direction.

I have a number of concerns in terms of weaknesses of the paper. First, many of the details of the work are unclear in the current draft. I am not sure exactly what the architecture for the agent and the classifier was. Further, I'm not sure what exactly the training data was for the agent and the classifier; the authors separately mention the numbers 711,274 (the number of submissions), 18 programs, and a single standard program. Finally, it is unclear exactly how long the agents trained for: the authors only state that they trained the agents "until they can reach maximal reward", without a sense of the number of training episodes.

I think this approach is very interesting, but I'm concerned about whether it's an appropriate one for this problem domain. If the main issue is feedback, then an RL approach is not going to be particularly helpful given RL's black-box nature. Further, since the authors could establish a gold-standard baseline automatically, it's unclear why they would need an automated approach that only looked at pixels anyway (though it's also unclear to me the extent to which the gold-standard method used in this paper reflects how an instructor would evaluate this assignment). This is also a very simple environment that is well suited to being modeled by an MDP (since it is a game), and I'm not convinced that this would scale to more complex environments. Finally, it's unclear to me why student submissions would need to be evaluated from raw pixels and reward values. This seems like an arbitrary constraint, since a CS instructor would have access to the model/MDP/submission directly.

I'm not convinced by the evaluation. The authors only compare to a relatively straightforward "code-as-text" baseline. While the authors mention that they attempted an image classifier and found that static images were not suitably discriminative, I don't see why the authors couldn't have tried sequences of images. I'd also like to see a more natural baseline, such as a hand-written classifier by a CS instructor or some approximation of this. Something as simple as a KNN classifier could also be worth considering as a baseline. Further, it's unclear to me what the evaluation domains were: were they the 250 instances taken from the head and tail parts of the dataset? If so, were these all unique instances?

I think this work is interesting, but too immature for publication at this time. I'm recommending rejection.

Some questions for the authors:
- What was the training curriculum used for the agent(s)? Some clarity across the different numbers of training instances would be appreciated.
- Did the authors attempt to use sequences of frames in a more straightforward classifier, or any other way to represent movement/video? Why not, if not?
- Is there something I'm missing about why this is an appropriate solution to the problem identified by the authors?

The language across the paper is also uneven and some claims are not substantiated. I'd recommend another pass. Some particular points:
- "teacher often has" -> "a teacher often has"
- Is it fair to assume teachers can prepare a few incorrect implementations? I'd like to see a citation to back up this claim, since my understanding is the opposite for complex assignments.
- "a simulator faithfully executes command" -> "a simulator that faithfully executes commands"
- "and will return positive reward when "Score point"" -> "and returns positive rewards when the "Score point""
- "Socre opponent point" -> the "Score opponent point"
- "such simulator" -> "such a simulator"
- Code.org didn't develop this interface; they adapted it from Scratch (Maloney et al.)
- Can you present some evidence that your gold annotation is correct?
- Why is sample efficiency a concern?
ICLR
Title Play to Grade: Grading Interactive Coding Games as Classifying Markov Decision Process Abstract Contemporary coding education often present students with the task of developing programs that have user interaction and complex dynamic systems, such as mouse based games. While pedagogically compelling, grading such student programs requires dynamic user inputs, therefore they are difficult to grade by unit tests. In this paper we formalize the challenge of grading interactive programs as a task of classifying Markov Decision Processes (MDPs). Each student’s program fully specifies an MDP where the agent needs to operate and decide, under reasonable generalization, if the dynamics and reward model of the input MDP conforms to a set of latent MDPs. We demonstrate that by experiencing a handful of latent MDPs millions of times, we can use the agent to sample trajectories from the input MDP and use a classifier to determine membership. Our method drastically reduces the amount of data needed to train an automatic grading system for interactive code assignments and present a challenge to state-of-the-art reinforcement learning generalization methods. Together with Code.org, we curated a dataset of 700k student submissions, one of the largest dataset of anonymized student submissions to a single assignment. This Code.org assignment had no previous solution for automatically providing correctness feedback to students and as such this contribution could lead to meaningful improvement in educational experience. 1 INTRODUCTION The rise of online coding education platforms accelerates the trend to democratize high quality computer science education for millions of students each year. Corbett (2001) suggests that providing feedback to students can have an enormous impact on efficiently and effectively helping students learn. Unfortunately contemporary coding education has a clear limitation. Students are able to get automatic feedback only up until they start writing interactive programs. When a student authors a program that requires user interaction, e.g. where a user interacts with the student’s program using a mouse, or by clicking on button it becomes exceedingly difficult to grade automatically. Even for well defined challenges, if the user has any creative discretion, or the problem involves any randomness, the task of automatically assessing the work is daunting. Yet creating more open-ended assignments for students can be particularly motivating and engaging, and also help allow students to practice key skills that will be needed in commercial projects. Generating feedback on interactive programs from humans is more laborious than it might seem. Though the most common student solution to an assignment may be submitted many thousands of times, even for introductory computer science education, the probability distribution of homework submissions follows the very heavy tailed Zipf distribution – the statistical distribution of natural language. This makes grading exceptionally hard for contemporary AI (Wu et al., 2019) as well as massive crowd sourced human efforts (Code.org, 2014). While code as text has proved difficult to grade, actually running student code is a promising path forward (Yan et al., 2019). We formulate the grading via playing task as equivalent to classifying whether an ungraded student program – a new Markov Decision Process (MDP) – belongs to a latent class of correct Markov Decision Processes (representing correct programming solutions to the assignment). 
Given a discrete set of environments E = {en = (Sn,A, Rn, Pn) : n = 1, 2, 3, ...}, we can partition them into E? and E ′. E? is the set of latent MDPs. It includes a handful of reference programs that a teacher has implemented or graded. E ′ is the set of environments specified by student submitted programs. We are building a classifier that determines whether e, a new input decision process is behaviorally identical to the latent decision process. Prior work on providing feedback for code has focused on text-based syntactic analysis and automatically constructing solution space (Rivers & Koedinger, 2013; Ihantola et al., 2015). Such feedback orients around providing hints and unable to determine an interactive program’s correctness. Other intelligent tutoring systems focused on math or other skills that don’t require creating interactive programs (Ruan et al., 2019; 2020). Note that in principle one could analyze the raw code and seek to understand if the code produces a dynamics and reward model that is isomorphic to the dynamics and reward generated by a correct program. However, there are many different ways to express the same correct program and classifying such text might require a large amount of data: as a first approach, we avoid this by instead deploying a policy and observing the resulting program behavior, thereby generating execution traces of the student’s implicitly specified MDP that can be used for classification. Main contributions in this paper: • We introduce the reinforcement learning challenge of Play to Grade. • We propose a baseline algorithm where an agent learns to play a game and use features such as total reward and anticipated reward to determine correctness. • Our classifier obtains 93.1% accuracy on 8359 most frequent programs that cover 50% of the overall submissions and achieve 89.0% accuracy on programs that are submitted by less than 5 times. We gained 14-19% absolute improvement over grading programs via code text. • We will release a dataset of over 700k student submissions to support further research. 2 THE PLAY TO GRADE CHALLENGE We formulate the challenge with constraints that are often found in the real world. Given an interactive coding assignment, teacher often has a few reference implementations of the assignment. Teachers use them to show students what a correct solution should look like. We also assume that the teacher can prepare a few incorrect implementations that represent their “best guesses” of what a wrong program should look like. To formalize this setting, we consider a set of programs, each fully specifies an environment and its dynamics: E = {en = (Sn,A, Rn, Pn) : n = 1, 2, 3, ...}. A subset of these environments are reference environments that are accessible during training, we refer to them as E?, and we also have a set of environments that are specified by student submitted programs E ′. We can further specify a training set D = {(τ i, yi); y ∈ {0, 1}} where τ i ∼ π(e(i)) and e(i) ∼ E?, and a test set Dtest where e(i) ∼ E ′. The overall objective of this challenge is: minL(θ) = min θ min π Ee∼E [Eτ ′∼π(e)[L(pθ(φ(τ ′, π)), y)]] (1) We want a policy that can generate trajectory τ that can help a classifier easily distinguish between an input environment that is correctly implemented and one that is not. We also allow a feature mapping function φ that takes the trajectory and estimations from the agent as intput and output features for the classifier. 
We can imagine a naive classifier that labels any environment that is playable (defined by being able to obtain rewards) by our agent as correct. A trivial failure case for this classifier would be that if the agent is badly trained and fails to play successfully in a new environment (returning zero reward), we would not know if zero reward indicates the wrongness of the program or the failure of our agent. Generalization challenge In order to avoid the trivial failure case described above – the game states observed are a result of our agent’s failure to play the game, not a result of correctness or wrongness of the program, it is crucial that the agent operates successfully under different correct environments. For any correct environment, E+ = {E?+, E ′+}, the goal is for our agent to obtain the high expected reward. π? = argmax π Ee∼E+ [Eτ∼π(e)[R(τ)]] (2) Additionally, we choose the state space to be the pixel-based screenshot of the game. This assumption imposes the least amount of modification on thousands of games that teaching platforms have created for students over the years. This decision poses a great challenge to our agent. When students create a game, they might choose to express their creativity through a myriad of ways, including but not limited to using exciting background pictures, changing shape or color or moving speed of the game objects, etc. Some of these creative expressions only affect game aesthetics, but other will affect how the game is played (i.e., changing the speed of an object). The agent needs to succeed in these creative settings so that the classifier will not treat creativity as incorrect. 2.1 BOUNCE GAME SIMULATOR We pick the coding game Bounce to be the main game for this challenge. Bounce is a block-based educational game created to help students understand conditionals1. We show actual game scenes in Figure 1, and the coding area in Figure 2a. 1https://studio.code.org/s/course3/stage/15/puzzle/10 The choice gives us three advantages. First, the popularity of this game on Code.org gives us an abundance of real student submissions over the years, allowing us to compare algorithms with real data. Second, a block-based program can be easily represented in a structured format, eliminating the need to write a domain-specific parser for student’s program. Last, in order to measure real progress of this challenge, we need gold labels for each submission. Block-based programming environment allows us to specify a list of legal and illegal commands under each condition which will provide perfect gold labels. The Bounce exercise does not have a bounded solution space, similar to other exercises developed at Code.org. This means that the student can produce arbitrarily long programs, such as repeating the same command multiple times (Figure 3(b)) or changing themes whenever a condition is triggered (Figure 3(a)). These complications can result in very different game dynamics. We created a simulator faithfully executes command under each condition and will return positive reward when “Score point” block is activated, and negative reward when “Socre opponent point” block is activated. In deployment, such simulator needs not be created because coding platforms have already created simulators to run and render student programs. 2.2 CODE.ORG BOUNCE DATASET Code.org is an online computer science education platform that teaches beginner programming. They designed a drag-and-drop interface to teach K-12 students basic programming concepts. 
Our dataset is compiled from 453,211 students. Each time a student runs their code, the submission is saved. In total, there are 711,274 submissions, where 323,516 unique programs were submitted. Evaluation metric In an unbounded solution space, the distribution of student submissions incur a heavy tail, observed by Wu et al. (2019). We show that the distribution of submission in dataset conforms to a Zipf distribution. This suggests that we can partition this dataset into two sections, as seen in Figure 2b. Head + Body: the 8359 most frequently submitted programs that covers 50.5% of the total submissions (359,266). This set contains 4,084 correct programs (48.9%) and 4,275 incorrect programs (51.1%). Tail: This set represents any programs that are submitted less than 5 times. There are 315,157 unique programs in this set and 290,953 of them (92.3%) were only submitted once. We sample 250 correct and 250 incorrect programs uniformly from this set for evaluation. Reference programs Before looking at the student submitted programs, we attempted to solve the assignment ourselves. Through our attempt, we form an understanding of where the student might make a mistake and what different variations of correct programs could look like. Our process can easily be replicated by teachers. We come up with 8 correct reference programs and 10 incorrect reference programs. This can be regarded as our training data. Gold annotations We generate the ground-truth gold annotations by defining legal or illegal commands under each condition. For example, having more than one “launch new ball” under “when run” is incorrect. Placing “score opponent point” under “when run” is also incorrect. Abiding by this logic, we put down a list of legal and illegal commands for each condition. We note that, we intentionally chose the bounce program as it was amenable to generating gold annotations due to the API that code.org exposed to students. While our methods apply broadly, this gold annotation system will not scale to other assignments. The full annotation schema is in Appendix A.5. 3 RELATED WORK Education feedback The quality of an online education platform depends on the feedback it can provide to its students. Low quality or missing feedback can greatly reduce motivation for students to continue engaging with the exercise (O’Rourke et al., 2014). Currently, platforms like Code.org that offers block-based programming use syntactic analysis to construct hints and feedbacks (Price & Barnes, 2017). The current state-of-the-art introduces a method for providing coding feedback that works for assignments up approximately 10 lines of code (Wu et al., 2019). The method does not easily generalize to more complicated programming languages. Our method sidesteps the complexity of static code analysis and instead focus on analyzing the MDP specified by the game environment. Generalization in RL We are deciding whether an input MDP belongs to a class of MDPs up to some generalization. The generalization represents the creative expressions from the students. A benchmark has been developed to measure trained agent’s ability to generalize to procedually generated unseen settings of a game (Cobbe et al., 2019b;a). Unlike procedually generated environment where the procedure specifies hyperparameters of the environment, our environment is completely determined by the student program. For example, random theme change can happen if the ball hits the wall. 
We test the state-of-the-art algorithms that focus on image augmentation techniques on our environment (Lee et al., 2019; Laskin et al., 2020). 4 METHOD 4.1 POLICY LEARNING Given an observation st, we first use a convolutional neural network (CNN), the same as the one used in Mnih et al. (2015) as feature extractor over pixel observations. To accumulate multi-step information, such as velocity, we use a Long-short-term Memory Network (LSTM). We construct a one-layer fully connected policy network and value network that takes the last hidden state from LSTM as input. We use Policy Proximal Optimization (PPO), a state-of-the-art on-policy RL algorithm to learn the parameters of our model (Schulman et al., 2017). PPO utilizes an actor-critic style training (Mnih et al., 2016) and learns a policy π(at|st) as well as a value function V π(st) = Eτ∼πθ [∑T t=0 γ trt|s0 = st ] for the policy. For each episode of the agent training, we randomly sample one environment from a fixed set of correct environments in our reference programs: e ∼ E?+. The empirical size of E+ (number of unique correct programs) in our dataset is 107,240. We focus on two types of strategies. One strategy assumes no domain knowledge, which is more realistic. The other strategy assumes adequate representation of possible combinations of visual appearances. Baseline training : |E?+| = 1. We only train on the environment specified by the standard program displayed in Figure 2a. This serves as the baseline. Data augmentation training : |E?+| = 1. This is the domain agnostic strategy where we only include the standard program (representing our lack of knowledge on what visual differences might be in student submitted games). We apply the state-of-the-art RL generalization training algorithm to augment our pixel based observation (Laskin et al., 2020). We adopt the top-performing augmentations (cutout, cutout-color, color-jitter, gray-scale). These augmentations aim to change colors or apply partial occlusion to the visual observations randomly so that the agent is more robust to visual disturbance, which translates to better generalization. Mix-theme training : |E?+| = 8. This is the domain-aware strategy where we include 8 correct environments in our reference environment set, each represents a combination of either “hardcourt” or “retro” theme for paddle, ball, or background. The screenshots of all 8 combinations can be seen in Figure 1. This does not account for dynamics where random theme changes can occur during the game play. However, this does guarantees that the observation state s will always have been seen by the network. 4.2 CLASSIFIER LEARNING We design a classifier that can take inputs from the environment as well as the trained agent. The trajectory τ = (s0, a0, r0, s1, a1, r1, ...) includes both state observations and reward returned by the environment. We build a feature map φ(τ, π) to produce input for the classifier. We want to select features that are most representative of the dynamics and reward model of the MDP that describes the environment. Pixel-based states st has the most amount of information but also the most unstructured. Instead, we choose to focus on total reward and per-step reward trajectory. Total reward A simple feature that can distinguish between different MDPs is the sum of total reward R(τ) = ∑ t rt. Intuitively, incorrect environments could result in an agent not able to get any reward, or extremely high negative or positive reward. 
Anticipated reward A particular type of error in our setting is called a “reward design” error. An example is displayed in Figure 3(c), where a point was scored when the ball hits the paddle. This type of mistake is much harder to catch with total reward. By observing the relationship between V π(st) and rt, we can build an N-th order Markov model to predict rt, given the previous N-step history of V π(st−n+1), ..., V π(st). If we train this model using the correct reference environments, then r̂ can inform us what the correct reward trajectory is expected by the agent. We can then compute the hamming distance between our predicted reward trajectory r̂ and observed reward trajectory r. p(r0, r1, r2...|v) = p(r0) T∏ t=n p(rt|V π(s<t)) ≈ p(r0) T∏ t=1 p(rt|V π(st−n+1), ..., V π(st)) r̂ = argmax r̂ p(r̂|v) d(r, r̂) = Hamming(r, r̂)/T 0 200 400 600 800 1000 t 10 5 0 5 10 15 20 va lu e Trajectory V(st) rt Figure 4: V π(st) indicates the model’s anticipation of future reward. Code-as-text As a baseline, we also explore a classifier that completely ignores the trained agent. We turn the program text into a count-based 189-feature vector (7 conditions × 27 commands), where each feature represents the number of times the command is repeated under a condition. 5 EXPERIMENT 5.1 TRAINING Policy training We train the agent under different generalization strategy for 6M time steps in total. We use 256-dim hidden state LSTM and trained with 128 steps state history. We train each agent till they can reach maximal reward in their respective training environment. Classifier training We use a fully connected network with 1 hidden layer of size 100 and tanh activation function as the classifier. We use Adam optimizer (Kingma & Ba, 2014) and train for 10,000 iterations till convergence. The classifier is trained with features provided by φ(τ, π). We additionally set a heuristic threshold that if d(r, r̂) < 0.6, we classify the program as having a reward design error. To train the classifier, we sample 16 trajectories by running the agent on the 8 correct and 10 incorrect reference environments. We set the window size of the Markov trajectory prediction model to 5 and train a logistic regression model over pairs of ((V π(st−4), V π(st−3), ..., V π(st)), rt) sampled from the correct reference environments. During evaluation, we vary the number of trajectories we sample (K), and when K > 1, we average the probabilities over K trajectories. 5.2 GRADING PERFORMANCE We evaluate the performance of our classifier over three set of features and show in Table 1. Since we can sample more trajectories from each MDP, we vary the number of samples (K) to show the performance change of different classifiers. When we treat code as text, the representation is fixed, therefore K = 1 for that setting. We set a maximal number of steps to be 1,000 for each trajectory. The frame rate of our game is 50, therefore this correspond to 20 seconds of game play. We terminate and reset the environment after 3 balls have been shot in. When the agent win or lose 3 balls, we give an additional +100 or -100 to mark the winning or losing of the game. We evaluate over the most frequent 8,359 unique programs that covers 50.1% of overall submissions. Since the tail contains many more unique programs, we choose to sample 500 programs uniformly and evaluate our classifier’s performance on them. 
Using the trajectories sampled by the agent, even though we only have 18 labeled reference MDPs for training, we reach very high accuracy even when we sample very few trajectories. Overall, MDPs from the tail of the distribution are much harder to classify than MDPs from the head and body of the distribution, perhaps due to the distribution shift that occurs in long-tail classification problems. When we add reward anticipation as a feature of the classifier, we outperform using the total reward only.

5.3 GENERALIZATION PERFORMANCE

One of our stated goals is for the trained agent $\pi$ to obtain high expected reward on all correct environments $\mathcal{E}_+$, even though $\pi$ is trained only on the reference environments $\mathcal{E}^\star_+$. We compare different training strategies that allow the agent to generalize to unseen dynamics. In our evaluation, we sample $e$ from the head, body, and tail of the $\mathcal{E}_+$ distribution. Since we only have 51 environments labeled as correct in the head of the distribution, we evaluate agents on all of them. For the body and tail portions of the distribution, we sample 100 correct environments each. The reward scheme gives +20 for each ball in the goal and -10 for each ball that misses the paddle. The game terminates after 5 balls have scored or been missed, making the total reward range [-50, 100]. We show the results in Figure 5. Since every agent has been trained on the reference environment specified by the standard program, they all perform relatively well on it. Some of the data augmentation strategies (except color-jitter) actually help the agent achieve higher reward on the reference environment. However, when we sample correct environments from the body and tail of the distribution, every training strategy except "Mixed Theme" suffers a significant performance drop.

6 DISCUSSION

Visual features. One crucial part of the trajectory is the visual features. We experimented with a state-of-the-art video classifier, R3D (Tran et al., 2018), by sampling a couple of thousand videos from both the correct and the incorrect reference environments. This approach did not give us a classifier whose accuracy is beyond random chance. We suspect that video classification suffers from poor sample efficiency here. Moreover, the difference that separates correct environments from incorrect ones is more about relational reasoning over objects than about identifying a few pixels in a single frame (i.e., "the ball goes through the goal, disappears, but never gets launched again" is an error, yet this error is not visible in any single frame).

Nested objective. Our overall objective (Equation 1) is a nested objective, where the policy and the classifier work collaboratively to minimize the overall classification loss. However, in this paper we took the approach of heuristically defining the optimality criterion for the policy: optimize for expected reward in all correct environments. This is because our policy has orders of magnitude more parameters (LSTM+CNN) than our classifier (a one-layer fully connected network). Considering the huge parameter space to search through and the sparsity of the signal, the joint optimization could be very difficult. However, there are bugs that need to be intentionally triggered, such as the classic sticky-action bug in game design, where a ball hitting the paddle at a particular angle appears to get "stuck" on the paddle and cannot bounce off.
This common bug, though not present in our setting, requires collaboration between the policy and the classifier to uncover.

7 CONCLUSION

We introduce the Play to Grade challenge, where we formulate the problem of grading interactive coding assignments as classifying Markov Decision Processes (MDPs). We propose a simple solution that achieves 94.1% accuracy over 50% of student submissions. Our approach is not specific to the coding assignment of our choice and can scale feedback for real-world use.
1. What is the main contribution of the paper regarding coding tasks in schools? 2. What are the strengths and weaknesses of the proposed solution in terms of its ability to coordinate between the policy and the loss function? 3. Why does the reviewer disagree with the idea of using pixel inputs for the grading policy? 4. How does the reviewer suggest improving the paper's writing quality and justification of the proposed heuristic solution? 5. What are the potential extensions of the paper's approach to other domains of interactive programs that could make it a significant contribution to MOOCs grading literature?
Review
Quality: Okay. Clarity: Poor. Originality: Good at problem formulation, but then becomes bad at the solution. Significance: Could be very significant if done right (not quite there yet).

List of Cons:

A. Needs improvement on writing:

1) The task of the students was confusing. Specifically, for the first 2 pages I was completely lost on what the student is supposed to be implementing. I had the misconception that the coding task is something like Karel, where the student implements a program that is a policy and is graded on how well the policy acts in a given game environment. That is not what this paper is about. I think the paper should have started with something like: "Contemporary coding tasks in schools commonly ask the students to implement game environments, for instance, an assignment where the students implement the game PONG and are graded on whether the resulting game plays like PONG. This type of assignment grading requires interaction as part of the grading process . . ." This would clear up the confusion.

2) Equation 1) is unreadable. What does y = 0 mean? What is \tau? What is \pi? They are not explained before or after the equation with enough English sentences to intuitively explain what the loss function is doing. As a result, I cannot decipher this giant loss function that, supposedly when minimized according to \theta (and what is \theta parameterizing? the game-playing policy? the classifier of whether an environment is correct?), would have solved the challenge of playing to grade.

3) Equation 2) feels out of place: why all of a sudden are we maximizing reward on an environment? I thought we wanted to have the "most discerning policy". See C.1). The authors should make it clear that 2) is already a heuristic, and justify up-front why this concession is made, instead of mentioning it only in the discussion after the whole paper has ended: "However, in this paper, we took the approach of heuristically defining the optimal criteria for a policy - optimize for expected reward in all correct environments."

B. Needs better motivation of the challenges:

1) I am not convinced that the grading policy should be run on pixel inputs. I think this is an unwarranted challenge. The authors consider changes to the perceptual input such as "random theme change can happen if the ball hits the wall", which seems highly limiting. What would happen if the student added a strobe effect that simply flashes the screen in all possible colors? What would happen if the student flipped the world upside-down? The authors remark that pixel-based states carry the most information but are also the least structured. I agree, and I think one could easily resolve this issue by re-structuring the assignment such that the positions of the balls and paddle can be easily extracted.

C. Needs better justification/explanation of the proposed heuristic solution:

1) Rather than training the agent to perform well on a game, one should be training agents that break the game in interesting ways, to discover bugs in the environment. For instance, the agent could first try to move the paddle past the left side of the screen; one can imagine a faulty solution would allow the paddle to slide infinitely off the screen. This paper does not do this; instead, a heuristic of maximizing the reward on any given environment is proposed as a good proxy, due to the difficulty of jointly optimizing the classifier and the policy, again a consequence of the authors choosing to use a CNN over pixels as the policy representation.
List of Pros:

- Great work curating the dataset and making it runnable.
- Great work specifying the problem statement and developing a working (albeit misplaced) solution.

Overall Comment: This paper had great potential. It tackles a class of programs that would otherwise be very difficult to grade, because they require interaction. The problem statement of coordination between the policy and the loss function is beautiful, and it would have been great to see it solved. However, the paper took a bad turn by taking pixel inputs for no apparent reason, despite owning everything in the simulation stack, from the source code to the rendering engine. This decision ultimately resulted in the paper being unable to solve the coordinated optimization task; it resorted to a heuristic objective for the policy, and had to rely on a suite of data augmentation techniques to handle pixel input, an entirely artificial problem the authors created for themselves. A pixel-level input is necessary when one is training RL agents that interact with the real world, where the "game engine" is unknown, but here the students are literally implementing the game engine and handing it to you to grade. Combined with the poor writing quality, I think it is not yet ready for publication. However, this is a really good problem, and I would like to see this work developed further, especially in solving the joint-optimization problem, where the agent maximizes for a "maximally informative trajectory" w.r.t. the classifier. Extending this approach to additional domains of interactive programs would solidify this paper as one of the significant contributions to the MOOC-grading line of literature.

Final recommendation: I maintain my score. I think the other reviewer summarized it best: "interesting but immature". I hope to see this work developed and published in the future.
ICLR
Title
Shallow and Deep Networks are Near-Optimal Approximators of Korobov Functions

Abstract
In this paper, we analyze the number of neurons and training parameters that a neural network needs to approximate multivariate functions of bounded second mixed derivatives, the Korobov functions. We prove upper bounds on these quantities for shallow and deep neural networks, drastically lessening the curse of dimensionality. Our bounds hold for general activation functions, including ReLU. We further prove that these bounds nearly match the minimal number of parameters any continuous function approximator needs to approximate Korobov functions, showing that neural networks are near-optimal function approximators.

1 INTRODUCTION

Neural networks have known tremendous success in many applications such as computer vision and pattern detection (Krizhevsky et al., 2017; Silver et al., 2016). A natural question is how to explain their practical success theoretically. Neural networks are known to be universal (Hornik et al., 1989): any Borel-measurable function can be approximated arbitrarily well by a neural network with a sufficient number of neurons. Furthermore, universality holds for networks with as few as one hidden layer and reasonable activation functions. However, these results do not specify the needed number of neurons and training parameters. If these numbers are unreasonably high, the universality of neural networks would not explain their practical success. We are interested in evaluating the number of neurons and training parameters needed to approximate a given function within $\epsilon$ with a neural network. An interesting question is how these numbers scale with $\epsilon$ and the dimensionality of the problem, i.e., the number of variables. Mhaskar (1996) showed that any function of the Sobolev space of order $r$ and dimension $d$ can be approximated within $\epsilon$ by a 1-layer neural network with $O(\epsilon^{-d/r})$ neurons and an infinitely differentiable activation function. This bound exhibits the curse of dimensionality: the number of neurons needed for an $\epsilon$-approximation scales exponentially in the dimension $d$ of the problem. Mhaskar's bound thus raises the question of whether this curse is inherent to neural networks. Towards answering this question, DeVore et al. (1989) proved that any continuous function approximator (see Section 5) that approximates all Sobolev functions of order $r$ and dimension $d$ within $\epsilon$ needs at least $\Theta(\epsilon^{-d/r})$ parameters. This result meets Mhaskar's bound and confirms that neural networks cannot escape the curse of dimensionality for the Sobolev space. A main question is then for which set of functions neural networks can break this curse of dimensionality. One way to circumvent the curse of dimensionality is to restrict the considered space of functions considerably and focus on specific structures adapted to neural networks. For example, Mhaskar et al. (2016) showed that compositional functions with regularity $r$ can be approximated within $\epsilon$ by deep neural networks with $O(d \cdot \epsilon^{-2/r})$ neurons. Other structural constraints have been considered for compositions of functions (Kohler & Krzyżak, 2016), piecewise smooth functions (Petersen & Voigtlaender, 2018; Imaizumi & Fukumizu, 2019), or structures on the data space, e.g., data lying on a manifold (Mhaskar, 2010; Nakada & Imaizumi, 2019; Schmidt-Hieber, 2019).
Approximation bounds have also been obtained for function approximation from data under smoothness constraints (Kohler & Krzyżak, 2005; Kohler & Mehnert, 2011), and specifically for mixed smooth Besov spaces, which are known to circumvent the curse of dimensionality (Suzuki, 2018). Another example is the class of Sobolev functions of order $d/\alpha$ and dimension $d$, for which Mhaskar's bound becomes $O(\epsilon^{-\alpha})$. Recently, Montanelli et al. (2019) considered bandlimited functions and showed that they can be approximated within $\epsilon$ by deep networks with depth $O((\log\frac{1}{\epsilon})^2)$ and $O(\epsilon^{-2}(\log\frac{1}{\epsilon})^2)$ neurons. Weinan et al. (2019) showed that the closure of the space of 2-layer neural networks with a specific regularity (namely, a restriction on the size of the network's weights) is the Barron space. They further show that Barron functions can be approximated within $\epsilon$ by 2-layer networks with $O(\epsilon^{-2})$ neurons. A similar line of work restricts the function space with spectral conditions, writing functions as limits of shallow networks (Barron, 1994; Klusowski & Barron, 2016; 2018). In this work, we are interested in more general and generic spaces of functions. Our space of interest is the space of multivariate functions of bounded second mixed derivatives, the Korobov space. This space is included in the Sobolev space but is reasonably large and general. The Korobov space presents two motivations. First, it is a natural candidate for a large and general space included in the Sobolev space where numerical approximation methods can overcome the curse of dimensionality to some extent (see Section 2.1). Second, Korobov spaces are practically useful for solving partial differential equations (Korobov, 1959) and have been used for high-dimensional function approximation (Zenger & Hackbusch, 1991; Zenger, 1991). Recently, Montanelli & Du (2019) showed that deep neural networks with depth $O(\log\frac{1}{\epsilon})$ and $O(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}+1})$ neurons can approximate Korobov functions within $\epsilon$, lessening the curse of dimensionality for deep neural networks asymptotically in $\epsilon$. While they used deep structures to prove their result, the question of whether shallow neural networks also break the curse of dimensionality for the Korobov space remained open. In this paper, we study the approximation power of deep and shallow neural networks for the Korobov space and make the following contributions:

1. Representation power of shallow neural networks. We prove that any Korobov function can be approximated within $\epsilon$ by a 2-layer neural network with ReLU activation, $O(\epsilon^{-1}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}+1})$ neurons and $O(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ training parameters (Theorem 3.1). We further extend this result to a large class of commonly used activation functions (Theorem 3.4). Asymptotically in $\epsilon$, our bound can be written as $O(\epsilon^{-1-\delta})$ for all $\delta > 0$, and in that sense breaks the curse of dimensionality for shallow neural networks.

2. Representation power of deep neural networks. We show that any function of the Korobov space can be approximated within $\epsilon$ by a deep neural network of depth $\lceil\log_2(d)\rceil + 1$, independent of $\epsilon$, with a non-linear $C^2$ activation function, $O(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ neurons and $O(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ training parameters (Theorem 4.1). This result improves on Montanelli & Du (2019), who constructed an approximating neural network with larger depth $O(\log\frac{1}{\epsilon}\log d)$ (increasing as $\epsilon \to 0$) and a larger number of neurons $O(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}+1})$. However, they used the ReLU activation function.

3.
Near-optimality of neural networks as function approximators. Under the continuous function approximator model introduced by DeVore et al. (1989), we prove that any continuous function approximator needs $\Theta(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{d-1}{2}})$ parameters to approximate Korobov functions within $\epsilon$ (Theorem 5.2). This lower bound nearly matches our upper bounds on the number of training parameters needed by deep and shallow neural networks to approximate functions of the Korobov space, proving that they are near-optimal function approximators of the Korobov space.

Table 1 summarizes our new bounds and existing bounds on the approximation power of shallow and deep neural networks for the Korobov space, the Sobolev space and bandlimited functions. Our proofs are constructive and give explicit structures for such neural networks with ReLU and general activation functions. Our constructions rely on the sparse grid approximation introduced by Zenger (1991) and studied in detail by Bungartz (1992); Bungartz & Griebel (2004). Specifically, we use the sparse grid approach to approximate smooth functions by sums of products, then construct neural networks which approximate this structure. A key difficulty is approximating the product function. In particular, for shallow neural networks, we propose, to the best of our knowledge, the first architecture approximating the product function with a polynomial number of neurons. To derive our lower bound on the number of parameters needed to approximate the Korobov space, we construct a linear subspace of the Korobov space with large Bernstein width. This subspace is then used to apply a general lower bound on nonlinear approximation derived by DeVore et al. (1989).

The rest of the paper is structured as follows. In Section 2, we formalize our objective and introduce the sparse grid approach. In Section 3 (resp. 4), we prove our bounds on the number of neurons and training parameters for Korobov function approximation with shallow (resp. deep) networks. Finally, in Section 5 we formalize the notion of optimal continuous function approximators and prove our novel near-optimality result.

2 PRELIMINARIES

In this work, we consider feed-forward neural networks with a linear output neuron and a nonlinear activation function $\sigma : \mathbb{R} \to \mathbb{R}$ for the other neurons, such as the popular rectified linear unit (ReLU) $\sigma(x) = \max(x, 0)$, the sigmoid $\sigma(x) = (1 + e^{-x})^{-1}$, or the Heaviside function $\sigma(x) = \mathbb{1}_{\{x \geq 0\}}$. Let $d \geq 1$ be the dimension of the input. We define a 1-hidden-layer network with $N$ neurons as $x \mapsto \sum_{k=1}^{N} u_k \sigma(w_k^\top x + b_k)$, where $w_k \in \mathbb{R}^d$ and $b_k \in \mathbb{R}$, for $k = 1, \ldots, N$, are parameters. A neural network with several hidden layers is obtained by feeding the outputs of a given layer as inputs to the next layer. We study the expressive power of neural networks, i.e., their ability to approximate a target function $f : \mathbb{R}^d \to \mathbb{R}$ with as few neurons as possible, on the unit hypercube $\Omega := [0,1]^d$. Another relevant metric is the number of parameters that need to be trained to approximate the function, i.e., the number of parameters of the approximating network ($u_k$, $w_k$ and $b_k$) that depend on the function to approximate. We adopt the $L^\infty$ norm as the measure of approximation error. We now introduce some notation needed to define our function spaces of interest. For an integer $r$, we denote by $C^r$ the space of one-dimensional functions that are $r$ times differentiable with continuous derivatives. In our analysis, we consider functions $f$ with bounded mixed derivatives.
For a multi-index $\alpha \in \mathbb{N}^d$, the derivative of order $\alpha$ is $D^\alpha f := \frac{\partial^{|\alpha|_1} f}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}$, where $|\alpha|_1 = \sum_{i=1}^d |\alpha_i|$. Two common function spaces on a compact $\Omega \subset \mathbb{R}^d$ are the Sobolev spaces $W^{r,p}(\Omega)$ of functions having weak partial derivatives up to order $r$ in $L^p(\Omega)$, and the Korobov spaces $X^{r,p}(\Omega)$ of functions vanishing on the boundary and having weak mixed derivatives up to order $r$ in $L^p(\Omega)$:

$$W^{r,p}(\Omega) = \{f \in L^p(\Omega) : D^\alpha f \in L^p(\Omega), \ |\alpha|_1 \leq r\},$$
$$X^{r,p}(\Omega) = \{f \in L^p(\Omega) : f|_{\partial\Omega} = 0, \ D^\alpha f \in L^p(\Omega), \ |\alpha|_\infty \leq r\},$$

where $\partial\Omega$ denotes the boundary of $\Omega$, and $|\alpha|_1 = \sum_{i=1}^d |\alpha_i|$ and $|\alpha|_\infty = \sup_{i=1,\ldots,d} |\alpha_i|$ are respectively the $L^1$ and infinity norms. Note that the Korobov spaces $X^{r,p}(\Omega)$ are subsets of the Sobolev spaces $W^{r,p}(\Omega)$. For $p = \infty$, the usual norms on these spaces are given by

$$|f|_{W^{r,p}(\Omega)} := \max_{|\alpha|_1 \leq r} \|D^\alpha f\|_\infty, \qquad |f|_{X^{r,p}(\Omega)} := \max_{|\alpha|_\infty \leq r} \|D^\alpha f\|_\infty.$$

For simplicity, we write $|\cdot|_{2,\infty}$ for $|\cdot|_{X^{2,\infty}}$. We focus our analysis on approximating functions of the Korobov space $X^{2,\infty}(\Omega)$, for which the curse of dimensionality is drastically lessened, and we show that neural networks are near-optimal for it. Intuitively, a key difference with the Sobolev space is that Korobov functions cannot have high-frequency oscillations in all directions at a time. Such functions may require an exponential number of neurons (Telgarsky, 2016) and are one of the main difficulties for Sobolev space approximation, which therefore exhibits the curse of dimensionality (DeVore et al., 1989). On the contrary, the Korobov space prohibits such behaviour by ensuring that functions can be differentiated twice in all dimensions simultaneously. Further discussion and concrete examples are given in Appendix A.

2.1 THE CURSE OF DIMENSIONALITY

We adopt the point of view of asymptotic results in $\epsilon$ (or, equivalently, in the number of neurons), which is a well-established setting in the literature on the representation power of neural networks (Mhaskar, 1996; Bungartz & Griebel, 2004; Yarotsky, 2017; Montanelli & Du, 2019) and in numerical analysis (Novak, 2006). In the rest of the paper, we use $O$ notation that hides constants in $d$. For each result, full dependencies on $d$ are provided in the appendix. Previous efforts to quantify the number of neurons needed to approximate large, general classes of functions showed that neural networks and most classical function approximation schemes exhibit the curse of dimensionality. For example, for Sobolev functions, Mhaskar proved the following approximation bound.

Theorem 2.1 (Mhaskar (1996)). Let $p, r \geq 1$, and let $\sigma : \mathbb{R} \to \mathbb{R}$ be an infinitely differentiable activation function, non-polynomial on any interval of $\mathbb{R}$. Let $\epsilon > 0$ be sufficiently small. For any $f \in W^{r,p}$, there exists a shallow neural network with one hidden layer, activation function $\sigma$, and $O(\epsilon^{-d/r})$ neurons approximating $f$ within $\epsilon$ for the infinity norm.

Therefore, the approximation of Sobolev functions by neural networks suffers from the curse of dimensionality, since the number of neurons needed grows exponentially with the input dimension $d$. This curse is not due to poor performance of neural networks but rather to the choice of the Sobolev space. DeVore et al. (1989) proved that any learning algorithm with continuous parameters needs at least $\Theta(\epsilon^{-d/r})$ parameters to approximate the Sobolev space $W^{r,p}$. This shows that the class of Sobolev functions suffers inherently from the curse of dimensionality, and no continuous function approximator can overcome it. We detail this notion in Section 5.
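Before moving on, here is a small worked example (our own illustration, not taken from the paper) of a function satisfying the Korobov condition above:

```latex
% f vanishes on the boundary of \Omega = [0,1]^d, and every mixed
% derivative with |\alpha|_\infty \le 2 is bounded:
\[
  f(x) = \prod_{j=1}^{d} \sin(\pi x_j), \qquad
  D^{\alpha} f(x) = \prod_{j=1}^{d} \pi^{\alpha_j}\, \sin^{(\alpha_j)}(\pi x_j),
\]
\[
  \text{so}\quad \|D^{\alpha} f\|_\infty \le \pi^{|\alpha|_1} \le \pi^{2d}
  \quad\text{for all } |\alpha|_\infty \le 2,
  \qquad\text{hence } f \in X^{2,\infty}(\Omega).
\]
```

Note that $f$ oscillates in every direction, but only once per direction; rescaling $f$ to oscillate at frequency $2^n$ along each axis simultaneously would inflate the Korobov norm by a factor $2^{4n}$, which is the behaviour the space penalizes.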
The natural question is whether there exists a reasonable and sufficiently large class of functions for which there is no inherent curse of dimensionality. Instead of the Sobolev space, we aim to add more regularity, overcoming the curse of dimensionality while preserving a reasonably large space. The Korobov space $X^{2,\infty}(\Omega)$ (functions with bounded mixed derivatives) is a natural candidate: it is known in the numerical analysis community as a reasonably large space where numerical approximation methods can lessen the curse of dimensionality (Bungartz & Griebel, 2004). Korobov functions were introduced for solving partial differential equations (Korobov, 1959; Smolyak, 1963) and have since been used extensively for high-dimensional function approximation (Zenger & Hackbusch, 1991; Bungartz & Griebel, 2004). This space of functions is included in the Sobolev space but is still reasonably large, as the regularity condition concerns only second-order derivatives. Two questions are of interest. First, how many neurons and training parameters does a neural network need to approximate any Korobov function within $\epsilon$ in the $L^\infty$ norm? Second, how do neural networks perform compared to the optimal theoretical rates for Korobov spaces?

2.2 SPARSE GRIDS AND HIERARCHICAL BASIS

In this subsection, we introduce sparse grids, which are key to our neural network constructions. They were introduced by Zenger (1991) and have been used extensively for high-dimensional function approximation. We refer to Bungartz & Griebel (2004) for a thorough review of the topic. The goal is to define discrete approximation spaces with basis functions. Instead of a classical uniform grid partition of the hypercube $[0,1]^d$ involving $n^d$ components, where $n$ is the number of partitions along each coordinate, the sparse grid approach uses a smarter partitioning of the cube, preserving the approximation accuracy while drastically reducing the number of components of the grid. The construction involves a 1-dimensional mother function $\phi$, which is used to generate all the functions of the basis. For example, a simple choice for the building block $\phi$ is the standard hat function $\phi(x) := (1 - |x|)_+$. The hat function is not the only possible choice: in the later proofs, we will specify which mother function is used, in our case either the interpolets of Deslauriers & Dubuc (1989) (which we define rigorously in our proofs) or the hat function $\phi$, which can be seen as the Deslauriers-Dubuc interpolet of order 1. These more elaborate mother functions enjoy more smoothness while essentially preserving the approximation power. Assume the mother function has support in $[-k, k]$. For $j = 1, \ldots, d$, it can be used to generate a set of local functions $\phi_{l_j,i_j} : [0,1] \to \mathbb{R}$, for all $l_j \geq 1$ and $1 \leq i_j \leq 2^{l_j} - 1$, with support $\big[\frac{i_j - k}{2^{l_j}}, \frac{i_j + k}{2^{l_j}}\big]$, as follows:

$$\phi_{l_j,i_j}(x) := \phi(2^{l_j} x - i_j), \qquad x \in [0,1].$$

We then define a basis of $d$-dimensional functions by taking tensor products of these 1-dimensional functions. For all $l, i \in \mathbb{N}^d$ with $l \geq 1$ and $1 \leq i \leq 2^l - 1$, where $2^l$ denotes $(2^{l_1}, \ldots, 2^{l_d})$, define

$$\phi_{l,i}(x) := \prod_{j=1}^{d} \phi_{l_j,i_j}(x_j), \qquad x \in \mathbb{R}^d.$$

For a fixed $l \in \mathbb{N}^d$, we consider the hierarchical increment space $W_l$, the subspace spanned by the functions $\{\phi_{l,i} : 1 \leq i \leq 2^l - 1\}$, as illustrated in Figure 1:

$$W_l := \mathrm{span}\{\phi_{l,i} : 1 \leq i \leq 2^l - 1, \ i_j \text{ odd for all } 1 \leq j \leq d\}.$$

Note that within a hierarchical increment $W_l$, all basis functions have disjoint supports. Also, Korobov functions $X^{2,p}(\Omega)$ can be expressed uniquely in this hierarchical basis.
Precisely, there is a unique representation of $u \in X^{2,p}(\Omega)$ as $u(x) = \sum_{l,i} v_{l,i}\,\phi_{l,i}(x)$, where the sum is taken over all multi-indices $l \geq 1$ and $1 \leq i \leq 2^l - 1$ with all components of $i$ odd. In particular, all basis functions are linearly independent. Notice that this sum is infinite; the objective is now to define a finite-dimensional subspace of $X^{2,p}(\Omega)$ that will serve as an approximation space. Sparse grids use a carefully chosen subset of the hierarchical basis functions to construct the approximation space $V_n^{(1)} := \bigoplus_{|l|_1 \leq n+d-1} W_l$. When $\phi$ is the hat function, Bungartz & Griebel (2004) showed that this choice of approximation space leads to a good approximation error.

Theorem 2.2 (Bungartz & Griebel (2004)). Let $f \in X^{2,\infty}(\Omega)$ and let $f_n^{(1)}$ be the projection of $f$ onto the subspace $V_n^{(1)}$. We have $\|f - f_n^{(1)}\|_\infty = O(2^{-2n} n^{d-1})$. Furthermore, if $v_{l,i}$ denotes the coefficient of $\phi_{l,i}$ in the decomposition of $f_n^{(1)}$ in $V_n^{(1)}$, then we have the upper bound $|v_{l,i}| \leq 2^{-d}\,2^{-2|l|_1}\,|f|_{2,\infty}$, for all $l, i \in \mathbb{N}^d$ with $|l|_1 \leq n+d-1$ and $1 \leq i \leq 2^l - 1$, where $i$ has odd components.

3 THE REPRESENTATION POWER OF SHALLOW NEURAL NETWORKS

It has recently been shown that deep neural networks, with depth scaling with $\epsilon$, lessen the curse of dimensionality on the number of neurons needed to approximate the Korobov space (Montanelli & Du, 2019). However, to the best of our knowledge, the question of whether shallow neural networks with fixed universal depth (independent of $\epsilon$ and $d$) also escape the curse of dimensionality for the Korobov space remained open. We settle this question by proving that shallow neural networks also lessen the curse of dimensionality for the Korobov space.

Theorem 3.1. Let $\epsilon > 0$. For all $f \in X^{2,\infty}(\Omega)$, there exists a neural network with 2 layers, ReLU activation, $O(\epsilon^{-1}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}+1})$ neurons, and $O(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ training parameters that approximates $f$ within $\epsilon$ for the infinity norm.

In order to prove Theorem 3.1, we construct the approximating neural network explicitly. The first step is to construct a neural network architecture with two layers and $O(d^{3/2}\epsilon^{-1/2}\log\frac{1}{\epsilon})$ neurons that approximates the product function $p : x \in [0,1]^d \mapsto \prod_{i=1}^d x_i$ within $\epsilon$, for all $\epsilon > 0$.

Proposition 3.2. For all $\epsilon > 0$, there exists a neural network with depth 2, ReLU activation and $O(d^{3/2}\epsilon^{-1/2}\log\frac{1}{\epsilon})$ neurons that approximates the product function $p : x \in [0,1]^d \mapsto \prod_{i=1}^d x_i$ within $\epsilon$ for the infinity norm.

Sketch of proof. The proof builds on the observation that $p(x) = \exp(\sum_{i=1}^d \log x_i)$. We construct an approximating 2-layer neural network where the first layer approximates $\log x_i$ for $1 \leq i \leq d$, and the second layer approximates the exponential. We illustrate the construction in Figure 2. More precisely, fix $\epsilon > 0$. Consider the function $h : x \in [0,1] \mapsto \max(\log x, \log\epsilon)$. We approximate $h$ within $\frac{\epsilon}{d}$ by a piecewise affine function with $O(d^{1/2}\epsilon^{-1/2}\log\frac{1}{\epsilon})$ pieces, then represent this piecewise affine function by a single-layer neural network $\hat{h}_\epsilon$ with as many neurons as pieces (Lemma B.1, Appendix B.1). This 1-layer network then satisfies $\|h - \hat{h}_\epsilon\|_\infty \leq \frac{\epsilon}{d}$. The first layer of our final network is the union of $d$ copies of $\hat{h}_\epsilon$: one for each dimension $i$, approximating $\log x_i$. Similarly, consider the exponential $g : x \in \mathbb{R}_- \mapsto e^x$. We construct a 1-layer neural network $\hat{g}_\epsilon$ with $O(\epsilon^{-1/2}\log\frac{1}{\epsilon})$ neurons such that $\|g - \hat{g}_\epsilon\|_\infty \leq \epsilon$; this serves as the second layer. Formally, the constructed network $\hat{p}_\epsilon$ is $\hat{p}_\epsilon = \hat{g}_\epsilon\big(\sum_{i=1}^d \hat{h}_\epsilon(x_i)\big)$.
This 2-layer neural network has $O(d^{3/2}\epsilon^{-1/2}\log\frac{1}{\epsilon})$ neurons and satisfies $\|\hat{p}_\epsilon - p\|_\infty \leq \epsilon$.

We use this result to prove Theorem 3.1 and show that we can approximate any Korobov function $f \in X^{2,\infty}(\Omega)$ within $\epsilon$ by a 2-layer neural network with $O(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ training parameters. Consider the sparse grid construction of the approximation space $V_n^{(1)}$, using the standard hat function as mother function to create the hierarchical basis $W_l$ (introduced in Section 2.2). The key idea is to construct a shallow neural network approximating the sparse grid approximation and then use Theorem 2.2 to derive the approximation error. Let $f_n^{(1)}$ be the projection of $f$ onto the subspace $V_n^{(1)}$ defined in Section 2.2; $f_n^{(1)}$ can be written as $f_n^{(1)}(x) = \sum_{(l,i)\in U_n^{(1)}} v_{l,i}\,\phi_{l,i}(x)$, where $U_n^{(1)}$ contains the indices $(l,i)$ of the basis functions present in $V_n^{(1)}$. We use Theorem 2.2 and choose $n$ carefully so that $f_n^{(1)}$ approximates $f$ within $\epsilon$ for the $L^\infty$ norm. The goal is now to approximate $f_n^{(1)}$ with a shallow neural network. Note that the basis functions can be written as products of univariate functions, $\phi_{l,i} = \prod_{j=1}^d \phi_{l_j,i_j}$. We can therefore use a structure similar to the product approximation of Proposition 3.2 to approximate the basis functions. Specifically, the first layer approximates the $d(2^n - 1) = O(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{d-1}{2}})$ terms $\log\phi_{l_j,i_j}$ necessary to construct the basis functions of $V_n^{(1)}$, and a second layer approximates the exponential, in order to obtain approximations of the $O(2^n n^{d-1}) = O(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ basis functions of $V_n^{(1)}$. We provide a detailed figure illustrating the construction, Figure 5 in Appendix B.3.

The shallow network constructed in Theorem 3.1 uses the ReLU activation function. We extend this result to a larger class of activation functions, which includes commonly used ones.

Definition 3.3. A sigmoid-like activation function $\sigma : \mathbb{R} \to \mathbb{R}$ is a non-decreasing function with finite limits in $\pm\infty$. A ReLU-like activation function $\sigma : \mathbb{R} \to \mathbb{R}$ is a function with a horizontal asymptote in $-\infty$, i.e., $\sigma$ is bounded on $\mathbb{R}_-$, and an affine (non-horizontal) asymptote in $+\infty$, i.e., there exists $b > 0$ such that $\sigma(x) - bx$ is bounded on $\mathbb{R}_+$.

Most common activation functions fall into these classes. Examples of sigmoid-like activations include the Heaviside, logistic, tanh, arctan and softsign activations, while ReLU-like activations include the ReLU, ISRLU, ELU and softplus activations. We extend Theorem 3.1 to all these activations.

Theorem 3.4. For any approximation tolerance $\epsilon > 0$ and any $f \in X^{2,\infty}(\Omega)$, there exists a neural network with depth 2 and $O(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ training parameters that approximates $f$ within $\epsilon$ for the infinity norm, with $O(\epsilon^{-1}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}+1})$ (resp. $O(\epsilon^{-3/2}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$) neurons for a ReLU-like (resp. sigmoid-like) activation.

We note that these results can be further extended to more general Korobov spaces $X^{r,p}$. Indeed, the main dependence of our neural network architectures on the parameters $r$ and $p$ arises from the sparse grid approximation. Bungartz & Griebel (2004) show that results similar to Theorem 2.2 extend to various values of $r$, $p$ and different error norms, with a similar sparse grid construction. For instance, we can use these results, combined with our proposed architecture, to show that the Korobov space $X^{r,\infty}$ can be approximated in infinity norm by neural networks with $O(\epsilon^{-1/r}(\log\frac{1}{\epsilon})^{\frac{r+1}{r}(d-1)})$ training parameters and the same number of neurons up to a polynomial factor in $\epsilon$.
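The exp-of-sum-of-logs construction behind Proposition 3.2 is easy to check numerically. Below is a minimal sketch (our own illustration, not the authors' code) that builds the two piecewise-linear stages $\hat{h}_\epsilon$ and $\hat{g}_\epsilon$ on simple grids (the paper uses carefully chosen subdivisions to reach the $O(\epsilon^{-1/2})$ neuron counts, so uniform knots are a simplification) and composes them as $\hat{g}_\epsilon(\sum_i \hat{h}_\epsilon(x_i))$:

```python
import numpy as np

def pwl(knots, values):
    """A piecewise-linear function, standing in for the 1-layer ReLU
    networks of Lemma B.1 (one neuron per linear piece)."""
    return lambda x: np.interp(x, knots, values)

def product_net(eps, d, pieces=200):
    # First stage: h(x) = max(log x, log eps) on [0, 1] (clipped log).
    xs = np.linspace(0.0, 1.0, pieces)
    h_hat = pwl(xs, np.log(np.maximum(xs, eps)))
    # Second stage: g(y) = exp(y) on [d * log(eps), 0].
    ys = np.linspace(d * np.log(eps), 0.0, pieces)
    g_hat = pwl(ys, np.exp(ys))
    return lambda x: g_hat(np.sum(h_hat(np.asarray(x))))

# Quick sanity check against the true product on random points.
rng = np.random.default_rng(0)
net = product_net(eps=1e-3, d=4)
pts = rng.random((1000, 4))
err = max(abs(net(p) - np.prod(p)) for p in pts)
print(f"max abs error on 1000 random points: {err:.4f}")
```

The clipping at $\log\epsilon$ mirrors the case analysis in the proof: whenever some $x_i \leq \epsilon$, both the network output and the true product lie in $[0, \epsilon]$, so the error stays controlled there as well.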
4 THE REPRESENTATION POWER OF DEEP NEURAL NETWORKS

Montanelli & Du (2019) used the sparse grid approach to construct deep neural networks with ReLU activation, approximating Korobov functions with $O(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}+1})$ neurons and depth $O(\log\frac{1}{\epsilon})$ for the $L^\infty$ norm. We improve this bound for deep neural networks with $C^2$ non-linear activation functions. We prove that only $O(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ neurons and a fixed depth, independent of $\epsilon$, are needed to approximate the unit ball of the Korobov space within $\epsilon$ in the $L^\infty$ norm.

Theorem 4.1. Let $\sigma \in C^2$ be a non-linear activation function, and let $\epsilon > 0$. For any function $f \in X^{2,\infty}(\Omega)$, there exists a neural network of depth $\lceil\log_2 d\rceil + 1$, with ReLU activation on the first layer and activation function $\sigma$ on the next layers, $O(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ neurons, and $O(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ training parameters, approximating $f$ within $\epsilon$ for the infinity norm.

Compared to the bound for shallow networks in Theorem 3.1, the number of neurons for deep networks is lower by a factor $O(\sqrt{\epsilon})$, while the number of training parameters is the same. Hence, deep neural networks are more efficient than shallow neural networks in the sense that shallow networks need more "inactive" neurons to reach the same approximation power, but have the same number of parameters. This gap in the number of "inactive" neurons can be consequential in practice, as we may not know exactly which neurons to train and which to fix. This new bound on the number of parameters and neurons matches the approximation power of sparse grids. In fact, sparse grids use $\Theta(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ parameters (weights of basis functions) to approximate Korobov functions within $\epsilon$. Our construction in Theorem 4.1 shows that deep neural networks with depth fixed in $\epsilon$ can fully encode sparse grid approximators. Neural networks are therefore more powerful function approximators. In particular, any sparse grid approximation using $O(N(\epsilon))$ parameters can be represented exactly by a neural network using $O(N(\epsilon))$ neurons.

The deep approximating network (see Figure 3) has a structure very similar to our construction of the approximating shallow network in Theorem 3.1. The main difference lies in the approximation of the product function: instead of a 2-layer neural network, we now use a deep network. The following result shows that deep neural networks can represent the product function with a structure independent of the precision.

Proposition 4.2 (Lin et al. (2017), Appendix A). Let $\sigma$ be a $C^2$ non-linear activation function. For any approximation error $\epsilon > 0$, there exists a neural network with $\lceil\log_2 d\rceil$ hidden layers and activation $\sigma$, using at most $8d$ neurons arranged in a binary tree network, that approximates the product function $\prod_{i=1}^d x_i$ on $[0,1]^d$ within $\epsilon$ for the infinity norm.

An important remark is that the structure of the constructed neural network is independent of $\epsilon$. In particular, the depth and the number of neurons are independent of the approximation precision $\epsilon$, which we refer to as exact approximation. It is known that an exponential number of neurons is needed to exactly approximate the product function with a 1-layer neural network (Lin et al., 2017); however, the question of whether one could approximate the product with a shallow network and a polynomial number of neurons remained open. In Proposition 3.2, we answer this question positively by constructing an $\epsilon$-approximating neural network of depth 2 with ReLU activation and $O(d^{3/2}\epsilon^{-1/2}\log\frac{1}{\epsilon})$ neurons.
Using the same ideas as in Theorem 3.4, we can generalize this result to obtain an $\epsilon$-approximating neural network of depth 2 with $O(d^{3/2}\epsilon^{-1/2}\log\frac{1}{\epsilon})$ neurons for a ReLU-like activation, or $O(d^2\epsilon^{-1}\log\frac{1}{\epsilon})$ neurons for a sigmoid-like activation.

5 NEURAL NETWORKS ARE NEAR-OPTIMAL FUNCTION APPROXIMATORS

In the previous sections, we proved upper bounds on the number of neurons and training parameters needed by deep and shallow neural networks to approximate the Korobov space $X^{2,\infty}(\Omega)$. We now investigate how good neural networks are as function approximators. We prove a lower bound on the number of parameters needed by any continuous function approximator to approximate the Korobov space. In particular, neural networks, deep and shallow, nearly match this lower bound, making them near-optimal function approximators.

Let us first formalize the notion of continuous function approximators, following the framework of DeVore et al. (1989). For any Banach space $X$ (e.g., a function space) and a subset $K \subset X$ of elements to approximate, we define a continuous function approximator with $N$ parameters as a continuous parametrization $a : K \to \mathbb{R}^N$ together with a reconstruction scheme, an $N$-dimensional manifold $M_N : \mathbb{R}^N \to X$. For any element $f \in K$, the approximation given is $M_N(a(f))$: the parametrization $a$ is derived continuously from the function $f$ and then given as input to the reconstruction manifold, which outputs an approximating function in $X$. The error of this function approximator is defined as $E_{N,a,M_N}(K)_X := \sup_{f\in K} |f - M_N(a(f))|_X$. The best function approximator for the space $K$ minimizes this error, and the minimal error for the space $K$ is given by $E_N(K)_X = \min_{a,M_N} E_{N,a,M_N}(K)_X$. In other words, a continuous function approximator with $N$ parameters cannot hope to approximate $K$ better than within $E_N(K)_X$. A class of function approximators is a set of function approximators with a given structure. For example, neural networks with continuous parametrizations form a class of function approximators in which the number of parameters is the number of training parameters. We say that a class of function approximators is optimal for the space of functions $K$ if it matches this minimal error asymptotically in $N$, within a constant multiplicative factor. In other words, the number of parameters the class needs to approximate functions in $K$ within $\epsilon$ matches asymptotically, within a constant, the least number of parameters $N$ needed to satisfy $E_N(K)_X \leq \epsilon$. The norm considered in the approximation of the functions of $K$ is the norm associated with the space $X$. DeVore et al. (1989) showed that the minimal error $E_N(K)_X$ is lower bounded by the Bernstein width of the subset $K \subset X$, defined as $b_N(K)_X := \sup_{X_{N+1}} \sup\{\rho : \rho\, U(X_{N+1}) \subset K\}$, where the outer supremum is taken over all $(N+1)$-dimensional linear subspaces of $X$, and $U(Y)$ denotes the unit ball of $Y$ for any linear subspace $Y$ of $X$.

Theorem 5.1 (DeVore et al. (1989)). Let $X$ be a Banach space and $K \subset X$. Then $E_N(K)_X \geq b_N(K)_X$.

We prove a lower bound on the least number of parameters any class of continuous function approximators needs to approximate functions of the Korobov space.

Theorem 5.2. Take $X = L^\infty(\Omega)$ and $K = \{f \in X^{2,\infty}(\Omega) : |f|_{X^{2,\infty}(\Omega)} \leq 1\}$, the unit ball of the Korobov space. Then there exists $c > 0$ such that $E_N(K)_X \geq c\,N^{-2}(\log N)^{d-1}$. Equivalently, for $\epsilon > 0$, a continuous function approximator approximating $K$ within $\epsilon$ in $L^\infty$ norm uses at least $\Theta(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{d-1}{2}})$ parameters.
Sketch of proof. We seek an appropriate subspace $X_{N+1}$ in order to lower bound the Bernstein width $b_N(K)_X$, which in turn provides a lower bound on the approximation error (Theorem 5.1). To do so, we use the Deslauriers-Dubuc interpolet of degree 2, $\phi^{(2)}$ (see Figure 6), which is $C^2$. Using the sparse grid approach, we construct a hierarchical basis in $X^{2,\infty}(\Omega)$ with $\phi^{(2)}$ as mother function and define $X_{N+1}$ as the approximation space $V_n^{(1)}$. Here $n$ is chosen such that the dimension of $V_n^{(1)}$ is roughly $N+1$. The goal is to estimate $\sup\{\rho : \rho\, U(X_{N+1}) \subset K\}$, which leads to a bound on $b_N(K)_X$. To do so, we upper bound the Korobov norm by the $L^\infty$ norm for elements of $X_{N+1}$. Any function $u \in X_{N+1}$ can be written $u = \sum_{l,i} v_{l,i}\,\phi_{l,i}$. Using a stencil representation of the coefficients $v_{l,i}$, we obtain an upper bound $|u|_{X^{2,\infty}} \leq \Gamma_d \|u\|_\infty$, where $\Gamma_d = O(2^{2n} n^{d-1})$. Then $b_N(K)_X \geq 1/\Gamma_d$, which yields the desired bound.

This lower bound matches, within a logarithmic factor, the upper bound on the number of training parameters needed by deep and shallow neural networks to approximate the Korobov space within $\epsilon$: $O(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ (Theorems 3.1 and 4.1). It exhibits the same exponential dependence on $d$ with base $\log\frac{1}{\epsilon}$ and the same main dependence $\epsilon^{-1/2}$ on $\epsilon$. Note that both the upper and the lower bound can be rewritten as $O(\epsilon^{-1/2-\delta})$ for all $\delta > 0$. Moreover, our constructions in Theorems 3.1 and 4.1 are continuous, which follows directly from the continuity of the sparse grid parameters (see the bound on $v_{l,i}$ in Theorem 2.2). Our bounds therefore prove that deep and shallow neural networks are near-optimal classes of function approximators for the Korobov space. Interestingly, the subspace $X_{N+1}$ our proof uses to show the lower bound is essentially the same as the subspace we use to approximate Korobov functions in our proofs of the upper bounds (Theorems 3.1 and 4.1). The difference is the choice of the interpolet $\phi$ used to construct the basis functions: degree 2 for the former (which provides the regularity needed for the proof), and degree 1 for the latter.

6 CONCLUSION AND DISCUSSION

We proved new upper and lower bounds on the number of neurons and training parameters needed by shallow and deep neural networks to approximate Korobov functions. Our work shows that shallow and deep networks not only lessen the curse of dimensionality but are also near-optimal. Our work suggests several extensions. First, it would be very interesting to see whether our proposed theoretically near-optimal architectures also have powerful empirical performance. While commonly used structures (e.g., convolutional or recurrent neural networks) are motivated by properties of the data such as symmetries, our structures are motivated by theoretical insights on how to optimally approximate a large class of functions with a given number of neurons and parameters. Second, our upper bounds (Theorems 3.1 and 4.1) nearly match our lower bound (Theorem 5.2) on the least number of training parameters needed to approximate the Korobov space. We wonder whether it is possible to close the gap between these bounds and hence prove neural networks' optimality; e.g., one could prove that sparse grids are optimal function approximators by improving our lower bound to match the sparse grid number of parameters $O(\epsilon^{-1/2}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$. Finally, we showed the near-optimality of neural networks among the set of continuous function approximators.
It would be interesting to explore lower bounds (analog to Theorem 5.2) when considering larger sets of function approximators, e.g., discontinuous function approximators. Could some discontinuous neural network construction break the curse of dimensionality for the Sobolev space? The question is then whether neural networks are still near-optimal in these larger sets of function approximators. ACKNOWLEDGMENTS The authors are grateful to Tomaso Poggio and the MIT 6.520 course teaching staff for several discussions, remarks and comments that were useful to this work. APPENDIX A ON KOROBOV FUNCTIONS In this section, we further discuss Korobov functions X2,p(Ω). Korobov functions enjoy more smoothness than Sobolev functions: smoothness for X2,p(Ω) is measured in terms of mixed derivatives of order two. Korobov functions X2,p(Ω) can be differentiated twice in each coordinates simultaneously, while Sobolev W 2,p(Ω) functions can only be differentiated twice in total. For example, in two dimensions, for a function f to be Korobov it is required to have ∂f ∂x1 , ∂f ∂x2 , ∂2f ∂2x1 , ∂2f ∂2x2 , ∂2f ∂x1∂x2 , ∂3f ∂2x1∂x2 , ∂3f ∂x1∂2x2 , ∂4f ∂2x1∂2x2 ∈ Lp(Ω), while for f to be Sobolev it requires only ∂f ∂x1 , ∂f ∂x2 , ∂2f ∂2x1 , ∂2f ∂2x2 , ∂2f ∂x1∂x2 ∈ Lp(Ω). The former can be seen from |α|∞ ≤ 2 and the latter from |α|1 ≤ 2 in the definition of Xr,p(Ω) and W r,p(Ω). We now provide intuition on why Korobov functions are easier to approximate. One of the key difficulties in approximating Sobolev functions are possible high frequency oscillations which may require an exponential number of neurons (Telgarsky, 2016). For instance, consider functions which have similar structure to W(n...n) (defined in Subsection 2.2): for any smooth basis function φ with support on the unit cube (see Figure 6 for example), consider the linear function space formed by linear combinations of dilated function φ with support on each cube d-dimensional grid of step 2−n. This corresponds exactly to the construction of W(n...n) which uses the product of hat function on each dimension as basis function φ. This function space can have strong oscillations in all directions at a time. The Korobov space prohibits such behavior by ensuring that functions can be differentiated twice on each dimension, simultaneously. As a result, functions cannot oscillate in all directions at a time without having large Korobov norm. We end this paragraph by comparing the Korobov space to the space of bandlimited functions which was shown to avoid the curse of dimensionality (Montanelli et al., 2019). These are functions for which the support frequency components is restricted to a fixed compact. Intuitively, approximating these functions can be achieved because the set of frequencies is truncated to a compact, which then allows to sample frequencies and obtain approximation guarantees. Instead of imposing a hard constraint of cutting high frequencies, the Korobov space asks for smoothness conditions which do not prohibit high frequencies but rather impose a budget for high frequency oscillations. We precise this idea in the next example. A concrete example of Korobov functions is given by an analogous of the function space V (1)n which we used as approximation space in the proof of our results (see Section 2.2). Similarly to the previous paragraph, one should use a smooth basis function to ensure differentiability. Recall that V (1) n is defined as V (1)n := ⊕ |l|1≤n+d−1 Wl. 
Intuitively, this approximation space introduces a ”budget” of oscillations for all dimensions through the constraint ∑d i=1 li ≤ n + d − 1. As a result, dilations of the basis function can only occur in a restricted set of directions at a time, which ensures that the Korobov norm stays bounded. B PROOFS OF SECTION 3 B.1 APPROXIMATING THE PRODUCT FUNCTION In this subsection, we construct a neural network architecture with two layers and O(d 32 − 12 log 1 ) neurons that approximates the product function p : x ∈ [0, 1]d 7−→ ∏n i=1 xi within for all > 0, which proves Proposition 3.2. We first prove a simple lemma to represent univariate piece-wise affine functions by shallow neural networks. Lemma B.1. Any one dimensional continuous piece-wise affine function with m pieces is representable exactly by a shallow neural network with ReLU activation, with m neurons on a single layer. Proof. This is a simple consequence from Proposition 1 in Yarotsky (2017). We recall the proof for completeness. Let x1 ≤ · · · ≤ xm−1 be the subdivision of the piece-wise affine function f . We use a neural network of the form g(x) := f(x1) + m−1∑ k=1 wk(x− xk)+ − w0(x1 − x)+, where w0 is the slope of f on the piece ≤ x1, w1 is the slope of f on the piece [x1, x2], wk = f(xk+1)− f(x1)− ∑k−1 i=1 wi(xk+1 − xi) xk+1 − xk , for k = 1, · · · ,m−2, and wm−1 = w̃− ∑m−2 k=1 wk where w̃ is the slope of f on the piece≥ xm−1. Notice that f and g coincide on all xk for 1 ≤ k ≤ m − 1. Furthermore, g has same slope as f on each pieces, therefore, g = f . We can approximate univariate right continuous functions by piece-wise affine functions, and then use Lemma B.1 to represent them by shallow neural networks. The following lemma shows that O( −1) neurons are sufficient to represent an increasing right-continuous function with a shallow neural network. Lemma B.2. Let f : I −→ [c, d] be a right-continuous increasing function where I is an interval, and let > 0. There exists a shallow neural network with ReLU activation, with ⌈ d−c ⌉ neurons on a single layer, that approximates f within for the infinity norm. Proof. Let m = ⌊ d−c ⌋ Define a subdivision of the image interval c ≤ y1 ≤ . . . ≤ ym ≤ d where yk = c+k for k = 1, . . . ,m. Note that this subdivision contains exactly ⌈ d−c ⌉ pieces. Now define a subdivision of I , x1 ≤ x2 ≤ . . . ≤ xm by xk := sup{x ∈ I, f(x) ≤ yk}, for k = 1, . . . ,m. This subdivision stills has ⌈ d−c ⌉ pieces. We now construct our approximation function f̂ on I as the continuous piece-wise affine function on the subdivision x1 ≤ . . . ≤ xm such that f̂(xk) = yk for all 1 ≤ k ≤ m and f̂ is constant before x1 and after xm (see Figure 4). Let x ∈ I . • If x ≤ x1, because f is increasing and right-continuous, c ≤ f(x) ≤ f(x1) ≤ y1 = c+ . Therefore |f(x)− f̂(x)| = |f(x)− (c+ )| ≤ . • If xk < x ≤ xk+1, we have yk < f(x) ≤ f(xk+1) ≤ yk+1. Further note that yk ≤ f̂(x) ≤ yk+1. Therefore |f(x)− f̂(x)| ≤ yk+1 − yk = . • If xm < x, then ym < f(x) ≤ d. Again, |f(x)− f̂(x)| = |f(x)− ym| ≤ d− ym ≤ . Therefore ‖f − f̂‖∞ ≤ . We can now use Lemma B.1 to end the proof. If the function to approximate has some regularity, the number of neurons needed for approximation can be significantly reduced. In the following lemma, we show that O( − 12 ) neurons are sufficient to approximate a C2 univariate function with a shallow neural network. Lemma B.3. Let f : [a, b] −→ [c, d] ∈ C2, and let > 0. 
There exists a shallow neural network with ReLU activation, with 1√ 2 min( ∫ √ |f ′′|(1 + µ(f, )), (b − a) √ ‖f ′′‖∞) neurons on a single layer, where µ(f, )→ 1 as → 0, that approximates f within for the infinity norm. Proof. See Appendix B.2.1. We will now use the ideas of Lemma B.2 and Lemma B.3 to approximate a truncated log function, which we will use in the construction of our neural network approximating the product. Corollary B.4. Let > 0 sufficiently small and δ > 0. Consider the truncated logarithm function log : [δ, 1] −→ R. There exists a shallow neural network with ReLU activation, with − 12 log 1δ neurons on a single layer, that approximates f within for the infinity norm. Proof. See Appendix B.2.2. We are now ready to construct a neural network approximating the product function and prove Proposition 3.2. The proof builds upon on the observation that ∏d i=1 xi = exp( ∑d i=1 log xi). We construct an approximating 2-layer neural network where the first layer computes log xi for 1 ≤ i ≤ d, and the second layer computes the exponential. We illustrate the construction of the proof in Figure 2. Proof of Proposition 3.2 Fix > 0. Consider the function h : x ∈ [0, 1] 7→ max(log x, log ) ∈ [log , 0]. Using Corollary B.4, there exists a neural network ĥ : [0, 1] −→ [log , 0] with 1 + dd 12 − 12 log 1 e neurons on a single layer such that ‖h − ĥ ‖∞ ≤ d . Indeed, one can take the -approximation of h : x ∈ [ , 1] 7→ log x ∈ [log , 0], then extend this function to [0, ] with a constant equal to log . The resulting piece-wise affine function has one additional segment corresponding to one additional neuron in the approximating function. Similarly, consider the exponential g : x ∈ R− 7→ ex ∈ [0, 1]. Because g is right-continuous increasing, we can use Lemma B.3 to construct a neural network ĝ : R− −→ [0, 1] with 1 + ⌈ 1√ 2 log 1 ⌉ neurons on a single layer such that ‖g − ĝ ‖∞ ≤ . Indeed, again one can take the -approximation of g : x ∈ [log , 0] 7→ ex ∈ [0, 1], then extend this function to (−∞, log ] with a constant equal to . The corresponding neural network has an additional neuron. We construct our final neural network φ̂ (see Figure 2) as φ̂ = ĝ ( d∑ i=1 ĥ (xi) ) . Note that φ̂ can be represented as a 2-layer neural network: the first layer is composed of the union of the 1+dd 12 − 12 log 1 e neurons composing each of the 1-layer neural networks ĥ i : x ∈ [0, 1]d 7→ ĥ (xi) ∈ R for each dimension i ∈ {1, . . . , d}. The second layer is composed of the 1+ ⌈ 1√ 2 log 1 ⌉ neurons of ĝ . Hence, the constructed neural network φ̂ has O(d 3 2 − 1 2 log 1 ) neurons. Let us now analyze the approximation error. Let x ∈ [0, 1]d. For the sake of brevity, denote ŷ = ∑d i=1 ĥ (xi) and y = ∑d i=1 log(xi). We have, |φ̂ (x)− p(x)| ≤ |φ̂ (x)− exp(ŷ)|+ | exp(ŷ)− exp(y)| ≤ + d∏ i=1 xi · | exp(ŷ − y)− 1|, where we used the fact that |φ̂ (x)− exp(ŷ)| = |ĝ (ŷ)− g(ŷ)| ≤ ‖ĝ − g‖∞ ≤ . First suppose that x ≥ . In this case, for all i ∈ {1, . . . , d} we have |ĥ (xi)− log(xi)| = |ĥ (xi)− h (xi)| ≤ d . Then, |ŷ−y| ≤ . Consequently, |φ̂ (x)−p(x)| ≤ +max(|e −1|, |e− −1|) ≤ 3 , for > 0 sufficiently small. Without loss of generality now suppose x1 ≤ . Then ŷ ≤ h (x1) ≤ log , so by definition of ĝ , we have 0 ≤ φ̂ (x) = ĝ (ŷ) ≤ exp(log ) = . Also, 0 ≤ p(x) ≤ so finally |φ̂ (x)− p(x)| ≤ . Remark B.5. Note that using Lemma B.2 instead of Lemma B.3 to construct approximating shallow networks for log and exp would yield approximation functions ĥ withO( ⌈ d log 1 ⌉ ) neurons and ĝ with O( ⌈ 1 ⌉ ) neurons. 
Therefore, the corresponding neural network would approximate the product p with O(d2 −1 log 1 ) neurons. B.2 MISSING PROOFS OF SECTION B.1 B.2.1 PROOF OF LEMMA B.3 Proof. Similarly as the proof of Lemma B.2, the goal is to approximate f by a piece-wise affine function f̂ defined on a subdivision x0 = a ≤ x1 ≤ . . . ≤ xm ≤ xm+1 = b such that f and f̂ coincide on x0, . . . , xm+1. We first analyse the error induced by a linear approximation of the function on each piece. Let x ∈ [u, v] for u, v ∈ I . Using the mean value theorem, there exists αx ∈ [u, x] such that f(x) − f(u) = f ′(αx)(x − u) and βx ∈ [x, v] such that f(v) − f(x) = f ′(βx)(v − x). Combining these two equalities, we get, f(x)− f(u)− (x− u)f(v)− f(u) v − u = (v − x)(f(x)− f(u))− (x− u)(f(v)− f(x)) v − u = (x− u)(x− v)f ′(βx)− f ′(αx) v − u = (x− u)(v − x) ∫ βx αx f ′′(t)dt v − u Hence, f(x) = f(u) + (x− u)f(v)− f(u) v − u + (x− u)(v − x) ∫ βx αx f ′′(t)dt v − u . (1) We now apply this result to bound the approximation error on each pieces of the subdivision. Let k ∈ [m]. Recall f̂ is linear on the subdivision [xk, xk+1] and f̂(xk) = f(xk) and f̂(xk+1) = f(xk+1). Hence, for all x ∈ [xk, xk+1], f̂(x) = f(xk) + (x− xk) f(xk+1)−f(xk)xk+1−xk . Using Equation equation 1 with u = xk and v = xk+1, we get, ‖f − f̂‖∞,[xk,xk+1] ≤ sup x∈[xk,xk+1] ∣∣∣∣∣(x− xk)(xk+1 − x) ∫ βx αx f ′′(t)dt xk+1 − xk ∣∣∣∣∣ ≤ 1 2 (xk+1 − xk) ∫ xk+1 xk |f ′′(t)|dt ≤ 1 2 (xk+1 − xk)2‖f ′′‖∞,[xk,xk+1]. Therefore, using a regular subdivision with step √ 2 ‖f ′′‖∞ yields an -approximation of f with⌈ (b−a) √ ‖f ′′‖∞√ 2 ⌉ pieces. We now show that for any µ > 0, there exists an -approximation of f with at most ∫ √ |f ′′|√ 2 (1 + µ) pieces. To do so, we use the fact that the upper Riemann sum for √ f ′′ converges to the integral since √ f ′′ is continuous on [a, b]. First define a partition a = X0 ≤ XK = b of [a, b] such that the upper Riemann sum R( √ f ′′) on this subdivision satisfies R( √ f ′′) ≤ (1 + µ/2) ∫ b a √ f ′′. Now define on each interval Ik of the partition a regular subdivision with step √ 2 ‖f ′′‖Ik as before. Finally, consider the subdivision union of all these subdivisions, and construct the approximation f̂ on this final subdivision. By construction, ‖f − f̂‖∞ ≤ because the inequality holds on each piece of the subdivision. Further, the number of pieces is K−1∑ i=0 1 + (Xi+1 −Xi) sup[Xi,Xi+1] √ f ′′ √ 2 = R( √ f ′′)√ 2 +K ≤ ∫ √ |f ′′|√ 2 (1 + µ), for > 0 small enough. Using Lemma B.1 we can complete the proof. and O( −1 ( log 1 ) 3(d−1) 2 +1) neurons on the second layer. B.2.2 PROOF OF COROLLARY B.4 Proof. In view of Lemma B.3, the goal is to show that we can remove the dependence of µ(f, ) in δ. This essentially comes from the fact that the upper Riemann sum behaves well for approximating log. Consider the subdivision x0 := δ ≤ x1 ≤ . . . ≤ xm ≤ xm+1 := 1 with m = ⌊ 1 ̃ log 1 δ ⌋ where ̃ := log(1 + √ 2 ), such that xk = elog δ+k̃, for k = 0, . . . ,m − 1. Denote f̂ the corresponding piece-wise affine approximation. Similarly to the proof of Lemma B.3, for k = 0, . . . ,m− 1, ‖ log−f̂‖∞,[xk,xk+1] ≤ 1 2 (xk+1 − xk)2‖f ′′‖∞,[xk,xk+1] ≤ (ẽ − 1)2 2 ≤ . The proof follows. B.3 PROOF OF THEOREM 3.1: APPROXIMATING THE KOROBOV SPACE X2,∞(Ω) In this subsection, we prove Theorem 3.1 and show that we can approximate any Korobov function f ∈ X2,∞(Ω) within with a 2-layer neural network ofO( − 12 (log 1 ) 3(d−1) 2 ) neurons. We illustrate the construction in Figure 5. 
Our proof combines the constructed network approximating the product function with a decomposition of f as a sum of separable functions, i.e. a decomposition of the form

f(x) ≈ Σ_{k=1}^K ∏_{j=1}^d φ_j^{(k)}(x_j), for all x ∈ [0, 1]^d.

Consider the sparse grid construction of the approximating space V_n^{(1)}, using the standard hat function as mother function to create the hierarchical basis W_l (introduced in Section 2.2). We recall that the approximation space is defined as V_n^{(1)} := ⊕_{|l|₁ ≤ n+d−1} W_l. We will construct a neural network approximating the sparse grid approximation and then use the result of Theorem 2.2 to derive the approximation error. Figure 5 gives an illustration of the construction. Let f_n^{(1)} be the projection of f on the subspace V_n^{(1)}. f_n^{(1)} can be written as

f_n^{(1)}(x) = Σ_{(l,i)∈U_n^{(1)}} v_{l,i} φ_{l,i}(x),

where U_n^{(1)} contains the indices (l, i) of the basis functions present in V_n^{(1)}, i.e.

U_n^{(1)} := {(l, i) : |l|₁ ≤ n + d − 1, 1 ≤ i ≤ 2^l − 1, i_j odd for all 1 ≤ j ≤ d}.   (2)

Throughout the proof, we explicitly construct a neural network that uses this decomposition to approximate f_n^{(1)}. We then use Theorem 2.2 and choose n carefully such that f_n^{(1)} approximates f within ε for the L∞ norm. Note that the basis functions can be written as products of univariate functions, φ_{l,i} = ∏_{j=1}^d φ_{l_j,i_j}. We can therefore use the product approximation of Proposition 3.2 to approximate the basis functions. Specifically, we will use one layer to approximate the terms log φ_{l_j,i_j} and a second layer to approximate the exponential. We now present in detail the construction of the first layer. First, recall that φ_{l_j,i_j} is a piece-wise affine function with subdivision 0 ≤ (i_j − 1)/2^{l_j} ≤ i_j/2^{l_j} ≤ (i_j + 1)/2^{l_j} ≤ 1. Define the error term ε̃ := ε/(2|f|_{2,∞}). We consider a symmetric subdivision of the interval [(i_j − 1 + ε̃)/2^{l_j}, (i_j + 1 − ε̃)/2^{l_j}], defined as x₀ = (i_j − 1 + ε̃)/2^{l_j} ≤ x₁ ≤ … ≤ x_{m+1} = i_j/2^{l_j} ≤ x_{m+2} ≤ … ≤ x_{2m+2} = (i_j + 1 − ε̃)/2^{l_j}, where m = ⌊(1/ε₀) log(1/ε̃)⌋ and ε₀ := log(1 + √(2ε̃)/d), such that

x_k = (i_j − 1 + e^{log ε̃ + k ε₀}) / 2^{l_j}, for 0 ≤ k ≤ m,
x_k = (i_j + 1 − e^{log ε̃ + (2m+2−k) ε₀}) / 2^{l_j}, for m + 2 ≤ k ≤ 2m + 2.

Note that with this definition, the terms log(2^{l_j} x_k − (i_j − 1)) form a regular sequence with step ε₀. We now construct the piece-wise affine function ĝ_{l_j,i_j} on the subdivision x₀ ≤ … ≤ x_{2m+2} which coincides with log φ_{l_j,i_j} on x₀, …, x_{2m+2} and is constant on [0, x₀] and [x_{2m+2}, 1]. By Lemma B.1, this function can be represented by a 1-layer neural network with as many neurons as the number of pieces of ĝ_{l_j,i_j}, i.e. at most 2√(3d/ε̃) log(1/ε̃) neurons for ε sufficiently small. A proof similar to that of Corollary B.4 shows that ĝ_{l_j,i_j} approximates max(log φ_{l_j,i_j}, log(ε̃/3)) within ε̃/(3d) for the infinity norm. We use this construction to compute in parallel ε̃/(3d)-approximations of max(log φ_{l_j,i_j}(x_j), log(ε̃/3)) for all 1 ≤ j ≤ d and all 1 ≤ l_j ≤ n, 1 ≤ i_j ≤ 2^{l_j} with i_j odd. These are exactly the 1-dimensional functions that we need in order to compute the d-dimensional function basis of the approximation space V_n^{(1)}. There are d(2^n − 1) such univariate functions, therefore our first layer contains at most 2^{n+1} d √(3d/ε̃) log(1/ε̃) neurons.
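For concreteness, here is a small sketch (assuming Python) enumerating the index set U_n^{(1)} of equation 2 and evaluating the tensor hat basis; it also checks the cardinality formula of Bungartz & Griebel (2004) used in the sequel. The values of n, d and the evaluation point are illustrative.

```python
import itertools
from math import comb
import numpy as np

# Sketch of the index set U_n^(1) from equation 2 and the tensor hat basis
# phi_{l,i}(x) = prod_j max(0, 1 - |2^{l_j} x_j - i_j|).
def sparse_grid_indices(n, d):
    """All (l, i): |l|_1 <= n + d - 1, l_j >= 1, i_j odd in [1, 2^{l_j} - 1]."""
    out = []
    for l in itertools.product(range(1, n + 1), repeat=d):
        if sum(l) <= n + d - 1:
            odd = [range(1, 2 ** lj, 2) for lj in l]
            out.extend((l, i) for i in itertools.product(*odd))
    return out

def phi(l, i, x):
    return np.prod([max(0.0, 1.0 - abs(2.0 ** lj * xj - ij))
                    for lj, ij, xj in zip(l, i, x)])

n, d = 4, 2
U = sparse_grid_indices(n, d)
# cardinality matches sum_{i=0}^{n-1} 2^i * C(d-1+i, d-1)
assert len(U) == sum(2 ** i * comb(d - 1 + i, d - 1) for i in range(n))
print(len(U), "basis functions;", phi(*U[0], (0.3, 0.7)))
```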
We now turn to the second layer. The output of the first two layers will be ε̃/3-approximations of φ_{l,i} for all (l, i) ∈ U_n^{(1)}; recall that U_n^{(1)} contains the indices of the functions forming a basis of the approximation space V_n^{(1)}. To this end, for each index (l, i) ∈ U_n^{(1)} we construct a 1-layer neural network approximating the function exp, which will compute an approximation of exp(ĝ_{l₁,i₁} + … + ĝ_{l_d,i_d}). The approximation of exp is constructed in the same way as in Lemma B.3. Consider a regular subdivision of the interval [log(ε̃/3), 0] with step √(2ε̃/3), i.e. x₀ := log(ε̃/3) ≤ x₁ ≤ … ≤ x_m ≤ x_{m+1} = 0, where m = ⌊√(3/(2ε̃)) log(3/ε̃)⌋, such that x_k = log(ε̃/3) + k√(2ε̃/3) for 0 ≤ k ≤ m. Construct the piece-wise affine function ĥ on the subdivision x₀ ≤ … ≤ x_{m+1} which coincides with exp on x₀, …, x_{m+1} and is constant on (−∞, x₀]. Lemma B.3 shows that ĥ approximates exp on R₋ within ε̃/3 for the infinity norm. Again, Lemma B.1 gives a representation of ĥ as a 1-layer neural network with as many neurons as pieces of ĥ, i.e. 1 + ⌈√(3/(2ε̃)) log(3/ε̃)⌉. The second layer is the union of such 1-layer neural networks approximating exp within ε̃/3, one for each index (l, i) ∈ U_n^{(1)}. Therefore, the second layer contains |U_n^{(1)}| (1 + ⌈√(3/(2ε̃)) log(3/ε̃)⌉) neurons. As shown in Bungartz & Griebel (2004),

|U_n^{(1)}| = Σ_{i=0}^{n−1} 2^i C(d−1+i, d−1) = (−1)^d + 2^n Σ_{i=0}^{d−1} C(n+d−1, i) (−2)^{d−1−i} = 2^n ( n^{d−1}/(d−1)! + O(n^{d−2}) ).

Therefore, the second layer has O(2^n (n^{d−1}/(d−1)!) ε̃^{−1/2} log(1/ε̃)) neurons. Finally, the output layer computes the weighted sum of the basis functions to approximate f_n^{(1)}. Denote by f̂_n^{(1)} the function computed by the constructed neural network (see Figure 5), i.e.

f̂_n^{(1)} = Σ_{(l,i)∈U_n^{(1)}} v_{l,i} · ĥ( Σ_{j=1}^d ĝ_{l_j,i_j}(x_j) ).

Let us analyze the approximation error of our neural network. The proof of Proposition 3.2 shows that the output ĥ(Σ_{j=1}^d ĝ_{l_j,i_j}(·_j)) of the first two layers approximates φ_{l,i} within ε̃. Therefore, we obtain

‖f_n^{(1)} − f̂_n^{(1)}‖∞ ≤ ε̃ Σ_{(l,i)∈U_n^{(1)}} |v_{l,i}|.

We now use the approximation bounds of Theorem 2.2 on f_n^{(1)}:

‖f − f̂_n^{(1)}‖∞ ≤ ‖f − f_n^{(1)}‖∞ + ‖f_n^{(1)} − f̂_n^{(1)}‖∞ ≤ 2 (|f|_{2,∞}/8^d) 2^{−2n} A(d, n) + (ε/(2|f|_{2,∞})) Σ_{(l,i)∈U_n^{(1)}} |v_{l,i}|,

where Σ_{(l,i)∈U_n^{(1)}} |v_{l,i}| ≤ |f|_{2,∞} 2^{−d} Σ_{i≥0} 2^{−i} C(d−1+i, d−1) ≤ |f|_{2,∞}. Let us now take n = min{n : 2 (|f|_{2,∞}/8^d) 2^{−2n} A(d, n) ≤ ε/2}. Then, the above inequality shows that the neural network f̂_n^{(1)} approximates f within ε for the infinity norm. We now estimate the number of neurons in each layer of this network. Note that n ~ (1/(2 log 2)) log(1/ε) and

2^n ≤ (4 · 8^{d/2} / ((2 log 2)^{(d−1)/2} ((d−1)!)^{1/2})) · ε^{−1/2} √|f|_{2,∞} · (log(1/ε))^{(d−1)/2} · (1 + o(1)).   (3)

We can use the above estimates to show that the constructed neural network has at most N₁ (resp. N₂) neurons on the first (resp. second) layer, where

N₁ ~_{ε→0} (8√6 d² 8^{d/2} / ((2 log 2)^{(d−1)/2} (d!)^{1/2})) · |f|_{2,∞} ε^{−1} (log(1/ε))^{(d+1)/2},
N₂ ~_{ε→0} (4√3 d^{3/2} 8^{d/2} / ((2 log 2)^{3(d−1)/2} (d!)^{3/2})) · |f|_{2,∞} ε^{−1} (log(1/ε))^{3(d−1)/2+1}.

This proves the bound on the number of neurons. Finally, to prove the bound on the number of training parameters of the network, notice that the only parameters that depend on the function f are the weights v_{l,i} of the sparse grid decomposition. Their number is |U_n^{(1)}| = O(2^n n^{d−1}) = O(ε^{−1/2} (log(1/ε))^{3(d−1)/2}).

B.4 PROOF OF THEOREM 3.4: GENERALIZATION TO GENERAL ACTIVATION FUNCTIONS

We start by formalizing the intuition that a sigmoid-like (resp. ReLU-like) function is a function that resembles the Heaviside (resp. ReLU) function when zooming out along the x (resp. x and y) axis.

Lemma B.6. Let σ be a sigmoid-like activation with limit a (resp. b) in −∞ (resp. +∞). For any δ > 0 and error tolerance ε > 0, there exists a scaling M > 0 such that x ↦ (σ(Mx) − a)/(b − a) approximates the Heaviside function within ε outside of (−δ, δ) for the infinity norm. Furthermore, this function has values in [0, 1]. Let σ be a ReLU-like activation with asymptote b·x + c in +∞. For any δ > 0 and error tolerance ε > 0, there exists a scaling M > 0 such that x ↦ σ(Mx)/(Mb) approximates the ReLU function within ε for the infinity norm.
Proof. Let δ, ε > 0, and let σ be a sigmoid-like activation with limit a in −∞ and b in +∞. There exists x₀ > 0 sufficiently large such that |σ(x) − a| ≤ ε(b − a) for x ≤ −x₀ and |σ(x) − b| ≤ ε(b − a) for x ≥ x₀. It now suffices to take M := x₀/δ to obtain the desired result. Now let σ be a ReLU-like activation with oblique asymptote bx in +∞, where b > 0. Let M be such that |σ(x)| ≤ εMb for x ≤ 0 and |σ(x) − bx| ≤ εMb for x ≥ 0. One can check that |σ(Mx)/(Mb)| ≤ ε for x ≤ 0, and |σ(Mx)/(Mb) − x| ≤ ε for x ≥ 0.

Using this approximation, we reduce the analysis of sigmoid-like (resp. ReLU-like) activations to the case of a Heaviside (resp. ReLU) activation in order to prove the desired theorem.

Proof of Theorem 3.4. We start with the class of ReLU-like activations. Let σ be a ReLU-like activation function. Lemma B.6 shows that one can approximate the ReLU activation arbitrarily well by composing σ with linear maps. Take the neural network approximator f̂ of a target function f given by Theorem 3.1. At each node, we can add the linear maps corresponding to x ↦ σ(Mx)/(Mb) with no additional neuron nor parameter. Because the approximation is continuous, we can take M > 0 arbitrarily large in order to approximate f̂ with arbitrary precision on the compact set [0, 1]^d. The same argument holds for sigmoid-like activation functions in order to reduce the problem to Heaviside activation functions. Although quadratic approximations of univariate functions similar to Lemma B.3 are not valid for general sigmoid-like activations (in particular the Heaviside), we can obtain an analog of Lemma B.2, given as Lemma B.7 in Appendix B.4.1. This results in an increased number of neurons. In order to approximate a target function f ∈ X^{2,∞}(Ω), we use the same structure as the neural network constructed for ReLU activations, and we use the same notations as in the proof of Theorem 3.1. The first difference lies in the approximation of log φ_{l_j,i_j} in the first layer. Instead of using Corollary B.4, we use Lemma B.7. Therefore, 12(d/ε̃) log(3/ε̃) neurons are needed to compute an ε̃/(3d)-approximation of max(log φ_{l_j,i_j}, log(ε̃/3)). The second difference is in the approximation of the exponential in the second layer. Again, we use Lemma B.7 to construct an ε̃/3-approximation of the exponential on R₋ with 6/ε̃ neurons for the second layer. As a result, the first layer contains at most 2^{n+2} · 3d² ε̃^{−1} log(1/ε̃) neurons for ε sufficiently small, and the second layer contains |U_n^{(1)}| · 6/ε̃ neurons. Using the same estimates as in the proof of Theorem 3.1 shows that the constructed neural network has at most N₁ (resp. N₂) neurons on the first (resp. second) layer, where

N₁ ~_{ε→0} (3 · 2⁵ d^{5/2} 8^{d/2} / ((2 log 2)^{(d−1)/2} (d!)^{1/2})) · |f|_{2,∞}^{3/2} ε^{−3/2} (log(1/ε))^{(d+1)/2},
N₂ ~_{ε→0} (24 d^{3/2} 8^{d/2} / ((2 log 2)^{3(d−1)/2} (d!)^{3/2})) · |f|_{2,∞}^{3/2} ε^{−3/2} (log(1/ε))^{3(d−1)/2}.

This ends the proof.
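A quick numerical sketch of the rescaling argument of Lemma B.6, with softplus as the ReLU-like activation and the logistic sigmoid as the sigmoid-like one; both choices and all constants are illustrative.

```python
import numpy as np

# Sketch of Lemma B.6's rescaling: softplus (ReLU-like, b = 1) zoomed out by
# M approximates ReLU; a logistic sigmoid (a = 0, b = 1) zoomed out
# approximates the Heaviside function outside (-delta, delta).
softplus = lambda x: np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)
relu = lambda x: np.maximum(x, 0.0)

x = np.linspace(-5.0, 5.0, 10001)
for M in (1.0, 10.0, 100.0):
    err = np.max(np.abs(softplus(M * x) / M - relu(x)))
    print(f"M={M:6.1f}  sup |softplus(Mx)/M - relu(x)| = {err:.3e}")

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
delta, M = 0.1, 200.0
mask = np.abs(x) >= delta
err = np.max(np.abs(sigmoid(M * x[mask]) - (x[mask] >= 0)))
print(f"Heaviside error outside (-{delta},{delta}): {err:.3e}")
```

The ReLU-like error decays like 1/M (it equals log(2)/M at x = 0), in line with the lemma.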
B.4.1 PROOF OF LEMMA B.7

Lemma B.7. Let σ be a sigmoid-like activation. Let f : I → [c, d] be a right-continuous increasing function, where I is an interval, and let ε > 0. There exists a shallow neural network with activation σ, with at most 2(d − c)/ε neurons on a single layer, that approximates f within ε for the infinity norm.

Proof. The proof is analogous to that of Lemma B.2. Let m = ⌊(d − c)/ε⌋. We define a regular subdivision of the image interval, c ≤ y₁ ≤ … ≤ y_m ≤ d, where y_k = c + kε for k = 1, …, m. Then, using the monotonicity of f, we define a subdivision of I, x₁ ≤ … ≤ x_m, where x_k := sup{x ∈ I : f(x) ≤ y_k}. Let us first construct an approximating neural network f̂ with the Heaviside activation. Consider

f̂(x) := y₁ + ε Σ_{i=1}^{m−1} 1( x − (x_i + x_{i+1})/2 ≥ 0 ).

Let x ∈ I and let k be such that x ∈ [x_k, x_{k+1}]. By monotonicity, we have y_k ≤ f(x) ≤ y_{k+1}, and y_k = y₁ + (k − 1)ε ≤ f̂(x) ≤ y₁ + kε = y_{k+1}. Hence, f̂ approximates f within ε in infinity norm. Let δ < min_{i=1,…,m−1}(x_{i+1} − x_i)/4 and let σ be a general sigmoid-like activation with limits a in −∞ and b in +∞. Take M given by Lemma B.6 such that (σ(Mx) − a)/(b − a) approximates the Heaviside function within 1/m outside of (−δ, δ) and has values in [0, 1]. Using the same arguments as above, the function

f̂(x) := y₁ + ε Σ_{i=1}^{m−1} (σ(Mx − M(x_i + x_{i+1})/2) − a)/(b − a)

approximates f within 2ε for the infinity norm. The proof follows.

C PROOFS OF SECTION 4

C.1 PROOF OF THEOREM 4.1: APPROXIMATING KOROBOV FUNCTIONS WITH DEEP NEURAL NETWORKS

Let ε > 0. We construct a structure similar to the network of Theorem 3.1, using the sparse grid approximation of Subsection 2.2. For a given n, let f_n^{(1)} be the projection of f on the approximation space V_n^{(1)} (defined in Subsection 2.2), and U_n^{(1)} (defined in equation 2) the set of indices (l, i) of the basis functions present in V_n^{(1)}. Recall that f_n^{(1)} can be uniquely decomposed as

f_n^{(1)}(x) = Σ_{(l,i)∈U_n^{(1)}} v_{l,i} φ_{l,i}(x),

where φ_{l,i} = ∏_{j=1}^d φ_{l_j,i_j} are the basis functions defined in Subsection 2.2. In the first layer, we compute exactly the piece-wise linear hat functions φ_{l_j,i_j}; in the next set of layers, we use the product-approximating neural network given by Proposition 4.2 to compute the basis functions φ_{l,i} = ∏_{j=1}^d φ_{l_j,i_j} (see Figure 3). The output layer computes the weighted sum Σ_{(l,i)∈U_n^{(1)}} v_{l,i} φ_{l,i}(x) and outputs f_n^{(1)}. Because the approximation has arbitrary precision, we can choose the networks of Proposition 4.2 such that the resulting network f̂ verifies ‖f̂ − f_n^{(1)}‖∞ ≤ ε/2. More precisely, as φ_{l_j,i_j} is piece-wise linear with four pieces, we can compute it exactly with four ReLU neurons on a single layer (Lemma B.1). Our first layer is composed of the union of all these ReLU neurons, for the d(2^n − 1) indices (l_j, i_j) such that 1 ≤ j ≤ d, 1 ≤ l_j ≤ n, 1 ≤ i_j ≤ 2^{l_j} and i_j is odd. Therefore, it contains at most d 2^{n+2} neurons with ReLU activation. The second set of layers is composed of the union of the product-approximating neural networks computing φ_{l,i} for all (l, i) ∈ U_n^{(1)}. This set of layers contains ⌈log₂ d⌉ layers with activation σ and at most |U_n^{(1)}| · 8d neurons. The output of these two sets of layers is an approximation of the basis functions φ_{l,i} with arbitrary precision. Consequently, the final output of the complete neural network is an approximation of f_n^{(1)} with arbitrary precision. Similarly to the proof of Theorem 3.1, we can choose the smallest n such that ‖f − f_n^{(1)}‖∞ ≤ ε/2 (see equation 3 for details). Finally, the network has depth at most ⌈log₂ d⌉ + 2 and N neurons, where

N = 8d |U_n^{(1)}| ~_{ε→0} (2⁵ d^{5/2} 8^{d/2} / ((2 log 2)^{3(d−1)/2} (d!)^{3/2})) · ε^{−1/2} √|f|_{2,∞} (log(1/ε))^{3(d−1)/2}.

The parameters of the network that depend on the function are exactly the coefficients v_{l,i} of the sparse grid approximation. Hence, the network has O(ε^{−1/2} (log(1/ε))^{3(d−1)/2}) training parameters.
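The exact first-layer computation of the hat functions can be illustrated directly; a minimal sketch assuming NumPy follows.

```python
import numpy as np

# Sketch of the first layer of Theorem 4.1: on [0,1], the hat basis function
# phi_{l,i}(x) = max(0, 1 - |2^l x - i|) is piece-wise linear, so by
# Lemma B.1 a handful of ReLU units compute it exactly (three shifted ReLUs
# suffice here; the lemma budgets one neuron per piece).
relu = lambda t: np.maximum(t, 0.0)

def hat_via_relu(x, l, i):
    s = 2.0 ** l
    return relu(s * x - (i - 1)) - 2.0 * relu(s * x - i) + relu(s * x - (i + 1))

x = np.linspace(0.0, 1.0, 1001)
l, i = 3, 5
exact = np.maximum(0.0, 1.0 - np.abs(2.0 ** l * x - i))
print("max error:", np.max(np.abs(hat_via_relu(x, l, i) - exact)))  # 0.0
```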
D PROOFS OF SECTION 5

D.1 PROOF OF THEOREM 5.2: NEAR-OPTIMALITY OF NEURAL NETWORKS FOR KOROBOV FUNCTIONS

Our goal is to define an appropriate subspace X_{N+1} in order to get a good lower bound on the Bernstein width b_N(K)_X, defined in equation 5, which in turn provides a lower bound on the approximation error (Theorem 5.1). To do so, we introduce the Deslaurier-Dubuc interpolet φ
1. What is the focus of the paper regarding neural network approximations? 2. What are the strengths and weaknesses of the proposed approach in terms of theoretical analysis? 3. Are there any concerns or suggestions regarding the extension of the approach to other areas? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. What are the related works that the reviewer suggests the authors should discuss?
Summary Of The Paper Review
Summary Of The Paper The paper proves upper and lower bounds on neural networks for approximating Korobov functions X^{2,∞}. The upper and lower bounds match for approximation in the L^∞ norm sense, and the rate is free of the curse of data dimensionality. The scope of network architectures discussed is extensive, including shallow (2-layer) networks and deep networks, with ReLU(-like) activation functions and sigmoidal activation functions. Review The paper is nicely written, and the theoretical results are well explained. I believe the results are correct and sound, albeit I did not check every detail of the proofs. I feel it would be good to discuss whether it is possible to consider higher-order mixed derivatives in Korobov spaces, for example X^{3,∞}. On the other hand, extending to bounded L^p derivatives with p < ∞ could be interesting. Frankly speaking, I think the extension relies on whether there are function approximation guarantees similar to Theorem 2.2 in the paper. Compared to Sobolev spaces, we see the approximation of Korobov functions is free of the curse of data dimensionality. My impression is that there should be some low-dimensional structure in the Korobov space allowing this circumvention. The authors explain why Korobov functions are easier to approximate in Appendix A, but I think some high-level remarks should be placed in Section 2. The authors may want to discuss more related works on efficient approximation theories of neural networks. There are works considering either structures in function spaces, e.g., Besov functions with dominated mixed smoothness, or structures in the data space, e.g., manifold data. These works all demonstrate that the size of neural networks in function approximation does not suffer from the curse of data dimensionality. ========================================================================= Thanks for the authors' detailed response. The generalization to L^p norm guarantees is interesting. I am glad to maintain a positive opinion on this paper.
ICLR
Title Shallow and Deep Networks are Near-Optimal Approximators of Korobov Functions Abstract In this paper, we analyze the number of neurons and training parameters that a neural network needs to approximate multivariate functions of bounded second mixed derivatives, known as Korobov functions. We prove upper bounds on these quantities for shallow and deep neural networks, drastically lessening the curse of dimensionality. Our bounds hold for general activation functions, including ReLU. We further prove that these bounds nearly match the minimal number of parameters any continuous function approximator needs to approximate Korobov functions, showing that neural networks are near-optimal function approximators.

1 INTRODUCTION

Neural networks have known tremendous success in many applications such as computer vision and pattern detection (Krizhevsky et al., 2017; Silver et al., 2016). A natural question is how to explain their practical success theoretically. Neural networks are known to be universal (Hornik et al., 1989): any Borel-measurable function can be approximated arbitrarily well by a neural network with a sufficient number of neurons. Furthermore, universality holds for networks with as few as one hidden layer and reasonable activation functions. However, these results do not specify the needed number of neurons and parameters to train. If these numbers are unreasonably high, the universality of neural networks would not explain their practical success. We are interested in evaluating the number of neurons and training parameters needed to approximate a given function within ε with a neural network. An interesting question is how these numbers scale with ε and the dimensionality of the problem, i.e., the number of variables. Mhaskar (1996) showed that any function of the Sobolev space of order r and dimension d can be approximated within ε with a 1-layer neural network with O(ε^{−d/r}) neurons and an infinitely differentiable activation function. This bound exhibits the curse of dimensionality: the number of neurons needed for an ε-approximation scales exponentially in the dimension of the problem d. Thus, Mhaskar's bound raises the question of whether this curse is inherent to neural networks. Towards answering this question, DeVore et al. (1989) proved that any continuous function approximator (see Section 5) that approximates all Sobolev functions of order r and dimension d within ε needs at least Θ(ε^{−d/r}) parameters. This result meets Mhaskar's bound and confirms that neural networks cannot escape the curse of dimensionality for the Sobolev space. A main question is then for which set of functions neural networks can break this curse of dimensionality. One way to circumvent the curse of dimensionality is to restrict considerably the considered space of functions and focus on specific structures adapted to neural networks. For example, Mhaskar et al. (2016) showed that compositional functions with regularity r can be approximated within ε by deep neural networks with O(d · ε^{−2/r}) neurons. Other structural constraints have been considered for compositions of functions (Kohler & Krzyżak, 2016), piecewise smooth functions (Petersen & Voigtlaender, 2018; Imaizumi & Fukumizu, 2019), or structures on the data space, e.g., data lying on a manifold (Mhaskar, 2010; Nakada & Imaizumi, 2019; Schmidt-Hieber, 2019).
Approximation bounds have also been obtained for function approximation from data under smoothness constraints (Kohler & Krzyżak, 2005; Kohler & Mehnert, 2011), and specifically on mixed smooth Besov spaces, which are known to circumvent the curse of dimensionality (Suzuki, 2018). Another example is the class of Sobolev functions of order d/α and dimension d, for which Mhaskar's bound becomes O(ε^{−α}). Recently, Montanelli et al. (2019) considered bandlimited functions and showed that they can be approximated within ε by deep networks with depth O((log(1/ε))²) and O(ε^{−2}(log(1/ε))²) neurons. Weinan et al. (2019) showed that the closure of the space of 2-layer neural networks with specific regularity (namely, a restriction on the size of the network's weights) is the Barron space. They further show that Barron functions can be approximated within ε with 2-layer networks with O(ε^{−2}) neurons. A similar line of work restricts the function space with spectral conditions, to write functions as limits of shallow networks (Barron, 1994; Klusowski & Barron, 2016; 2018). In this work, we are interested in more general and generic spaces of functions. Our space of interest is the space of multivariate functions of bounded second mixed derivatives, the Korobov space. This space is included in the Sobolev space but is reasonably large and general. The Korobov space presents two motivations. First, it is a natural candidate for a large and general space included in the Sobolev space where numerical approximation methods can overcome the curse of dimensionality to some extent (see Section 2.1). Second, Korobov spaces are practically useful for solving partial differential equations (Korobov, 1959) and have been used for high-dimensional function approximation (Zenger & Hackbusch, 1991; Zenger, 1991). Recently, Montanelli & Du (2019) showed that deep neural networks with depth O(log(1/ε)) and O(ε^{−1/2}(log(1/ε))^{3(d−1)/2+1}) neurons can approximate Korobov functions within ε, lessening the curse of dimensionality for deep neural networks asymptotically in ε. While they used deep structures to prove their result, the question of whether shallow neural networks also break the curse of dimensionality for the Korobov space remained open. In this paper, we study deep and shallow neural networks' approximation power for the Korobov space and make the following contributions: 1. Representation power of shallow neural networks. We prove that any Korobov function can be approximated within ε by a 2-layer neural network with ReLU activation, with O(ε^{−1}(log(1/ε))^{3(d−1)/2+1}) neurons and O(ε^{−1/2}(log(1/ε))^{3(d−1)/2}) training parameters (Theorem 3.1). We further extend this result to a large class of commonly used activation functions (Theorem 3.4). Asymptotically in ε, our bound can be written as O(ε^{−1−δ}) for all δ > 0, and in that sense breaks the curse of dimensionality for shallow neural networks. 2. Representation power of deep neural networks. We show that any function of the Korobov space can be approximated within ε by a deep neural network of depth ⌈log₂(d)⌉ + 1, independent of ε, with a non-linear C² activation function, O(ε^{−1/2}(log(1/ε))^{3(d−1)/2}) neurons and O(ε^{−1/2}(log(1/ε))^{3(d−1)/2}) training parameters (Theorem 4.1). This result improves that of Montanelli & Du (2019), who constructed an approximating neural network with larger depth O(log(1/ε)), which grows as ε → 0, and a larger number of neurons O(ε^{−1/2}(log(1/ε))^{3(d−1)/2+1}). However, they used the ReLU activation function. 3.
Near-optimality of neural networks as function approximators. Under the continuous function approximator model introduced by DeVore et al. (1989), we prove that any continuous function approximator needs Θ(ε^{−1/2}(log(1/ε))^{(d−1)/2}) parameters to approximate Korobov functions within ε (Theorem 5.2). This lower bound nearly matches our established upper bounds on the number of training parameters needed by deep and shallow neural networks to approximate functions of the Korobov space, proving that they are near-optimal function approximators of the Korobov space. Table 1 summarizes our new bounds and existing bounds on shallow and deep neural network approximation power for the Korobov space, the Sobolev space, and bandlimited functions. Our proofs are constructive and give explicit structures to construct such neural networks with ReLU and general activation functions. Our constructions rely on the sparse grid approximation introduced by Zenger (1991) and studied in detail in Bungartz (1992); Bungartz & Griebel (2004). Specifically, we use the sparse grid approach to approximate smooth functions with sums of products, then construct neural networks which approximate this structure. A key difficulty is to approximate the product function. In particular, in the case of shallow neural networks, we propose, to the best of our knowledge, the first architecture approximating the product function with a polynomial number of neurons. To derive our lower bound on the number of parameters needed to approximate the Korobov space, we construct a linear subspace of the Korobov space with large Bernstein width. This subspace is then used to apply a general lower bound on nonlinear approximation derived by DeVore et al. (1989). The rest of the paper is structured as follows. In Section 2, we formalize our objective and introduce the sparse grids approach. In Section 3 (resp. 4), we prove our bounds on the number of neurons and training parameters for Korobov function approximation with shallow (resp. deep) networks. Finally, we formalize in Section 5 the notion of optimal continuous function approximators and prove our novel near-optimality result.

2 PRELIMINARIES

In this work, we consider feed-forward neural networks, using a linear output neuron and a nonlinear activation function σ : R → R for the other neurons, such as the popular rectified linear unit (ReLU) σ(x) = max(x, 0), the sigmoid σ(x) = (1 + e^{−x})^{−1}, or the Heaviside function σ(x) = 1_{x≥0}. Let d ≥ 1 be the dimension of the input. We define a 1-hidden-layer network with N neurons as x ↦ Σ_{k=1}^N u_k σ(w_k^⊤ x + b_k), where u_k ∈ R, w_k ∈ R^d and b_k ∈ R, for k = 1, …, N, are parameters. A neural network with several hidden layers is obtained by feeding the outputs of a given layer as inputs to the next layer. We study the expressive power of neural networks, i.e., the ability to approximate a target function f : R^d → R with as few neurons as possible, on the unit hyper-cube Ω := [0, 1]^d. Another relevant metric is the number of parameters that need to be trained to approximate the function, i.e., the number of parameters of the approximating network (u_k, w_k and b_k) that depend on the function to approximate. We will adopt the L∞ norm as a measure of approximation error. We now define some notations necessary to introduce our function spaces of interest. For an integer r, we denote by C^r the space of one-dimensional functions that are r times differentiable with continuous derivatives. In our analysis, we consider functions f with bounded mixed derivatives.
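As a concrete reference for the notation just introduced, here is a minimal sketch (assuming NumPy) evaluating a 1-hidden-layer network x ↦ Σ_k u_k σ(w_k^⊤ x + b_k); all shapes and values are illustrative.

```python
import numpy as np

# Sketch of the 1-hidden-layer network defined above, with ReLU activation.
def one_hidden_layer(x, W, b, u, sigma=lambda t: np.maximum(t, 0.0)):
    """W: (N, d) weights, b: (N,) biases, u: (N,) output weights."""
    return u @ sigma(W @ x + b)

rng = np.random.default_rng(0)
d, N = 3, 5
x = rng.uniform(size=d)
print(one_hidden_layer(x, rng.normal(size=(N, d)), rng.normal(size=N),
                       rng.normal(size=N)))
```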
For a multi-index α ∈ N^d, the derivative of order α is D^α f := ∂^{|α|₁} f / (∂x₁^{α₁} … ∂x_d^{α_d}), where |α|₁ = Σ_{i=1}^d |α_i|. Two common function spaces on a compact Ω ⊂ R^d are the Sobolev spaces W^{r,p}(Ω), of functions having weak partial derivatives up to order r in L^p(Ω), and the Korobov spaces X^{r,p}(Ω), of functions vanishing at the boundary and having weak mixed derivatives of order up to r in each coordinate in L^p(Ω):

W^{r,p}(Ω) = {f ∈ L^p(Ω) : D^α f ∈ L^p(Ω), |α|₁ ≤ r},
X^{r,p}(Ω) = {f ∈ L^p(Ω) : f|_{∂Ω} = 0, D^α f ∈ L^p(Ω), |α|∞ ≤ r},

where ∂Ω denotes the boundary of Ω, and |α|₁ = Σ_{i=1}^d |α_i| and |α|∞ = sup_{i=1,…,d} |α_i| are respectively the L¹ and infinity norms. Note that the Korobov spaces X^{r,p}(Ω) are subsets of the Sobolev spaces W^{r,p}(Ω). For p = ∞, the usual norms on these spaces are given by

|f|_{W^{r,∞}(Ω)} := max_{|α|₁ ≤ r} ‖D^α f‖∞,   |f|_{X^{r,∞}(Ω)} := max_{|α|∞ ≤ r} ‖D^α f‖∞.

For simplicity, we will write |·|_{2,∞} for |·|_{X^{2,∞}}. We focus our analysis on approximating functions of the Korobov space X^{2,∞}(Ω), for which the curse of dimensionality is drastically lessened and for which we show that neural networks are near-optimal. Intuitively, a key difference compared to the Sobolev space is that Korobov functions do not have high-frequency oscillations in all directions at a time. Such functions may require an exponential number of neurons (Telgarsky, 2016) and are one of the main difficulties for Sobolev space approximation, which therefore exhibits the curse of dimensionality (DeVore et al., 1989). On the contrary, the Korobov space prohibits such behaviour by ensuring that functions can be differentiated on all dimensions together. Further discussion and concrete examples are given in Appendix A.

2.1 THE CURSE OF DIMENSIONALITY

We adopt the point of view of asymptotic results in ε (or, equivalently, in the number of neurons), which is a well-established setting in the neural network representation power literature (Mhaskar, 1996; Bungartz & Griebel, 2004; Yarotsky, 2017; Montanelli & Du, 2019) and in the numerical analysis literature (Novak, 2006). In the rest of the paper, the O notation hides constants in d. For each result, full dependencies in d are provided in the appendix. Previous efforts to quantify the number of neurons needed to approximate large general classes of functions showed that neural networks and most classical functional approximation schemes exhibit the curse of dimensionality. For example, for Sobolev functions, Mhaskar proved the following approximation bound.

Theorem 2.1 (Mhaskar (1996)). Let p, r ≥ 1, and let σ : R → R be an infinitely differentiable activation function, non-polynomial on any interval of R. Let ε > 0 be sufficiently small. For any f ∈ W^{r,p}, there exists a shallow neural network with one hidden layer, activation function σ, and O(ε^{−d/r}) neurons approximating f within ε for the infinity norm.

Therefore, the approximation of Sobolev functions by neural networks suffers from the curse of dimensionality, since the number of neurons needed grows exponentially with the input space dimension d. This curse is not due to poor performance of neural networks but rather to the choice of the Sobolev space. DeVore et al. (1989) proved that any learning algorithm with continuous parameters needs at least Θ(ε^{−d/r}) parameters to approximate the Sobolev space W^{r,p}. This shows that the class of Sobolev functions suffers inherently from the curse of dimensionality, and that no continuous function approximator can overcome it. We detail this notion later in Section 5.
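A small sketch contrasting the two derivative-index conditions above; the dimension and order are illustrative.

```python
from itertools import product

# Sketch contrasting the derivative constraints of Sobolev vs Korobov spaces:
# W^{r,p} bounds D^alpha f for |alpha|_1 <= r, X^{r,p} for |alpha|_inf <= r.
def indices(d, r, norm):
    key = sum if norm == "l1" else max
    return [a for a in product(range(r + 1), repeat=d) if key(a) <= r]

d, r = 2, 2
sob = indices(d, r, "l1")    # (0,0),(0,1),(1,0),(1,1),(2,0),(0,2): 6 indices
kor = indices(d, r, "linf")  # all (i,j) with i,j <= 2: 9 indices
print(len(sob), "Sobolev derivative indices vs", len(kor), "Korobov indices")
print("mixed derivative (2,2) required only for Korobov:", (2, 2) in kor)
```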
The natural question is whether there exists a reasonable and sufficiently large class of functions for which there is no inherent curse of dimensionality. Instead of the Sobolev space, we aim to add more regularity to overcome the curse of dimensionality while preserving a reasonably large space. The Korobov space X^{2,∞}(Ω), the space of functions with bounded mixed derivatives, is a natural candidate: it is known in the numerical analysis community as a reasonably large space where numerical approximation methods can lessen the curse of dimensionality (Bungartz & Griebel, 2004). Korobov functions were introduced for solving partial differential equations (Korobov, 1959; Smolyak, 1963), and have since been used extensively for high-dimensional function approximation (Zenger & Hackbusch, 1991; Bungartz & Griebel, 2004). This space of functions is included in the Sobolev space, but is still reasonably large, as the regularity condition concerns only second-order derivatives. Two questions are of interest. First, how many neurons and training parameters does a neural network need to approximate any Korobov function within ε in the L∞ norm? Second, how do neural networks perform compared to the optimal theoretical rates for Korobov spaces?

2.2 SPARSE GRIDS AND HIERARCHICAL BASIS

In this subsection, we introduce sparse grids, which will be key in our neural network constructions. These were introduced by Zenger (1991) and have been extensively used for high-dimensional function approximation. We refer to Bungartz & Griebel (2004) for a thorough review of the topic. The goal is to define discrete approximation spaces with basis functions. Instead of a classical uniform grid partition of the hyper-cube [0, 1]^d involving n^d components, where n is the number of partitions in each coordinate, the sparse grid approach uses a smarter partitioning of the cube, preserving the approximation accuracy while drastically reducing the number of components of the grid. The construction involves a 1-dimensional mother function φ, which is used to generate all the functions of the basis. For example, a simple choice for the building block φ is the standard hat function φ(x) := (1 − |x|)₊. The hat function is not the only possible choice. In the later proofs we will specify which mother function is used: in our case, either the interpolets of Deslauriers & Dubuc (1989) (which we define rigorously later in our proofs) or the hat function φ, which can be seen as the Deslaurier-Dubuc interpolet of order 1. The more elaborate mother functions enjoy more smoothness while essentially preserving the approximation power. Assume the mother function has support in [−k, k]. For j = 1, …, d, it can be used to generate a set of local functions φ_{l_j,i_j} : [0, 1] → R, for all l_j ≥ 1 and 1 ≤ i_j ≤ 2^{l_j} − 1, with support [(i_j − k)/2^{l_j}, (i_j + k)/2^{l_j}], as follows:

φ_{l_j,i_j}(x) := φ(2^{l_j} x − i_j), x ∈ [0, 1].

We then define a basis of d-dimensional functions by taking the tensor product of these 1-dimensional functions. For all l, i ∈ N^d with l ≥ 1 and 1 ≤ i ≤ 2^l − 1, where 2^l denotes (2^{l₁}, …, 2^{l_d}), define

φ_{l,i}(x) := ∏_{j=1}^d φ_{l_j,i_j}(x_j), x ∈ R^d.

For a fixed l ∈ N^d, we will consider the hierarchical increment space W_l, which is the subspace spanned by the functions {φ_{l,i} : 1 ≤ i ≤ 2^l − 1}, as illustrated in Figure 1:

W_l := span{φ_{l,i} : 1 ≤ i ≤ 2^l − 1, i_j odd for all 1 ≤ j ≤ d}.

Note that in the hierarchical increment W_l, all basis functions have disjoint supports. Also, Korobov functions X^{2,p}(Ω) can be expressed uniquely in this hierarchical basis.
Precisely, there is a unique representation of u ∈ X^{2,p}(Ω) as

u(x) = Σ_{l,i} v_{l,i} φ_{l,i}(x),

where the sum is taken over all multi-indices l ≥ 1 and 1 ≤ i ≤ 2^l − 1 such that all components of i are odd. In particular, all basis functions are linearly independent. Notice that this sum is infinite; the objective is now to define a finite-dimensional subspace of X^{2,p}(Ω) that will serve as an approximation space. Sparse grids use a carefully chosen subset of the hierarchical basis functions to construct the approximation space V_n^{(1)} := ⊕_{|l|₁ ≤ n+d−1} W_l. When φ is the hat function, Bungartz & Griebel (2004) showed that this choice of approximation space leads to a good approximation error.

Theorem 2.2 (Bungartz & Griebel (2004)). Let f ∈ X^{2,∞}(Ω) and let f_n^{(1)} be the projection of f on the subspace V_n^{(1)}. We have ‖f − f_n^{(1)}‖∞ = O(2^{−2n} n^{d−1}). Furthermore, if v_{l,i} denotes the coefficient of φ_{l,i} in the decomposition of f_n^{(1)} in V_n^{(1)}, then we have the upper bound |v_{l,i}| ≤ 2^{−d} 2^{−2|l|₁} |f|_{2,∞}, for all l, i ∈ N^d with |l|₁ ≤ n + d − 1 and 1 ≤ i ≤ 2^l − 1, where i has odd components.

3 THE REPRESENTATION POWER OF SHALLOW NEURAL NETWORKS

It has recently been shown that deep neural networks, with depth scaling with ε, lessen the curse of dimensionality on the number of neurons needed to approximate the Korobov space (Montanelli & Du, 2019). However, to the best of our knowledge, the question of whether shallow neural networks with fixed universal depth, independent of ε and d, escape the curse of dimensionality as well for the Korobov space remained open. We settle this question by proving that shallow neural networks also lessen the curse of dimensionality for the Korobov space.

Theorem 3.1. Let ε > 0. For all f ∈ X^{2,∞}(Ω), there exists a neural network with 2 layers, ReLU activation, O(ε^{−1}(log(1/ε))^{3(d−1)/2+1}) neurons, and O(ε^{−1/2}(log(1/ε))^{3(d−1)/2}) training parameters that approximates f within ε for the infinity norm.

In order to prove Theorem 3.1, we first construct a neural network architecture with two layers and O(d^{3/2} ε^{−1/2} log(1/ε)) neurons that approximates the product function p : x ∈ [0, 1]^d ↦ ∏_{i=1}^d x_i within ε, for all ε > 0.

Proposition 3.2. For all ε > 0, there exists a neural network with depth 2, ReLU activation and O(d^{3/2} ε^{−1/2} log(1/ε)) neurons that approximates the product function p : x ∈ [0, 1]^d ↦ ∏_{i=1}^d x_i within ε for the infinity norm.

Sketch of proof. The proof builds upon the observation that p(x) = exp(Σ_{i=1}^d log x_i). We construct an approximating 2-layer neural network where the first layer approximates log x_i for 1 ≤ i ≤ d, and the second layer approximates the exponential. We illustrate the construction in Figure 2. More precisely, fix ε > 0. Consider the function h_ε : x ∈ [0, 1] ↦ max(log x, log ε). We approximate h_ε within ε/d by a piece-wise affine function with O(d^{1/2} ε^{−1/2} log(1/ε)) pieces, then represent this piece-wise affine function by a single-layer neural network ĥ_ε with the same number of neurons as the number of pieces (Lemma B.1, Appendix B.1). This 1-layer network then satisfies ‖h_ε − ĥ_ε‖∞ ≤ ε/d. The first layer of our final network is the union of d copies of ĥ_ε: one for each dimension i, approximating log x_i. Similarly, consider the exponential g : x ∈ R₋ ↦ e^x. We construct a 1-layer neural network ĝ_ε with O(ε^{−1/2} log(1/ε)) neurons such that ‖g − ĝ_ε‖∞ ≤ ε. This will serve as the second layer. Formally, the constructed network p̂_ε is p̂_ε = ĝ_ε(Σ_{i=1}^d ĥ_ε(x_i)).
This 2-layer neural network has O(d^{3/2} ε^{−1/2} log(1/ε)) neurons and verifies ‖p̂_ε − p‖∞ ≤ ε. We use this result to prove Theorem 3.1 and show that we can approximate any Korobov function f ∈ X^{2,∞}(Ω) within ε with a 2-layer neural network of O(ε^{−1}(log(1/ε))^{3(d−1)/2+1}) neurons. Consider the sparse grid construction of the approximating space V_n^{(1)}, using the standard hat function as mother function to create the hierarchical basis W_l (introduced in Section 2.2). The key idea is to construct a shallow neural network approximating the sparse grid approximation, and then use the result of Theorem 2.2 to derive the approximation error. Let f_n^{(1)} be the projection of f on the subspace V_n^{(1)} defined in Section 2.2. f_n^{(1)} can be written as f_n^{(1)}(x) = Σ_{(l,i)∈U_n^{(1)}} v_{l,i} φ_{l,i}(x), where U_n^{(1)} contains the indices (l, i) of the basis functions present in V_n^{(1)}. We can use Theorem 2.2 and choose n carefully such that f_n^{(1)} approximates f within ε for the L∞ norm. The goal is now to approximate f_n^{(1)} with a shallow neural network. Note that the basis functions can be written as products of univariate functions, φ_{l,i} = ∏_{j=1}^d φ_{l_j,i_j}. We can therefore use a structure similar to the product approximation of Proposition 3.2 to approximate the basis functions. Specifically, the first layer approximates the d(2^n − 1) = O(ε^{−1/2}(log(1/ε))^{(d−1)/2}) terms log φ_{l_j,i_j} necessary to construct the basis functions of V_n^{(1)}, and a second layer approximates the exponential in order to obtain approximations of the O(2^n n^{d−1}) = O(ε^{−1/2}(log(1/ε))^{3(d−1)/2}) basis functions of V_n^{(1)}. We provide a detailed figure illustrating the construction, Figure 5 in Appendix B.3. The shallow network that we constructed in Theorem 3.1 uses the ReLU activation function. We extend this result to a larger class of activation functions, which includes commonly used ones.

Definition 3.3. A sigmoid-like activation function σ : R → R is a non-decreasing function having finite limits in ±∞. A ReLU-like activation function σ : R → R is a function having a horizontal asymptote in −∞, i.e. σ is bounded on R₋, and an affine (non-horizontal) asymptote in +∞, i.e. there exists b > 0 such that σ(x) − bx is bounded on R₊.

Most common activation functions fall into these classes. Examples of sigmoid-like activations include the Heaviside, logistic, tanh, arctan and softsign activations, while ReLU-like activations include the ReLU, ISRLU, ELU and soft-plus activations. We extend Theorem 3.1 to all these activations.

Theorem 3.4. For any approximation tolerance ε > 0 and any f ∈ X^{2,∞}(Ω), there exists a neural network with depth 2 and O(ε^{−1/2}(log(1/ε))^{3(d−1)/2}) training parameters that approximates f within ε for the infinity norm, with O(ε^{−1}(log(1/ε))^{3(d−1)/2+1}) (resp. O(ε^{−3/2}(log(1/ε))^{3(d−1)/2})) neurons for a ReLU-like (resp. sigmoid-like) activation.

We note that these results can be further extended to more general Korobov spaces X^{r,p}. Indeed, the main dependence of our neural network architectures on the parameters r and p arises from the sparse grid approximation. Bungartz & Griebel (2004) show that results similar to Theorem 2.2 can be extended to various values of r, p and different error norms with a similar sparse grid construction. For instance, we can use these results, combined with our proposed architecture, to show that the Korobov space X^{r,∞} can be approximated in infinity norm by neural networks with O(ε^{−1/r}(log(1/ε))^{((r+1)/r)(d−1)}) training parameters and the same number of neurons up to a polynomial factor in ε.
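The coefficient decay of Theorem 2.2, on which the choice of n above relies, can be checked numerically. Below is a sketch assuming NumPy, with an illustrative test function and the stencil form of the hierarchical coefficients for the hat basis; the value used for |f|_{2,∞} is the mixed-derivative bound of this particular f.

```python
import numpy as np
from itertools import product

# Sketch of the hierarchical coefficients v_{l,i} for the hat basis: in 1D,
# v_{l,i} = f(i/2^l) - (f((i-1)/2^l) + f((i+1)/2^l))/2, and in d dimensions
# the stencil is applied coordinate-wise (tensor product).
f = lambda x, y: np.sin(np.pi * x) * np.sin(np.pi * y)   # vanishes on boundary
seminorm = np.pi ** 4                                    # sup |d^4 f / dx^2 dy^2|

def surplus(l, i):
    (l1, l2), (i1, i2) = l, i
    h1, h2 = 2.0 ** -l1, 2.0 ** -l2
    s = 0.0
    for (a, wa), (b, wb) in product([(-1, -.5), (0, 1.), (1, -.5)], repeat=2):
        s += wa * wb * f((i1 + a) * h1, (i2 + b) * h2)
    return s

for l in [(1, 1), (2, 3), (3, 2)]:
    bound = 2.0 ** (-2) * 2.0 ** (-2 * sum(l)) * seminorm   # Theorem 2.2, d=2
    for i1 in range(1, 2 ** l[0], 2):
        for i2 in range(1, 2 ** l[1], 2):
            assert abs(surplus(l, (i1, i2))) <= bound
print("coefficient bound |v_{l,i}| <= 2^-d 2^(-2|l|_1) |f|_{2,inf} verified")
```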
4 THE REPRESENTATION POWER OF DEEP NEURAL NETWORKS

Montanelli & Du (2019) used the sparse grid approach to construct deep neural networks with ReLU activation approximating Korobov functions within ε for the L∞ norm, with O(ε^{−1/2}(log(1/ε))^{3(d−1)/2+1}) neurons and depth O(log(1/ε)). We improve this bound for deep neural networks with C² non-linear activation functions. We prove that we only need O(ε^{−1/2}(log(1/ε))^{3(d−1)/2}) neurons and fixed depth, independent of ε, to approximate the unit ball of the Korobov space within ε in the L∞ norm.

Theorem 4.1. Let σ ∈ C² be a non-linear activation function. Let ε > 0. For any function f ∈ X^{2,∞}(Ω), there exists a neural network of depth ⌈log₂ d⌉ + 1, with ReLU activation on the first layer and activation function σ for the next layers, O(ε^{−1/2}(log(1/ε))^{3(d−1)/2}) neurons, and O(ε^{−1/2}(log(1/ε))^{3(d−1)/2}) training parameters, approximating f within ε for the infinity norm.

Compared to the bound for shallow networks in Theorem 3.1, the number of neurons for deep networks is lower by a factor O(√ε), while the number of training parameters is the same. Hence, deep neural networks are more efficient than shallow neural networks in the sense that shallow networks need more "inactive" neurons to reach the same approximation power, but have the same number of parameters. This gap in the number of "inactive" neurons can be consequential in practice, as we may not know exactly which neurons to train and which neurons to fix. This new bound on the number of parameters and neurons matches the approximation power of sparse grids. In fact, sparse grids use Θ(ε^{−1/2}(log(1/ε))^{3(d−1)/2}) parameters (weights of basis functions) to approximate Korobov functions within ε. Our construction in Theorem 4.1 shows that deep neural networks with depth fixed in ε can fully encode sparse grid approximators. Neural networks are therefore more powerful function approximators. In particular, any sparse grid approximation using O(N(ε)) parameters can be represented exactly by a neural network using O(N(ε)) neurons. The deep approximating network (see Figure 3) has a structure very similar to our construction of an approximating shallow network in Theorem 3.1. The main difference lies in the approximation of the product function. Instead of using a 2-layer neural network, we now use a deep network. The following result shows that deep neural networks can represent exactly the product function.

Proposition 4.2 (Lin et al. (2017), Appendix A). Let σ be a C² non-linear activation function. For any approximation error ε > 0, there exists a neural network with ⌈log₂ d⌉ hidden layers and activation σ, using at most 8d neurons arranged in a binary tree network, that approximates the product function ∏_{i=1}^d x_i on [0, 1]^d within ε for the infinity norm.

An important remark is that the structure of the constructed neural network is independent of ε. In particular, the depth and number of neurons are independent of the approximation precision ε, which we refer to as exact approximation. It is known that an exponential number of neurons is needed in order to exactly approximate the product function with a 1-layer neural network (Lin et al., 2017); however, the question of whether one could approximate the product with a shallow network and a polynomial number of neurons remained open. In Proposition 3.2, we answer this question positively by constructing an ε-approximating neural network of depth 2 with ReLU activation and O(d^{3/2} ε^{−1/2} log(1/ε)) neurons. Using the same ideas as in Theorem 3.4, we can generalize this result to obtain an ε-approximating neural network of depth 2 with O(d^{3/2} ε^{−1/2} log(1/ε)) neurons for a ReLU-like activation, or O(d² ε^{−1} log(1/ε)) neurons for a sigmoid-like activation.
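A sketch of the binary-tree idea behind Proposition 4.2, assuming NumPy. The pair-product gadget below uses second differences of an offset tanh (an illustrative C² activation with non-zero second derivative at 0) and the identity xy = ((x+y)² − x² − y²)/2; it is not the exact construction of Lin et al. (2017).

```python
import numpy as np

# Sketch of Proposition 4.2: with a C^2 non-linear activation, squares (hence
# pair products) can be approximated with a fixed number of neurons, and a
# binary tree of pair-products computes prod_i x_i in ceil(log2 d) layers.
g = lambda t: np.tanh(t + 1.0)
g2_at_0 = -2.0 * np.tanh(1.0) / np.cosh(1.0) ** 2     # g''(0) != 0

def square(t, lam=1e-2):
    # second difference: g(lam t) + g(-lam t) - 2 g(0) ~ lam^2 t^2 g''(0)
    return (g(lam * t) + g(-lam * t) - 2.0 * g(0.0)) / (lam ** 2 * g2_at_0)

def pair_product(x, y):
    return (square(x + y) - square(x) - square(y)) / 2.0

def tree_product(xs):
    xs = list(xs)
    while len(xs) > 1:                      # one tree level per iteration
        xs = [pair_product(xs[k], xs[k + 1]) if k + 1 < len(xs) else xs[k]
              for k in range(0, len(xs), 2)]
    return xs[0]

x = np.array([0.9, 0.5, 0.7, 0.3])
print(tree_product(x), np.prod(x))          # close for small lam
```

Smaller lam improves accuracy until floating-point cancellation dominates, mirroring the trade-off hidden in the exact construction.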
5 NEURAL NETWORKS ARE NEAR-OPTIMAL FUNCTION APPROXIMATORS

In the previous sections, we proved upper bounds on the number of neurons and training parameters needed by deep and shallow neural networks to approximate the Korobov space X^{2,∞}(Ω). We now investigate how good the performance of neural networks is as function approximators. We prove a lower bound on the number of parameters needed by any continuous function approximator to approximate the Korobov space. In particular, neural networks, deep and shallow, will nearly match this lower bound, making them near-optimal function approximators. Let us first formalize the notion of continuous function approximators, following the framework of DeVore et al. (1989). For any Banach space X (e.g., a function space) and a subset K ⊂ X of elements to approximate, we define a continuous function approximator with N parameters as a continuous parametrization a : K → R^N, together with a reconstruction scheme, which is an N-dimensional manifold M_N : R^N → X. For any element f ∈ K, the approximation given is M_N(a(f)): the parametrization a is derived continuously from the function f and then given as input to the reconstruction manifold, which outputs an approximating function in X. The error of this function approximator is defined as

E_{N,a,M_N}(K)_X := sup_{f∈K} |f − M_N(a(f))|_X.

The best function approximator for the space K minimizes this error. The minimal error for the space K is given by

E_N(K)_X = min_{a,M_N} E_{N,a,M_N}(K)_X.

In other terms, a continuous function approximator with N parameters cannot hope to approximate K better than within E_N(K)_X. A class of function approximators is a set of function approximators with a given structure. For example, neural networks with continuous parametrizations are a class of function approximators where the number of parameters is the number of training parameters. We say that a class of function approximators is optimal for the space of functions K if it matches this minimal error asymptotically in N, within a constant multiplicative factor. In other words, the number of parameters needed by the class to approximate functions in K within ε matches asymptotically, within a constant, the least number of parameters N needed to satisfy E_N(K)_X ≤ ε. The norm considered in the approximation of the functions of K is the norm associated with the space X. DeVore et al. (1989) showed that this minimal error E_N(K)_X is lower bounded by the Bernstein width of the subset K ⊂ X, defined as

b_N(K)_X := sup_{X_{N+1}} sup{ρ : ρ U(X_{N+1}) ⊂ K},

where the outer sup is taken over all (N + 1)-dimensional linear subspaces X_{N+1} of X, and U(Y) denotes the unit ball of Y, for any linear subspace Y of X.

Theorem 5.1 (DeVore et al. (1989)). Let X be a Banach space and K ⊂ X. Then E_N(K)_X ≥ b_N(K)_X.

We prove a lower bound on the least number of parameters any class of continuous function approximators needs to approximate functions of the Korobov space.

Theorem 5.2. Take X = L∞(Ω) and K = {f ∈ X^{2,∞}(Ω) : |f|_{X^{2,∞}(Ω)} ≤ 1}, the unit ball of the Korobov space. Then, there exists c > 0 with E_N(K)_X ≥ c N^{−2} (log N)^{d−1}. Equivalently, for ε > 0, a continuous function approximator approximating K within ε in L∞ norm uses at least Θ(ε^{−1/2}(log(1/ε))^{(d−1)/2}) parameters.
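Before the proof sketch, here is a small numerical sanity check (assuming NumPy) of the stated equivalence between the error rate E_N and the parameter count N(ε); the constant c and the dimension are illustrative.

```python
import numpy as np

# Sketch of the equivalence in Theorem 5.2: if E_N ~ c N^{-2} (log N)^{d-1},
# plugging in N(eps) = eps^{-1/2} (log(1/eps))^{(d-1)/2} should give an error
# of order eps, up to slowly varying logarithmic corrections.
d, c = 4, 1.0
err = lambda N: c * N ** -2.0 * np.log(N) ** (d - 1)

for eps in (1e-4, 1e-6, 1e-8, 1e-10):
    N = eps ** -0.5 * np.log(1.0 / eps) ** ((d - 1) / 2.0)
    print(f"eps={eps:.0e}  E_N / eps = {err(N) / eps:.3f}")
    # the ratio drifts only through log-log corrections, consistent with the
    # two rates matching up to constants and logarithmic factors
```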
Sketch of proof. We seek an appropriate subspace X_{N+1} in order to lower bound the Bernstein width b_N(K)_X, which in turn provides a lower bound on the approximation error (Theorem 5.1). To do so, we use the Deslaurier-Dubuc interpolet of degree 2, φ^{(2)} (see Figure 6), which is C². Using the sparse grids approach, we construct a hierarchical basis in X^{2,∞}(Ω) using φ^{(2)} as mother function, and define X_{N+1} as the approximation space V_n^{(1)}. Here n is chosen such that the dimension of V_n^{(1)} is roughly N + 1. The goal is to estimate sup{ρ : ρ U(X_{N+1}) ⊂ K}, which will lead to a bound on b_N(K)_X. To do so, we upper bound the Korobov norm by the L∞ norm for elements of X_{N+1}. Any function u ∈ X_{N+1} can be written u = Σ_{l,i} v_{l,i} φ_{l,i}. Using a stencil representation of the coefficients v_{l,i}, we are able to obtain an upper bound |u|_{X^{2,∞}} ≤ Γ_d ‖u‖∞, where Γ_d = O(2^{2n} n^{d−1}). Then b_N(K)_X ≥ 1/Γ_d, which yields the desired bound.

This lower bound matches within a logarithmic factor the upper bound on the number of training parameters needed by deep and shallow neural networks to approximate the Korobov space within ε: O(ε^{−1/2}(log(1/ε))^{3(d−1)/2}) (Theorem 3.1 and Theorem 4.1). It exhibits the same exponential dependence in d with base log(1/ε) and the same main dependence ε^{−1/2} on ε. Note that the upper and lower bounds can both be rewritten as O(ε^{−1/2−δ}) for all δ > 0. Moreover, our constructions in Theorem 3.1 and Theorem 4.1 are continuous, which follows directly from the continuity of the sparse grid parameters (see the bound on v_{l,i} in Theorem 2.2). Our bounds therefore prove that deep and shallow neural networks are near-optimal classes of function approximators for the Korobov space. Interestingly, the subspace X_{N+1} our proof uses to show the lower bound is essentially the same as the subspace we use to approximate Korobov functions in our proofs of the upper bounds (Theorems 3.1 and 4.1). The difference lies in the choice of the interpolet φ used to construct the basis functions: degree 2 for the former (which provides the regularity needed in the proof), and degree 1 for the latter.

6 CONCLUSION AND DISCUSSION

We proved new upper and lower bounds on the number of neurons and training parameters needed by shallow and deep neural networks to approximate Korobov functions. Our work shows that shallow and deep networks not only lessen the curse of dimensionality but are also near-optimal. Our work suggests several extensions. First, it would be very interesting to see whether our proposed theoretical near-optimal architectures have powerful empirical performance. While commonly used structures (e.g., Convolutional Neural Networks or Recurrent Neural Networks) are motivated by properties of the data such as symmetries, our structures are motivated by theoretical insights on how to optimally approximate a large class of functions with a given number of neurons and parameters. Second, our upper bounds (Theorems 3.1 and 4.1) nearly match our lower bound (Theorem 5.2) on the least number of training parameters needed to approximate the Korobov space. We wonder if it is possible to close the gap between these bounds and hence prove neural networks' optimality; e.g., one could prove that sparse grids are optimal function approximators by improving our lower bound to match the sparse grid number of parameters O(ε^{−1/2}(log(1/ε))^{3(d−1)/2}). Finally, we showed the near-optimality of neural networks among the set of continuous function approximators.
It would be interesting to explore lower bounds (analogous to Theorem 5.2) when considering larger sets of function approximators, e.g., discontinuous function approximators. Could some discontinuous neural network construction break the curse of dimensionality for the Sobolev space? The question is then whether neural networks are still near-optimal in these larger sets of function approximators.

ACKNOWLEDGMENTS

The authors are grateful to Tomaso Poggio and the MIT 6.520 course teaching staff for several discussions, remarks and comments that were useful to this work.

APPENDIX

A ON KOROBOV FUNCTIONS

In this section, we further discuss the Korobov functions X^{2,p}(Ω). Korobov functions enjoy more smoothness than Sobolev functions: smoothness for X^{2,p}(Ω) is measured in terms of mixed derivatives of order two. Korobov functions X^{2,p}(Ω) can be differentiated twice in each coordinate simultaneously, while Sobolev W^{2,p}(Ω) functions can only be differentiated twice in total. For example, in two dimensions, for a function f to be Korobov it is required to have

∂f/∂x₁, ∂f/∂x₂, ∂²f/∂x₁², ∂²f/∂x₂², ∂²f/∂x₁∂x₂, ∂³f/∂x₁²∂x₂, ∂³f/∂x₁∂x₂², ∂⁴f/∂x₁²∂x₂² ∈ L^p(Ω),

while for f to be Sobolev it requires only

∂f/∂x₁, ∂f/∂x₂, ∂²f/∂x₁², ∂²f/∂x₂², ∂²f/∂x₁∂x₂ ∈ L^p(Ω).

The former can be seen from |α|∞ ≤ 2 and the latter from |α|₁ ≤ 2 in the definitions of X^{r,p}(Ω) and W^{r,p}(Ω). We now provide intuition on why Korobov functions are easier to approximate. One of the key difficulties in approximating Sobolev functions is possible high-frequency oscillations, which may require an exponential number of neurons (Telgarsky, 2016). For instance, consider functions which have a structure similar to W_{(n,…,n)} (defined in Subsection 2.2): for any smooth basis function φ with support on the unit cube (see Figure 6 for an example), consider the linear function space formed by linear combinations of dilations of φ with support on each cube of the d-dimensional grid of step 2^{−n}. This corresponds exactly to the construction of W_{(n,…,n)}, which uses the product of hat functions on each dimension as basis function φ. This function space can have strong oscillations in all directions at a time. The Korobov space prohibits such behavior by ensuring that functions can be differentiated twice in each dimension, simultaneously. As a result, functions cannot oscillate in all directions at a time without having a large Korobov norm. We end this paragraph by comparing the Korobov space to the space of bandlimited functions, which was shown to avoid the curse of dimensionality (Montanelli et al., 2019). These are functions for which the support of the frequency components is restricted to a fixed compact set. Intuitively, approximating these functions can be achieved because the set of frequencies is truncated to a compact set, which then allows one to sample frequencies and obtain approximation guarantees. Instead of imposing the hard constraint of cutting high frequencies, the Korobov space asks for smoothness conditions which do not prohibit high frequencies but rather impose a budget for high-frequency oscillations. We make this idea precise in the next example. A concrete example of Korobov functions is given by an analogue of the function space V_n^{(1)} which we used as approximation space in the proofs of our results (see Section 2.2). Similarly to the previous paragraph, one should use a smooth basis function to ensure differentiability. Recall that V_n^{(1)} is defined as V_n^{(1)} := ⊕_{|l|₁ ≤ n+d−1} W_l.
Intuitively, this approximation space introduces a "budget" of oscillations across all dimensions through the constraint Σ_{i=1}^d l_i ≤ n + d − 1. As a result, dilations of the basis function can only occur in a restricted set of directions at a time, which ensures that the Korobov norm stays bounded.

B PROOFS OF SECTION 3

B.1 APPROXIMATING THE PRODUCT FUNCTION

In this subsection, we construct a neural network architecture with two layers and O(d^{3/2} ε^{−1/2} log(1/ε)) neurons that approximates the product function p : x ∈ [0, 1]^d ↦ ∏_{i=1}^d x_i within ε for all ε > 0, which proves Proposition 3.2. We first prove a simple lemma allowing us to represent univariate piece-wise affine functions by shallow neural networks.

Lemma B.1. Any one-dimensional continuous piece-wise affine function with m pieces is representable exactly by a shallow neural network with ReLU activation, with m neurons on a single layer.

Proof. This is a simple consequence of Proposition 1 in Yarotsky (2017). We recall the proof for completeness. Let x₁ ≤ … ≤ x_{m−1} be the subdivision of the piece-wise affine function f. We use a neural network of the form

g(x) := f(x₁) + Σ_{k=1}^{m−1} w_k (x − x_k)₊ − w₀ (x₁ − x)₊,

where w₀ is the slope of f on the piece ≤ x₁, w₁ is the slope of f on the piece [x₁, x₂],

w_k = ( f(x_{k+1}) − f(x₁) − Σ_{i=1}^{k−1} w_i (x_{k+1} − x_i) ) / (x_{k+1} − x_k), for k = 1, …, m − 2,

and w_{m−1} = w̃ − Σ_{k=1}^{m−2} w_k, where w̃ is the slope of f on the piece ≥ x_{m−1}. Notice that f and g coincide on all x_k for 1 ≤ k ≤ m − 1. Furthermore, g has the same slope as f on each piece; therefore, g = f.

We can approximate univariate right-continuous functions by piece-wise affine functions, and then use Lemma B.1 to represent them by shallow neural networks. The following lemma shows that O(ε^{−1}) neurons are sufficient to represent an increasing right-continuous function with a shallow neural network.

Lemma B.2. Let f : I → [c, d] be a right-continuous increasing function, where I is an interval, and let ε > 0. There exists a shallow neural network with ReLU activation, with ⌈(d − c)/ε⌉ neurons on a single layer, that approximates f within ε for the infinity norm.

Proof. Let m = ⌊(d − c)/ε⌋. Define a subdivision of the image interval, c ≤ y₁ ≤ … ≤ y_m ≤ d, where y_k = c + kε for k = 1, …, m. Note that this subdivision contains exactly ⌈(d − c)/ε⌉ pieces. Now define a subdivision of I, x₁ ≤ x₂ ≤ … ≤ x_m, by x_k := sup{x ∈ I : f(x) ≤ y_k}, for k = 1, …, m. This subdivision still has ⌈(d − c)/ε⌉ pieces. We now construct our approximation function f̂ on I as the continuous piece-wise affine function on the subdivision x₁ ≤ … ≤ x_m such that f̂(x_k) = y_k for all 1 ≤ k ≤ m, and f̂ is constant before x₁ and after x_m (see Figure 4). Let x ∈ I.
• If x ≤ x₁, then because f is increasing and right-continuous, c ≤ f(x) ≤ f(x₁) ≤ y₁ = c + ε. Therefore |f(x) − f̂(x)| = |f(x) − (c + ε)| ≤ ε.
• If x_k < x ≤ x_{k+1}, we have y_k < f(x) ≤ f(x_{k+1}) ≤ y_{k+1}. Further note that y_k ≤ f̂(x) ≤ y_{k+1}. Therefore |f(x) − f̂(x)| ≤ y_{k+1} − y_k = ε.
• If x_m < x, then y_m < f(x) ≤ d. Again, |f(x) − f̂(x)| = |f(x) − y_m| ≤ d − y_m ≤ ε.

Therefore ‖f − f̂‖∞ ≤ ε. We can now use Lemma B.1 to end the proof.

If the function to approximate has some regularity, the number of neurons needed for approximation can be significantly reduced. In the following lemma, we show that O(ε^{−1/2}) neurons are sufficient to approximate a C² univariate function with a shallow neural network.

Lemma B.3. Let f : [a, b] → [c, d] be C², and let ε > 0.
There exists a shallow neural network with ReLU activation, with 1√ 2 min( ∫ √ |f ′′|(1 + µ(f, )), (b − a) √ ‖f ′′‖∞) neurons on a single layer, where µ(f, )→ 1 as → 0, that approximates f within for the infinity norm. Proof. See Appendix B.2.1. We will now use the ideas of Lemma B.2 and Lemma B.3 to approximate a truncated log function, which we will use in the construction of our neural network approximating the product. Corollary B.4. Let > 0 sufficiently small and δ > 0. Consider the truncated logarithm function log : [δ, 1] −→ R. There exists a shallow neural network with ReLU activation, with − 12 log 1δ neurons on a single layer, that approximates f within for the infinity norm. Proof. See Appendix B.2.2. We are now ready to construct a neural network approximating the product function and prove Proposition 3.2. The proof builds upon on the observation that ∏d i=1 xi = exp( ∑d i=1 log xi). We construct an approximating 2-layer neural network where the first layer computes log xi for 1 ≤ i ≤ d, and the second layer computes the exponential. We illustrate the construction of the proof in Figure 2. Proof of Proposition 3.2 Fix > 0. Consider the function h : x ∈ [0, 1] 7→ max(log x, log ) ∈ [log , 0]. Using Corollary B.4, there exists a neural network ĥ : [0, 1] −→ [log , 0] with 1 + dd 12 − 12 log 1 e neurons on a single layer such that ‖h − ĥ ‖∞ ≤ d . Indeed, one can take the -approximation of h : x ∈ [ , 1] 7→ log x ∈ [log , 0], then extend this function to [0, ] with a constant equal to log . The resulting piece-wise affine function has one additional segment corresponding to one additional neuron in the approximating function. Similarly, consider the exponential g : x ∈ R− 7→ ex ∈ [0, 1]. Because g is right-continuous increasing, we can use Lemma B.3 to construct a neural network ĝ : R− −→ [0, 1] with 1 + ⌈ 1√ 2 log 1 ⌉ neurons on a single layer such that ‖g − ĝ ‖∞ ≤ . Indeed, again one can take the -approximation of g : x ∈ [log , 0] 7→ ex ∈ [0, 1], then extend this function to (−∞, log ] with a constant equal to . The corresponding neural network has an additional neuron. We construct our final neural network φ̂ (see Figure 2) as φ̂ = ĝ ( d∑ i=1 ĥ (xi) ) . Note that φ̂ can be represented as a 2-layer neural network: the first layer is composed of the union of the 1+dd 12 − 12 log 1 e neurons composing each of the 1-layer neural networks ĥ i : x ∈ [0, 1]d 7→ ĥ (xi) ∈ R for each dimension i ∈ {1, . . . , d}. The second layer is composed of the 1+ ⌈ 1√ 2 log 1 ⌉ neurons of ĝ . Hence, the constructed neural network φ̂ has O(d 3 2 − 1 2 log 1 ) neurons. Let us now analyze the approximation error. Let x ∈ [0, 1]d. For the sake of brevity, denote ŷ = ∑d i=1 ĥ (xi) and y = ∑d i=1 log(xi). We have, |φ̂ (x)− p(x)| ≤ |φ̂ (x)− exp(ŷ)|+ | exp(ŷ)− exp(y)| ≤ + d∏ i=1 xi · | exp(ŷ − y)− 1|, where we used the fact that |φ̂ (x)− exp(ŷ)| = |ĝ (ŷ)− g(ŷ)| ≤ ‖ĝ − g‖∞ ≤ . First suppose that x ≥ . In this case, for all i ∈ {1, . . . , d} we have |ĥ (xi)− log(xi)| = |ĥ (xi)− h (xi)| ≤ d . Then, |ŷ−y| ≤ . Consequently, |φ̂ (x)−p(x)| ≤ +max(|e −1|, |e− −1|) ≤ 3 , for > 0 sufficiently small. Without loss of generality now suppose x1 ≤ . Then ŷ ≤ h (x1) ≤ log , so by definition of ĝ , we have 0 ≤ φ̂ (x) = ĝ (ŷ) ≤ exp(log ) = . Also, 0 ≤ p(x) ≤ so finally |φ̂ (x)− p(x)| ≤ . Remark B.5. Note that using Lemma B.2 instead of Lemma B.3 to construct approximating shallow networks for log and exp would yield approximation functions ĥ withO( ⌈ d log 1 ⌉ ) neurons and ĝ with O( ⌈ 1 ⌉ ) neurons. 
Remark B.5. Note that using Lemma B.2 instead of Lemma B.3 to construct approximating shallow networks for log and exp would yield approximating functions $\hat h_\varepsilon$ with $O(\lceil d\,\varepsilon^{-1}\log\frac{1}{\varepsilon} \rceil)$ neurons and $\hat g_\varepsilon$ with $O(\lceil \varepsilon^{-1} \rceil)$ neurons. Therefore, the corresponding neural network would approximate the product $p$ with $O(d^2\varepsilon^{-1}\log\frac{1}{\varepsilon})$ neurons.

B.2 MISSING PROOFS OF SECTION B.1

B.2.1 PROOF OF LEMMA B.3

Proof. Similarly to the proof of Lemma B.2, the goal is to approximate $f$ by a piece-wise affine function $\hat f$ defined on a subdivision $x_0 = a \le x_1 \le \ldots \le x_m \le x_{m+1} = b$ such that $f$ and $\hat f$ coincide on $x_0, \ldots, x_{m+1}$. We first analyze the error induced by a linear approximation of the function on each piece. Let $x \in [u,v]$ for $u, v \in I$. Using the mean value theorem, there exists $\alpha_x \in [u,x]$ such that $f(x) - f(u) = f'(\alpha_x)(x-u)$, and $\beta_x \in [x,v]$ such that $f(v) - f(x) = f'(\beta_x)(v-x)$. Combining these two equalities, we get $$f(x) - f(u) - (x-u)\frac{f(v)-f(u)}{v-u} = \frac{(v-x)(f(x)-f(u)) - (x-u)(f(v)-f(x))}{v-u} = (x-u)(x-v)\frac{f'(\beta_x) - f'(\alpha_x)}{v-u} = (x-u)(x-v)\frac{\int_{\alpha_x}^{\beta_x} f''(t)\,dt}{v-u}.$$ Hence, $$f(x) = f(u) + (x-u)\frac{f(v)-f(u)}{v-u} + (x-u)(x-v)\frac{\int_{\alpha_x}^{\beta_x} f''(t)\,dt}{v-u}. \quad (1)$$

We now apply this result to bound the approximation error on each piece of the subdivision. Let $k \in [m]$. Recall that $\hat f$ is linear on $[x_k, x_{k+1}]$ with $\hat f(x_k) = f(x_k)$ and $\hat f(x_{k+1}) = f(x_{k+1})$. Hence, for all $x \in [x_k, x_{k+1}]$, $\hat f(x) = f(x_k) + (x - x_k)\frac{f(x_{k+1}) - f(x_k)}{x_{k+1} - x_k}$. Using equation (1) with $u = x_k$ and $v = x_{k+1}$, we get $$\|f - \hat f\|_{\infty, [x_k, x_{k+1}]} \le \sup_{x \in [x_k, x_{k+1}]} \left| (x - x_k)(x_{k+1} - x)\frac{\int_{\alpha_x}^{\beta_x} f''(t)\,dt}{x_{k+1} - x_k} \right| \le \frac{1}{2}(x_{k+1} - x_k)\int_{x_k}^{x_{k+1}} |f''(t)|\,dt \le \frac{1}{2}(x_{k+1} - x_k)^2 \|f''\|_{\infty, [x_k, x_{k+1}]}.$$ Therefore, using a regular subdivision with step $\sqrt{\frac{2\varepsilon}{\|f''\|_\infty}}$ yields an $\varepsilon$-approximation of $f$ with $\left\lceil \frac{(b-a)\sqrt{\|f''\|_\infty}}{\sqrt{2\varepsilon}} \right\rceil$ pieces.

We now show that for any $\mu > 0$, there exists an $\varepsilon$-approximation of $f$ with at most $\frac{\int \sqrt{|f''|}}{\sqrt{2\varepsilon}}(1+\mu)$ pieces. To do so, we use the fact that the upper Riemann sum of $\sqrt{f''}$ converges to the integral, since $\sqrt{f''}$ is continuous on $[a,b]$. First define a partition $a = X_0 \le \ldots \le X_K = b$ of $[a,b]$ such that the upper Riemann sum $R(\sqrt{f''})$ on this subdivision satisfies $R(\sqrt{f''}) \le (1 + \mu/2)\int_a^b \sqrt{f''}$. Now define on each interval $I_k$ of the partition a regular subdivision with step $\sqrt{\frac{2\varepsilon}{\|f''\|_{I_k}}}$ as before. Finally, consider the subdivision formed by the union of all these subdivisions, and construct the approximation $\hat f$ on this final subdivision. By construction, $\|f - \hat f\|_\infty \le \varepsilon$ because the inequality holds on each piece of the subdivision. Further, the number of pieces is $$\sum_{i=0}^{K-1} \left(1 + (X_{i+1} - X_i)\frac{\sup_{[X_i, X_{i+1}]}\sqrt{f''}}{\sqrt{2\varepsilon}}\right) = \frac{R(\sqrt{f''})}{\sqrt{2\varepsilon}} + K \le \frac{\int \sqrt{|f''|}}{\sqrt{2\varepsilon}}(1 + \mu),$$ for $\varepsilon > 0$ small enough. Using Lemma B.1, we can complete the proof.

B.2.2 PROOF OF COROLLARY B.4

Proof. In view of Lemma B.3, the goal is to show that we can remove the dependence of $\mu(f,\varepsilon)$ on $\delta$. This essentially comes from the fact that the upper Riemann sum behaves well for approximating log. Consider the subdivision $x_0 := \delta \le x_1 \le \ldots \le x_m \le x_{m+1} := 1$ with $m = \lfloor \frac{1}{\tilde\varepsilon}\log\frac{1}{\delta} \rfloor$, where $\tilde\varepsilon := \log(1 + \sqrt{2\varepsilon})$, such that $x_k = e^{\log\delta + k\tilde\varepsilon}$ for $k = 0, \ldots, m-1$. Denote by $\hat f$ the corresponding piece-wise affine approximation. Similarly to the proof of Lemma B.3, for $k = 0, \ldots, m-1$, $$\|\log - \hat f\|_{\infty, [x_k, x_{k+1}]} \le \frac{1}{2}(x_{k+1} - x_k)^2 \|f''\|_{\infty, [x_k, x_{k+1}]} \le \frac{(e^{\tilde\varepsilon} - 1)^2}{2} \le \varepsilon.$$ The proof follows.
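Before moving on, the quadratic bound $\|f - \hat f\|_{\infty,[x_k,x_{k+1}]} \le \frac{1}{2}(x_{k+1}-x_k)^2\|f''\|_\infty$ used in both proofs above admits a one-screen numerical check (our sketch; $f = \exp$ on $[0,1]$ is an arbitrary choice of ours).

```python
# Check that piecewise-linear interpolation with step sqrt(2*eps/||f''||_inf)
# achieves error <= eps, as in the proofs of Lemma B.3 and Corollary B.4.
import numpy as np

f = np.exp                              # f'' = exp too, so ||f''||_inf = e on [0,1]
eps = 1e-4
h = np.sqrt(2 * eps / np.e)             # regular subdivision step
knots = np.arange(0.0, 1.0 + h, h)

t = np.linspace(0.0, 1.0, 200001)
err = np.abs(f(t) - np.interp(t, knots, f(knots))).max()
print(f"pieces: {len(knots) - 1}, error: {err:.2e}, target eps: {eps:.0e}")
# Observed error is ~eps/4: the sharp interpolation constant is 1/8, the proof uses 1/2.
```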
B.3 PROOF OF THEOREM 3.1: APPROXIMATING THE KOROBOV SPACE $X^{2,\infty}(\Omega)$

In this subsection, we prove Theorem 3.1 and show that we can approximate any Korobov function $f \in X^{2,\infty}(\Omega)$ within $\varepsilon$ with a 2-layer neural network of $O(\varepsilon^{-1/2}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}})$ neurons. We illustrate the construction in Figure 5. Our proof combines the constructed network approximating the product function and a decomposition of $f$ as a sum of separable functions, i.e., a decomposition of the form $$f(x) \approx \sum_{k=1}^K \prod_{j=1}^d \phi_j^{(k)}(x_j), \quad \forall x \in [0,1]^d.$$ Consider the sparse grid construction of the approximating space $V_n^{(1)}$ using the standard hat function as mother function to create the hierarchical basis $W_l$ (introduced in Section 2.2). We recall that the approximation space is defined as $V_n^{(1)} := \bigoplus_{|l|_1 \le n+d-1} W_l$. We will construct a neural network approximating the sparse grid approximation and then use the result of Theorem 2.2 to derive the approximation error. Figure 5 gives an illustration of the construction. Let $f_n^{(1)}$ be the projection of $f$ on the subspace $V_n^{(1)}$. $f_n^{(1)}$ can be written as $f_n^{(1)}(x) = \sum_{(l,i) \in U_n^{(1)}} v_{l,i}\,\phi_{l,i}(x)$, where $U_n^{(1)}$ contains the indices $(l,i)$ of basis functions present in $V_n^{(1)}$, i.e., $$U_n^{(1)} := \{(l,i) : |l|_1 \le n+d-1,\ 1 \le i \le 2^l - 1,\ i_j \text{ odd for all } 1 \le j \le d\}. \quad (2)$$ Throughout the proof, we explicitly construct a neural network that uses this decomposition to approximate $f_n^{(1)}$. We then use Theorem 2.2 and choose $n$ carefully such that $f_n^{(1)}$ approximates $f$ within $\varepsilon$ for the $L^\infty$ norm.

Note that the basis functions can be written as a product of univariate functions, $\phi_{l,i} = \prod_{j=1}^d \phi_{l_j, i_j}$. We can therefore use the product approximation of Proposition 3.2 to approximate the basis functions. Specifically, we will use one layer to approximate the terms $\log \phi_{l_j,i_j}$ and a second layer to approximate the exponential. We now present in detail the construction of the first layer. First, recall that $\phi_{l_j,i_j}$ is a piece-wise affine function with subdivision $0 \le \frac{i_j - 1}{2^{l_j}} \le \frac{i_j}{2^{l_j}} \le \frac{i_j + 1}{2^{l_j}} \le 1$. Define the error term $\tilde\varepsilon := \frac{\varepsilon}{2|f|_{2,\infty}}$. We consider a symmetric subdivision of the interval $\left[\frac{i_j - 1 + \tilde\varepsilon}{2^{l_j}}, \frac{i_j + 1 - \tilde\varepsilon}{2^{l_j}}\right]$. We define it as follows: $x_0 = \frac{i_j - 1 + \tilde\varepsilon}{2^{l_j}} \le x_1 \le \cdots \le x_{m+1} = \frac{i_j}{2^{l_j}} \le x_{m+2} \le \cdots \le x_{2m+2} = \frac{i_j + 1 - \tilde\varepsilon}{2^{l_j}}$, where $m = \lfloor \frac{1}{\varepsilon_0}\log\frac{1}{\tilde\varepsilon} \rfloor$ and $\varepsilon_0 := \log(1 + \sqrt{2\tilde\varepsilon/d})$, such that $$x_k = \frac{i_j - 1 + e^{\log\tilde\varepsilon + k\varepsilon_0}}{2^{l_j}}, \quad 0 \le k \le m, \qquad x_k = \frac{i_j + 1 - e^{\log\tilde\varepsilon + (2m+2-k)\varepsilon_0}}{2^{l_j}}, \quad m+2 \le k \le 2m+2.$$ Note that with this definition, the terms $\log(2^{l_j} x_k - i_j + 1)$ form a regular sequence with step $\varepsilon_0$. We now construct the piece-wise affine function $\hat g_{l_j,i_j}$ on the subdivision $x_0 \le \cdots \le x_{2m+2}$, which coincides with $\log \phi_{l_j,i_j}$ on $x_0, \cdots, x_{2m+2}$ and is constant on $[0, x_0]$ and $[x_{2m+2}, 1]$. By Lemma B.1, this function can be represented by a 1-layer neural network with as many neurons as the number of pieces of $\hat g$, i.e., at most $2\sqrt{\frac{3d}{\tilde\varepsilon}}\log\frac{1}{\tilde\varepsilon}$ neurons for $\tilde\varepsilon$ sufficiently small. A proof similar to that of Corollary B.4 shows that $\hat g$ approximates $\max(\log \phi_{l_j,i_j}, \log(\tilde\varepsilon/3))$ within $\tilde\varepsilon/(3d)$ for the infinity norm. We use this construction to compute, in parallel, $\tilde\varepsilon/(3d)$-approximations of $\max(\log \phi_{l_j,i_j}(x_j), \log\tilde\varepsilon)$ for all $1 \le j \le d$ and $1 \le l_j \le n$, $1 \le i_j \le 2^{l_j}$ with $i_j$ odd. These are exactly the 1-dimensional functions that we need in order to compute the $d$-dimensional function basis of the approximation space $V_n^{(1)}$. There are $d(2^n - 1)$ such univariate functions; therefore our first layer contains at most $2^{n+1} d\sqrt{\frac{3d}{\tilde\varepsilon}}\log\frac{1}{\tilde\varepsilon}$ neurons.

We now turn to the second layer. The result of the first two layers will be $\tilde\varepsilon/3$-approximations of $\phi_{l,i}$ for all $(l,i) \in U_n^{(1)}$. Recall that $U_n^{(1)}$ contains the indices of the functions forming a basis of the approximation space $V_n^{(1)}$. To do so, for each index $(l,i) \in U_n^{(1)}$ we construct a 1-layer neural network approximating the function exp, which will compute an approximation of $\exp(\hat g_{l_1,i_1} + \cdots + \hat g_{l_d,i_d})$.
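Since the second layer instantiates one exp-approximating block per index in $U_n^{(1)}$, it is worth seeing how large this set actually is. The sketch below (our code) enumerates $U_n^{(1)}$ from equation (2) and exhibits the $O(2^n n^{d-1})$ growth used in the neuron counts that follow.

```python
# Enumerate the sparse-grid index set U_n^(1) of equation (2):
# pairs (l, i) with |l|_1 <= n + d - 1, 1 <= i <= 2^l - 1, all i_j odd.
from itertools import product

def sparse_grid_indices(n, d):
    out = []
    for l in product(range(1, n + 1), repeat=d):
        if sum(l) <= n + d - 1:
            odd = [range(1, 2 ** lj, 2) for lj in l]   # odd i_j for each dimension
            out.extend((l, i) for i in product(*odd))
    return out

for d in (1, 2, 3):
    print(d, [len(sparse_grid_indices(n, d)) for n in range(1, 7)])
# Counts match sum_{i=0}^{n-1} 2^i * binom(d-1+i, d-1) = 2^n (n^{d-1}/(d-1)! + O(n^{d-2})),
# far below the (2^n - 1)^d basis functions of the full grid.
```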
The approximation of exp is constructed in the same way as for Lemma B.3. Consider a regular subdivision of the interval $[\log(\tilde\varepsilon/3), 0]$ with step $\sqrt{2\tilde\varepsilon/3}$, i.e., $x_0 := \log(\tilde\varepsilon/3) \le x_1 \le \cdots \le x_m \le x_{m+1} = 0$, where $m = \lfloor \sqrt{\frac{3}{2\tilde\varepsilon}}\log\frac{3}{\tilde\varepsilon} \rfloor$, such that $x_k = \log(\tilde\varepsilon/3) + k\sqrt{2\tilde\varepsilon/3}$ for $0 \le k \le m$. Construct the piece-wise affine function $\hat h$ on the subdivision $x_0 \le \cdots \le x_{m+1}$, which coincides with exp on $x_0, \cdots, x_{m+1}$ and is constant on $(-\infty, x_0]$. Lemma B.3 shows that $\hat h$ approximates exp on $\mathbb{R}_-$ within $\tilde\varepsilon/3$ for the infinity norm. Again, Lemma B.1 gives a representation of $\hat h$ as a 1-layer neural network with as many neurons as pieces in $\hat h$, i.e., $1 + \lceil \sqrt{\frac{3}{2\tilde\varepsilon}}\log\frac{3}{\tilde\varepsilon} \rceil$. The second layer is the union of 1-layer neural networks approximating exp within $\tilde\varepsilon/3$, one for each index $(l,i) \in U_n^{(1)}$. Therefore, the second layer contains $|U_n^{(1)}|\left(1 + \lceil \sqrt{\frac{3}{2\tilde\varepsilon}}\log\frac{3}{\tilde\varepsilon} \rceil\right)$ neurons. As shown in Bungartz & Griebel (2004), $$|U_n^{(1)}| = \sum_{i=0}^{n-1} 2^i \binom{d-1+i}{d-1} = (-1)^d + 2^n \sum_{i=0}^{d-1} \binom{n+d-1}{i}(-2)^{d-1-i} = 2^n\left(\frac{n^{d-1}}{(d-1)!} + O(n^{d-2})\right).$$ Therefore, the second layer has $O\left(2^n \frac{n^{d-1}}{(d-1)!}\,\tilde\varepsilon^{-1/2}\log\frac{1}{\tilde\varepsilon}\right)$ neurons. Finally, the output layer computes the weighted sum of the basis functions to approximate $f_n^{(1)}$. Denote by $\hat f_n^{(1)}$ the function computed by the constructed neural network (see Figure 5), i.e., $$\hat f_n^{(1)} = \sum_{(l,i) \in U_n^{(1)}} v_{l,i} \cdot \hat h\Big(\sum_{j=1}^d \hat g(x_j)\Big).$$

Let us analyze the approximation error of our neural network. The proof of Proposition 3.2 shows that the output of the two first layers, $\hat h(\sum_{j=1}^d \hat g(\cdot_j))$, approximates $\phi_{l,i}$ within $\tilde\varepsilon$. Therefore, we obtain $\|f_n^{(1)} - \hat f_n^{(1)}\|_\infty \le \tilde\varepsilon \sum_{(l,i) \in U_n^{(1)}} |v_{l,i}|$. We now use the approximation bounds from Theorem 2.2 on $f_n^{(1)}$: $$\|f - \hat f_n^{(1)}\|_\infty \le \|f - f_n^{(1)}\|_\infty + \|f_n^{(1)} - \hat f_n^{(1)}\|_\infty \le 2\cdot\frac{|f|_{2,\infty}}{8^d}\cdot 2^{-2n}\cdot A(d,n) + \frac{\varepsilon}{2|f|_{2,\infty}}\sum_{(l,i) \in U_n^{(1)}} |v_{l,i}|,$$ where $$\sum_{(l,i) \in U_n^{(1)}} |v_{l,i}| \le |f|_{2,\infty}\,2^{-d}\sum_{i \ge 0} 2^{-i}\binom{d-1+i}{d-1} \le |f|_{2,\infty}.$$ Let us now take $n = \min\left\{n : 2\frac{|f|_{2,\infty}}{8^d}2^{-2n}A(d,n) \le \frac{\varepsilon}{2}\right\}$. Then, using the above inequality shows that the neural network $\hat f_n^{(1)}$ approximates $f$ within $\varepsilon$ for the infinity norm. We will now estimate the number of neurons in each layer of this network. Note that $n \sim \frac{1}{2\log 2}\log\frac{1}{\varepsilon}$ and $$2^n \le \frac{4\cdot 8^{d/2}}{(2\log 2)^{\frac{d-1}{2}}\,((d-1)!)^{\frac{1}{2}}}\sqrt{\frac{|f|_{2,\infty}}{\varepsilon}}\left(\log\frac{1}{\varepsilon}\right)^{\frac{d-1}{2}}(1 + o(1)). \quad (3)$$ We can use the above estimates to show that the constructed neural network has at most $N_1$ (resp. $N_2$) neurons on the first (resp. second) layer, where $$N_1 \sim_{\varepsilon \to 0} \frac{8\sqrt{6}\,d^2\,8^{\frac{d}{2}}}{(2\log 2)^{\frac{d-1}{2}}\,d!^{\frac{1}{2}}}\cdot |f|_{2,\infty}\,\varepsilon^{-1}\left(\log\frac{1}{\varepsilon}\right)^{\frac{d+1}{2}}, \qquad N_2 \sim_{\varepsilon \to 0} \frac{4\sqrt{3}\,d^{\frac{3}{2}}\,8^{d/2}}{(2\log 2)^{\frac{3(d-1)}{2}}\,d!^{\frac{3}{2}}}\cdot |f|_{2,\infty}\,\varepsilon^{-1}\left(\log\frac{1}{\varepsilon}\right)^{\frac{3(d-1)}{2}+1}.$$ This proves the bound on the number of neurons. Finally, to prove the bound on the number of training parameters of the network, notice that the only parameters that depend on the function $f$ are those corresponding to the weights $v_{l,i}$ of the sparse grid decomposition. This number is $|U_n^{(1)}| = O(2^n n^{d-1}) = O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}})$.

B.4 PROOF OF THEOREM 3.4: GENERALIZATION TO GENERAL ACTIVATION FUNCTIONS

We start by formalizing the intuition that a sigmoid-like (resp. ReLU-like) function is a function that resembles the Heaviside (resp. ReLU) function when zooming out along the $x$ (resp. $x$ and $y$) axis.

Lemma B.6. Let $\sigma$ be a sigmoid-like activation with limit $a$ (resp. $b$) in $-\infty$ (resp. $+\infty$). For any $\delta > 0$ and error tolerance $\varepsilon > 0$, there exists a scaling $M > 0$ such that $x \mapsto \frac{\sigma(Mx) - a}{b - a}$ approximates the Heaviside function within $\varepsilon$ outside of $(-\delta, \delta)$ for the infinity norm. Furthermore, this function has values in $[0,1]$. Let $\sigma$ be a ReLU-like activation with asymptote $b\cdot x + c$ in $+\infty$.
For any $\delta > 0$ and error tolerance $\varepsilon > 0$, there exists a scaling $M > 0$ such that $x \mapsto \frac{\sigma(Mx)}{Mb}$ approximates the ReLU function within $\varepsilon$ for the infinity norm.

Proof. Let $\delta, \varepsilon > 0$ and let $\sigma$ be a sigmoid-like activation with limit $a$ (resp. $b$) in $-\infty$ (resp. $+\infty$). There exists $x_0 > 0$ sufficiently large such that $\frac{|\sigma(x) - a|}{b - a} \le \varepsilon$ for $x \le -x_0$ and $\frac{|\sigma(x) - b|}{b - a} \le \varepsilon$ for $x \ge x_0$. It now suffices to take $M := x_0/\delta$ to obtain the desired result. Now let $\sigma$ be a ReLU-like activation with oblique asymptote $bx$ in $+\infty$, where $b > 0$. Let $M$ be such that $|\sigma(x)| \le \varepsilon M b$ for $x \le 0$ and $|\sigma(x) - bx| \le \varepsilon M b$ for $x \ge 0$. One can check that $|\frac{\sigma(Mx)}{Mb}| \le \varepsilon$ for $x \le 0$, and $|\frac{\sigma(Mx)}{Mb} - x| \le \varepsilon$ for $x \ge 0$.

Using this approximation, we reduce the analysis of sigmoid-like (resp. ReLU-like) activations to the case of a Heaviside (resp. ReLU) activation to prove the desired theorem.

Proof of Theorem 3.4. We start with the class of ReLU-like activations. Let $\sigma$ be a ReLU-like activation function. Lemma B.6 shows that one can approximate the ReLU activation arbitrarily well with a linear map of $\sigma$. Take the neural network approximator $\hat f$ of a target function $f$ given by Theorem 3.1. At each node, we can add the linear map corresponding to $x \mapsto \frac{\sigma(Mx)}{Mb}$ with no additional neuron nor parameter. Because the approximation is continuous, we can take $M > 0$ arbitrarily large in order to approximate $\hat f$ with arbitrary precision on the compact set $[0,1]^d$. The same argument holds for sigmoid-like activation functions in order to reduce the problem to Heaviside activation functions. Although quadratic approximations of univariate functions similar to Lemma B.3 are not valid for general sigmoid-like activations (in particular the Heaviside), we can obtain an analogue of Lemma B.2, stated as Lemma B.7 in Appendix B.4.1. This results in an increased number of neurons. In order to approximate a target function $f \in X^{2,\infty}(\Omega)$, we use the same structure as the neural network constructed for ReLU activations, and use the same notation as in the proof of Theorem 3.1. The first difference lies in the approximation of $\log \phi_{l_j,i_j}$ in the first layer. Instead of using Corollary B.4, we use Lemma B.7. Therefore, $\frac{12d}{\tilde\varepsilon}\log\frac{3}{\tilde\varepsilon}$ neurons are needed to compute a $\tilde\varepsilon/(3d)$-approximation of $\max(\log \phi_{l_j,i_j}, \log(\tilde\varepsilon/3))$. The second difference is in the approximation of the exponential in the second layer. Again, we use Lemma B.7 to construct a $\tilde\varepsilon/3$-approximation of the exponential on $\mathbb{R}_-$ with $\frac{6}{\tilde\varepsilon}$ neurons for the second layer. As a result, the first layer contains at most $2^{n+2}\frac{3d^2}{\tilde\varepsilon}\log\frac{1}{\tilde\varepsilon}$ neurons for $\tilde\varepsilon$ sufficiently small, and the second layer contains $|U_n^{(1)}|\frac{6}{\tilde\varepsilon}$ neurons. Using the same estimates as in the proof of Theorem 3.1 shows that the constructed neural network has at most $N_1$ (resp. $N_2$) neurons on the first (resp. second) layer, where $$N_1 \sim_{\varepsilon \to 0} \frac{3\cdot 2^5\cdot d^{5/2}\,8^{\frac{d}{2}}}{(2\log 2)^{\frac{d-1}{2}}\,d!^{\frac{1}{2}}}\,|f|_{2,\infty}^{\frac{3}{2}}\,\varepsilon^{-\frac{3}{2}}\left(\log\frac{1}{\varepsilon}\right)^{\frac{d+1}{2}}, \qquad N_2 \sim_{\varepsilon \to 0} \frac{24\cdot d^{\frac{3}{2}}\,8^{\frac{d}{2}}}{(2\log 2)^{\frac{3(d-1)}{2}}\,d!^{\frac{3}{2}}}\cdot |f|_{2,\infty}^{\frac{3}{2}}\,\varepsilon^{-\frac{3}{2}}\left(\log\frac{1}{\varepsilon}\right)^{\frac{3(d-1)}{2}}.$$ This ends the proof.
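The rescaling trick of Lemma B.6, used throughout the proof above, is easy to see numerically. A small sketch (our code; tanh is one sigmoid-like activation, with limits $a = -1$ and $b = 1$):

```python
# Sketch of Lemma B.6: (sigma(Mx) - a)/(b - a) approximates the Heaviside
# function within eps outside (-delta, delta), for a large enough scaling M.
import numpy as np

a, b = -1.0, 1.0                        # limits of tanh at -inf / +inf
delta, eps = 0.05, 1e-3

x0 = np.arctanh(1.0 - eps * (b - a))    # beyond x0, the normalized error is <= eps
M = x0 / delta

x = np.concatenate([np.linspace(-5, -delta, 2000), np.linspace(delta, 5, 2000)])
approx = (np.tanh(M * x) - a) / (b - a)
heaviside = (x >= 0).astype(float)
print("max error outside (-delta, delta):", np.abs(approx - heaviside).max())  # <= eps
```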
B.4.1 PROOF OF LEMMA B.7

Lemma B.7. Let $\sigma$ be a sigmoid-like activation. Let $f : I \to [c,d]$ be a right-continuous increasing function, where $I$ is an interval, and let $\varepsilon > 0$. There exists a shallow neural network with activation $\sigma$, with at most $\frac{2(d-c)}{\varepsilon}$ neurons on a single layer, that approximates $f$ within $\varepsilon$ for the infinity norm.

Proof. The proof is analogous to that of Lemma B.2. Let $m = \lfloor \frac{d-c}{\varepsilon} \rfloor$. We define a regular subdivision of the image interval $c \le y_1 \le \ldots \le y_m \le d$, where $y_k = c + k\varepsilon$ for $k = 1, \ldots, m$; then, using the monotonicity of $f$, we can define a subdivision of $I$, $x_1 \le \ldots \le x_m$, where $x_k := \sup\{x \in I,\ f(x) \le y_k\}$. Let us first construct an approximating neural network $\hat f$ with the Heaviside activation. Consider $$\hat f(x) := y_1 + \varepsilon\sum_{i=1}^{m-1} \mathbf{1}\Big(x - \frac{x_i + x_{i+1}}{2} \ge 0\Big).$$ Let $x \in I$ and let $k$ be such that $x \in [x_k, x_{k+1}]$. By monotonicity, we have $y_k \le f(x) \le y_{k+1}$ and $y_k = y_1 + (k-1)\varepsilon \le \hat f(x) \le y_1 + k\varepsilon = y_{k+1}$. Hence, $\hat f$ approximates $f$ within $\varepsilon$ in infinity norm. Let $\delta < \min_{i=1,\ldots,m}(x_{i+1} - x_i)/4$ and let $\sigma$ be a general sigmoid-like activation with limits $a$ in $-\infty$ and $b$ in $+\infty$. Take $M$ given by Lemma B.6 such that $\frac{\sigma(Mx) - a}{b - a}$ approximates the Heaviside function within $1/m$ outside of $(-\delta, \delta)$ and has values in $[0,1]$. Using the same arguments as above, the function $$\hat f(x) := y_1 + \varepsilon\sum_{i=1}^{m-1} \frac{\sigma\big(Mx - M\frac{x_i + x_{i+1}}{2}\big) - a}{b - a}$$ approximates $f$ within $2\varepsilon$ for the infinity norm. The proof follows.

C PROOFS OF SECTION 4

C.1 PROOF OF THEOREM 4.1: APPROXIMATING KOROBOV FUNCTIONS WITH DEEP NEURAL NETWORKS

Let $\varepsilon > 0$. We construct a structure similar to the network defined in Theorem 3.1 by using the sparse grid approximation of Subsection 2.2. For a given $n$, let $f_n^{(1)}$ be the projection of $f$ on the approximation space $V_n^{(1)}$ (defined in Subsection 2.2) and $U_n^{(1)}$ (defined in equation 2) the set of indices $(l,i)$ of basis functions present in $V_n^{(1)}$. Recall that $f_n^{(1)}$ can be uniquely decomposed as $f_n^{(1)}(x) = \sum_{(l,i) \in U_n^{(1)}} v_{l,i}\,\phi_{l,i}(x)$, where $\phi_{l,i} = \prod_{j=1}^d \phi_{l_j,i_j}$ are the basis functions defined in Subsection 2.2. In the first layer, we compute exactly the piece-wise linear hat functions $\phi_{l_j,i_j}$; then, in the next set of layers, we use the product-approximating neural network given by Proposition 4.2 to compute the basis functions $\phi_{l,i} = \prod_{j=1}^d \phi_{l_j,i_j}$ (see Figure 3). The output layer computes the weighted sum $\sum_{(l,i) \in U_n^{(1)}} v_{l,i}\,\phi_{l,i}(x)$ and outputs $f_n^{(1)}$. Because the approximation has arbitrary precision, we can choose the network of Proposition 4.2 such that the resulting network $\hat f$ satisfies $\|\hat f - f_n^{(1)}\|_\infty \le \varepsilon/2$. More precisely, as $\phi_{l_j,i_j}$ is piece-wise linear with four pieces, we can compute it exactly with four neurons with ReLU activation on a single layer (Lemma B.1). Our first layer is composed of the union of all these ReLU neurons, for the $d(2^n - 1)$ indices $(l_j, i_j)$ such that $1 \le j \le d$, $1 \le l_j \le n$, $1 \le i_j \le 2^{l_j}$ and $i_j$ odd. Therefore, it contains at most $d\,2^{n+2}$ neurons with ReLU activation. The second set of layers is composed of the union of product-approximating neural networks computing $\phi_{l,i}$ for all $(l,i) \in U_n^{(1)}$. This set of layers contains $\lceil \log_2 d \rceil$ layers with activation $\sigma$ and at most $|U_n^{(1)}| \cdot 8d$ neurons. The output of these two sets of layers is an approximation of the basis functions $\phi_{l,i}$ with arbitrary precision. Consequently, the final output of the complete neural network is an approximation of $f_n^{(1)}$ with arbitrary precision. Similarly to the proof of Theorem 3.1, we can choose the smallest $n$ such that $\|f - f_n^{(1)}\|_\infty \le \varepsilon/2$ (see equation 3 for details). Finally, the network has depth at most $\lceil \log_2 d \rceil + 2$ and $N$ neurons, where $$N = 8d\,|U_n^{(1)}| \sim_{\varepsilon \to 0} \frac{2^5\cdot d^{5/2}\,8^{\frac{d}{2}}}{(2\log 2)^{\frac{3(d-1)}{2}}\,d!^{\frac{3}{2}}}\cdot \sqrt{\frac{|f|_{2,\infty}}{\varepsilon}}\left(\log\frac{1}{\varepsilon}\right)^{\frac{3(d-1)}{2}}.$$ The parameters of the network that depend on the function are exactly the coefficients $v_{l,i}$ of the sparse grid approximation. Hence, the network has $O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}})$ training parameters.
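The binary-tree product of Proposition 4.2, which powers the construction above, can be prototyped directly. The sketch below is ours: it uses the polarization identity $xy = ((x+y)^2 - x^2 - y^2)/2$ with the $C^2$ activation $\sigma(t) = t^2$, for which pairwise products are exact; the paper's proof approximates squaring with a general non-linear $C^2$ activation instead.

```python
# Sketch of a binary-tree product network (in the spirit of Proposition 4.2).
# Each level multiplies pairs via xy = ((x+y)^2 - x^2 - y^2) / 2, i.e. three
# neurons with activation sigma(t) = t^2. Depth is ceil(log2(d)).
import numpy as np

sigma = lambda t: t * t       # exactly quadratic here; a general C^2 sigma is approximated

def pair_product(u, v):
    # 3 sigma-neurons computing u*v exactly through the polarization identity.
    return 0.5 * (sigma(u + v) - sigma(u) - sigma(v))

def tree_product(x):
    vals = list(x)
    while len(vals) > 1:      # one network layer per tree level
        if len(vals) % 2:
            vals.append(1.0)  # pad with the multiplicative identity
        vals = [pair_product(vals[i], vals[i + 1]) for i in range(0, len(vals), 2)]
    return vals[0]

x = np.random.default_rng(1).uniform(0, 1, size=7)
print(tree_product(x), np.prod(x))   # identical up to float round-off
```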
D PROOFS OF SECTION 5

D.1 PROOF OF THEOREM 5.2: NEAR-OPTIMALITY OF NEURAL NETWORKS FOR KOROBOV FUNCTIONS

Our goal is to define an appropriate subspace $X_{N+1}$ in order to get a good lower bound on the Bernstein width $b_N(K)_X$, defined in equation 5, which in turn provides a lower bound on the approximation error (Theorem 5.1). To do so, we introduce the Deslauriers-Dubuc interpolet $\phi$
1. What is the focus of the paper regarding neural network approximation? 2. What are the strengths of the proposed approach, particularly in terms of representation power and parameter usage? 3. Do you have any questions regarding the comparison between different function spaces, such as Korobov, Sobolev, and bandlimited functions? 4. How does the paper address the issue of weight magnitude in neural networks, and are there any implications for practical applications? 5. Are there any minor issues or suggestions you have for improving the readability or content of the paper?
Summary Of The Paper Review
Summary Of The Paper

This paper studies the ability of shallow and deep neural networks to approximate Korobov functions, and analyzes their representation power in terms of the number of parameters used. The authors first show that 2-layer neural networks using common activation functions can approximate any Korobov function within eps error in infinity norm using O(eps^{-1/2} (log 1/eps)^{3(d-1)/2}) parameters. This result improves the existing upper bound of O(eps^{-d/r}) parameters for approximating Sobolev functions in W^{r,p} using 1-hidden-layer neural networks, hence reducing the curse of dimensionality. For deep neural networks, this paper provides a new result, showing that neural networks with depth O(log d) and O(eps^{-1/2} (log 1/eps)^{3(d-1)/2}) neurons approximate Korobov functions in X^{2, infinity} within eps error in L-infinity norm. This improves the result of [1] by requiring a depth independent of the required accuracy. However, the current work requires a C^2 and non-linear activation function, and is hence not applicable to ReLU. Finally, the authors show that any continuous function approximator for the Korobov space requires O(eps^{-1/2} (log 1/eps)^{(d-1)/2}) parameters for achieving error eps, hence matching the previous upper bound for NNs up to a factor (log 1/eps)^{d-1}.

[1] Hadrien Montanelli, Haizhao Yang, and Qiang Du. Deep ReLU networks overcome the curse of dimensionality for bandlimited functions.

Review

I found the paper to be very well written and easy to follow. The authors provide simple constructive proofs of the theorems, whose high-level ideas are well explained in the main text. The discussion about the difference between Korobov and Sobolev spaces (Appendix A) is greatly appreciated. You mentioned there that functions in Sobolev spaces can have more oscillations in various dimensions, and that this can be the reason why they seem to be harder to approximate. It would be interesting to have a similar discussion comparing the Korobov space with the space of bandlimited functions, which seems to avoid the curse of dimensionality. In particular, what do we intuitively gain in terms of expression power when extending the space of functions from bandlimited functions to the Korobov space?

One question that would be interesting for practical purposes: By increasing the weights of the NN, one can approximate a function with any Korobov norm. In the proposed construction, what is the maximal weight magnitude that is used (in terms of dimension, accuracy and the target function's Korobov norm)? If the weight magnitude is restricted to a certain value, is it possible to design a modified construction that accounts for such a constraint, and how would the number of required neurons/parameters be affected? This question is of interest especially in scenarios where NNs are trained using weight decay or other regularisation.

Minor: Pages 23-28 contain the formatting instructions, which are of minor interest... p.11: "Sobolov". p.17: "This proves the bound on the number of neutrons."
ICLR
Title Shallow and Deep Networks are Near-Optimal Approximators of Korobov Functions

Abstract In this paper, we analyze the number of neurons and training parameters that a neural network needs to approximate multivariate functions of bounded second mixed derivatives, the Korobov functions. We prove upper bounds on these quantities for shallow and deep neural networks, drastically lessening the curse of dimensionality. Our bounds hold for general activation functions, including ReLU. We further prove that these bounds nearly match the minimal number of parameters any continuous function approximator needs to approximate Korobov functions, showing that neural networks are near-optimal function approximators.

1 INTRODUCTION

Neural networks have known tremendous success in many applications such as computer vision and pattern detection (Krizhevsky et al., 2017; Silver et al., 2016). A natural question is how to explain their practical success theoretically. Neural networks are shown to be universal (Hornik et al., 1989): any Borel-measurable function can be approximated arbitrarily well by a neural network with a sufficient number of neurons. Furthermore, universality holds for networks with as little as one hidden layer and reasonable activation functions. However, these results do not specify the needed number of neurons and parameters to train. If these numbers are unreasonably high, the universality of neural networks would not explain their practical success. We are interested in evaluating the number of neurons and training parameters needed to approximate a given function within $\varepsilon$ with a neural network. An interesting question is how these numbers scale with $\varepsilon$ and the dimensionality of the problem, i.e., the number of variables. Mhaskar (1996) showed that any function of the Sobolev space of order $r$ and dimension $d$ can be approximated within $\varepsilon$ with a 1-layer neural network with $O(\varepsilon^{-\frac{d}{r}})$ neurons and an infinitely differentiable activation function. This bound exhibits the curse of dimensionality: the number of neurons needed for an $\varepsilon$-approximation scales exponentially in the dimension of the problem $d$. Thus, Mhaskar's bound raises the question of whether this curse is inherent to neural networks. Towards answering this question, DeVore et al. (1989) proved that any continuous function approximator (see Section 5) that approximates all Sobolev functions of order $r$ and dimension $d$ within $\varepsilon$ needs at least $\Theta(\varepsilon^{-\frac{d}{r}})$ parameters. This result meets Mhaskar's bound and confirms that neural networks cannot escape the curse of dimensionality for the Sobolev space. A main question is then for which sets of functions neural networks can break this curse of dimensionality.

One way to circumvent the curse of dimensionality is to restrict considerably the considered space of functions and focus on specific structures adapted to neural networks. For example, Mhaskar et al. (2016) showed that compositional functions with regularity $r$ can be approximated within $\varepsilon$ by deep neural networks with $O(d\cdot\varepsilon^{-\frac{2}{r}})$ neurons. Other structural constraints have been considered for compositions of functions (Kohler & Krzyżak, 2016), piecewise smooth functions (Petersen & Voigtlaender, 2018; Imaizumi & Fukumizu, 2019), or structures on the data space, e.g., data lying on a manifold (Mhaskar, 2010; Nakada & Imaizumi, 2019; Schmidt-Hieber, 2019).
Approximation bounds have also been obtained for function approximation from data under smoothness constraints (Kohler & Krzyżak, 2005; Kohler & Mehnert, 2011), and specifically on mixed smooth Besov spaces, which are known to circumvent the curse of dimensionality (Suzuki, 2018). Another example is the class of Sobolev functions of order $d/\alpha$ and dimension $d$, for which Mhaskar's bound becomes $O(\varepsilon^{-\alpha})$. Recently, Montanelli et al. (2019) considered bandlimited functions and showed that they can be approximated within $\varepsilon$ by deep networks with depth $O((\log\frac{1}{\varepsilon})^2)$ and $O(\varepsilon^{-2}(\log\frac{1}{\varepsilon})^2)$ neurons. Weinan et al. (2019) showed that the closure of the space of 2-layer neural networks with specific regularity (namely a restriction on the size of the network's weights) is the Barron space. They further show that Barron functions can be approximated within $\varepsilon$ with 2-layer networks with $O(\varepsilon^{-2})$ neurons. A similar line of work restricts the function space with spectral conditions, to write functions as limits of shallow networks (Barron, 1994; Klusowski & Barron, 2016; 2018). In this work, we are interested in more general and generic spaces of functions. Our space of interest is the space of multivariate functions of bounded second mixed derivatives, the Korobov space. This space is included in the Sobolev space but is reasonably large and general. The Korobov space presents two motivations. First, it is a natural candidate for a large and general space included in the Sobolev space where numerical approximation methods can overcome the curse of dimensionality to some extent (see Section 2.1). Second, Korobov spaces are practically useful for solving partial differential equations (Korobov, 1959) and have been used for high-dimensional function approximation (Zenger & Hackbusch, 1991; Zenger, 1991). Recently, Montanelli & Du (2019) showed that deep neural networks with depth $O(\log\frac{1}{\varepsilon})$ and $O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}+1})$ neurons can approximate Korobov functions within $\varepsilon$, lessening the curse of dimensionality for deep neural networks asymptotically in $\varepsilon$. While they used deep structures to prove their result, the question of whether shallow neural networks also break the curse of dimensionality for the Korobov space remains open. In this paper, we study deep and shallow neural networks' approximation power for the Korobov space and make the following contributions:

1. Representation power of shallow neural networks. We prove that any Korobov function can be approximated within $\varepsilon$ by a 2-layer neural network with ReLU activation, $O(\varepsilon^{-1}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}+1})$ neurons and $O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}})$ training parameters (Theorem 3.1). We further extend this result to a large class of commonly used activation functions (Theorem 3.4). Asymptotically in $\varepsilon$, our bound can be written as $O(\varepsilon^{-1-\delta})$ for all $\delta > 0$, and in that sense breaks the curse of dimensionality for shallow neural networks.

2. Representation power of deep neural networks. We show that any function of the Korobov space can be approximated within $\varepsilon$ by a deep neural network of depth $\lceil\log_2(d)\rceil + 1$, independent of $\varepsilon$, with non-linear $C^2$ activation function, $O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}})$ neurons and $O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}})$ training parameters (Theorem 4.1). This result improves that of Montanelli & Du (2019), who constructed an approximating neural network with larger depth $O(\log\frac{1}{\varepsilon}\log d)$, increasing as $\varepsilon$ decreases, and a larger number of neurons $O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}+1})$. However, they used the ReLU activation function.

3.
Near-optimality of neural networks as function approximators. Under the continuous function approximator model introduced by DeVore et al. (1989), we prove that any continuous function approximator needs $\Theta(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{d-1}{2}})$ parameters to approximate Korobov functions within $\varepsilon$ (Theorem 5.2). This lower bound nearly matches our established upper bounds on the number of training parameters needed by deep and shallow neural networks to approximate functions of the Korobov space, proving that they are near-optimal function approximators of the Korobov space.

Table 1 summarizes our new bounds and existing bounds on shallow and deep neural network approximation power for the Korobov space, the Sobolev space and bandlimited functions. Our proofs are constructive and give explicit structures to construct such neural networks with ReLU and general activation functions. Our constructions rely on the sparse grid approximation introduced by Zenger (1991) and studied in detail in Bungartz (1992); Bungartz & Griebel (2004). Specifically, we use the sparse grid approach to approximate smooth functions by sums of products, then construct neural networks which approximate this structure. A key difficulty is to approximate the product function. In particular, in the case of shallow neural networks, we propose, to the best of our knowledge, the first architecture approximating the product function with a polynomial number of neurons. To derive our lower bound on the number of parameters needed to approximate the Korobov space, we construct a linear subspace of the Korobov space with large Bernstein width. This subspace is then used to apply a general lower bound on nonlinear approximation derived by DeVore et al. (1989). The rest of the paper is structured as follows. In Section 2, we formalize our objective and introduce the sparse grids approach. In Section 3 (resp. 4), we prove our bounds on the number of neurons and training parameters for Korobov function approximation with shallow (resp. deep) networks. Finally, we formalize in Section 5 the notion of optimal continuous function approximators and prove our novel near-optimality result.

2 PRELIMINARIES

In this work, we consider feed-forward neural networks, using a linear output neuron and a non-linear activation function $\sigma : \mathbb{R} \to \mathbb{R}$ for the other neurons, such as the popular rectified linear unit (ReLU) $\sigma(x) = \max(x, 0)$, the sigmoid $\sigma(x) = (1 + e^{-x})^{-1}$, or the Heaviside function $\sigma(x) = \mathbf{1}_{\{x \ge 0\}}$. Let $d \ge 1$ be the dimension of the input. We define a 1-hidden-layer network with $N$ neurons as $x \mapsto \sum_{k=1}^N u_k\,\sigma(w_k^\top x + b_k)$, where $w_k \in \mathbb{R}^d$ and $b_k \in \mathbb{R}$ for $k = 1, \ldots, N$ are parameters. A neural network with several hidden layers is obtained by feeding the outputs of a given layer as inputs to the next layer. We study the expressive power of neural networks, i.e., the ability to approximate a target function $f : \mathbb{R}^d \to \mathbb{R}$ with as few neurons as possible, on the unit hyper-cube $\Omega := [0,1]^d$. Another relevant metric is the number of parameters that need to be trained to approximate the function, i.e., the number of parameters of the approximating network ($u_k$, $w_k$ and $b_k$) that depend on the function to approximate. We adopt the $L^\infty$ norm as a measure of approximation error. We now define some notation necessary to introduce our function spaces of interest. For an integer $r$, we denote by $C^r$ the space of one-dimensional functions that are $r$ times differentiable with continuous derivatives. In our analysis, we consider functions $f$ with bounded mixed derivatives.
For a multi-index $\alpha \in \mathbb{N}^d$, the derivative of order $\alpha$ is $D^\alpha f := \frac{\partial^{|\alpha|_1} f}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}$, where $|\alpha|_1 = \sum_{i=1}^d |\alpha_i|$. Two common function spaces on a compact $\Omega \subset \mathbb{R}^d$ are the Sobolev spaces $W^{r,p}(\Omega)$ of functions having weak partial derivatives up to order $r$ in $L^p(\Omega)$, and the Korobov spaces $X^{r,p}(\Omega)$ of functions vanishing at the boundary and having weak mixed derivatives up to order $r$ in $L^p(\Omega)$: $$W^{r,p}(\Omega) = \{f \in L^p(\Omega) : D^\alpha f \in L^p(\Omega),\ |\alpha|_1 \le r\}, \qquad X^{r,p}(\Omega) = \{f \in L^p(\Omega) : f|_{\partial\Omega} = 0,\ D^\alpha f \in L^p(\Omega),\ |\alpha|_\infty \le r\},$$ where $\partial\Omega$ denotes the boundary of $\Omega$, and $|\alpha|_1 = \sum_{i=1}^d |\alpha_i|$ and $|\alpha|_\infty = \sup_{i=1,\ldots,d} |\alpha_i|$ are respectively the $L^1$ and infinity norms. Note that the Korobov spaces $X^{r,p}(\Omega)$ are subsets of the Sobolev spaces $W^{r,p}(\Omega)$. For $p = \infty$, the usual norms on these spaces are given by $$|f|_{W^{r,p}(\Omega)} := \max_{|\alpha|_1 \le r} \|D^\alpha f\|_\infty, \qquad |f|_{X^{r,p}(\Omega)} := \max_{|\alpha|_\infty \le r} \|D^\alpha f\|_\infty.$$ For simplicity, we will write $|\cdot|_{2,\infty}$ for $|\cdot|_{X^{2,\infty}}$. We focus our analysis on approximating functions of the Korobov space $X^{2,\infty}(\Omega)$, for which the curse of dimensionality is drastically lessened, and we show that neural networks are near-optimal. Intuitively, a key difference compared to the Sobolev space is that Korobov functions do not have high-frequency oscillations in all directions at a time. Such functions may require an exponential number of neurons (Telgarsky, 2016) and are one of the main difficulties for Sobolev space approximation, which therefore exhibits the curse of dimensionality (DeVore et al., 1989). On the contrary, the Korobov space prohibits such behaviour by ensuring that functions can be differentiated twice in each dimension simultaneously. Further discussion and concrete examples are given in Appendix A.

2.1 THE CURSE OF DIMENSIONALITY

We adopt the point of view of asymptotic results in $\varepsilon$ (or equivalently, in the number of neurons), which is a well-established setting in the neural network representation power literature (Mhaskar, 1996; Bungartz & Griebel, 2004; Yarotsky, 2017; Montanelli & Du, 2019) and the numerical analysis literature (Novak, 2006). In the rest of the paper, we use the $O$ notation, which hides constants in $d$. For each result, full dependencies on $d$ are provided in the appendix. Previous efforts to quantify the number of neurons needed to approximate large general classes of functions showed that neural networks and most classical functional approximation schemes exhibit the curse of dimensionality. For example, for Sobolev functions, Mhaskar proved the following approximation bound.

Theorem 2.1 (Mhaskar (1996)). Let $p, r \ge 1$, and let $\sigma : \mathbb{R} \to \mathbb{R}$ be an infinitely differentiable activation function, non-polynomial on any interval of $\mathbb{R}$. Let $\varepsilon > 0$ be sufficiently small. For any $f \in W^{r,p}$, there exists a shallow neural network with one hidden layer, activation function $\sigma$, and $O(\varepsilon^{-\frac{d}{r}})$ neurons, approximating $f$ within $\varepsilon$ for the infinity norm.

Therefore, the approximation of Sobolev functions by neural networks suffers from the curse of dimensionality, since the number of neurons needed grows exponentially with the input space dimension $d$. This curse is not due to poor performance of neural networks but rather to the choice of the Sobolev space. DeVore et al. (1989) proved that any learning algorithm with continuous parameters needs at least $\Theta(\varepsilon^{-\frac{d}{r}})$ parameters to approximate the Sobolev space $W^{r,p}$. This shows that the class of Sobolev functions suffers inherently from the curse of dimensionality, and no continuous function approximator can overcome it. We detail this notion later in Section 5.
The natural question is whether there exists a reasonable and sufficiently large class of functions for which there is no inherent curse of dimensionality. Instead of the Sobolev space, we aim to add more regularity to overcome the curse of dimensionality while preserving a reasonably large space. The Korobov space $X^{2,\infty}(\Omega)$, the space of functions with bounded mixed derivatives, is a natural candidate: it is known in the numerical analysis community as a reasonably large space where numerical approximation methods can lessen the curse of dimensionality (Bungartz & Griebel, 2004). Korobov functions were introduced for solving partial differential equations (Korobov, 1959; Smolyak, 1963) and have since been used extensively for high-dimensional function approximation (Zenger & Hackbusch, 1991; Bungartz & Griebel, 2004). This space of functions is included in the Sobolev space, but is still reasonably large, as the regularity condition concerns only second-order derivatives. Two questions are of interest. First, how many neurons and training parameters does a neural network need to approximate any Korobov function within $\varepsilon$ in the $L^\infty$ norm? Second, how do neural networks perform compared to the optimal theoretical rates for Korobov spaces?

2.2 SPARSE GRIDS AND HIERARCHICAL BASIS

In this subsection, we introduce sparse grids, which will be key in our neural network constructions. These were introduced by Zenger (1991) and extensively used for high-dimensional function approximation. We refer to Bungartz & Griebel (2004) for a thorough review of the topic. The goal is to define discrete approximation spaces with basis functions. Instead of a classical uniform grid partition of the hyper-cube $[0,1]^d$ involving $n^d$ components, where $n$ is the number of partitions in each coordinate, the sparse grid approach uses a smarter partitioning of the cube, preserving the approximation accuracy while drastically reducing the number of components of the grid. The construction involves a 1-dimensional mother function $\phi$ which is used to generate all the functions of the basis. For example, a simple choice for the building block $\phi$ is the standard hat function $\phi(x) := (1 - |x|)_+$. The hat function is not the only possible choice. In the later proofs, we will specify which mother function is used: in our case, either the interpolets of Deslauriers & Dubuc (1989) (which we define rigorously later in our proofs) or the hat function $\phi$, which can be seen as the Deslauriers-Dubuc interpolet of order 1. These more elaborate mother functions enjoy more smoothness while essentially preserving the approximation power. Assume the mother function has support in $[-k, k]$. For $j = 1, \ldots, d$, it can be used to generate a set of local functions $\phi_{l_j, i_j} : [0,1] \to \mathbb{R}$ for all $l_j \ge 1$ and $1 \le i_j \le 2^{l_j} - 1$, with support $\left[\frac{i_j - k}{2^{l_j}}, \frac{i_j + k}{2^{l_j}}\right]$, as follows: $$\phi_{l_j, i_j}(x) := \phi(2^{l_j} x - i_j), \quad x \in [0,1].$$ We then define a basis of $d$-dimensional functions by taking the tensor product of these 1-dimensional functions. For all $l, i \in \mathbb{N}^d$ with $l \ge 1$ and $1 \le i \le 2^l - 1$, where $2^l$ denotes $(2^{l_1}, \ldots, 2^{l_d})$, define $\phi_{l,i}(x) := \prod_{j=1}^d \phi_{l_j, i_j}(x_j)$ for $x \in \mathbb{R}^d$. For a fixed $l \in \mathbb{N}^d$, we will consider the hierarchical increment space $W_l$, which is the subspace spanned by the functions $\{\phi_{l,i} : 1 \le i \le 2^l - 1\}$, as illustrated in Figure 1: $$W_l := \mathrm{span}\{\phi_{l,i},\ 1 \le i \le 2^l - 1,\ i_j \text{ odd for all } 1 \le j \le d\}.$$ Note that in the hierarchical increment $W_l$, all basis functions have disjoint supports. Also, Korobov functions $X^{2,p}(\Omega)$ can be expressed uniquely in this hierarchical basis: there is a unique representation of $u \in X^{2,p}(\Omega)$ as $u(x) = \sum_{l,i} v_{l,i}\,\phi_{l,i}(x)$, where the sum is taken over all multi-indices $l \ge 1$ and $1 \le i \le 2^l - 1$ where all components of $i$ are odd. In particular, all basis functions are linearly independent.
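The hierarchical basis just described takes only a few lines to implement. Below is a small sketch (our code) of the mother hat function, its dilates $\phi_{l,i}(x) = \phi(2^l x - i)$, and the tensor-product basis $\phi_{l,i}$.

```python
# Sketch of the hierarchical hat basis of Subsection 2.2 (our code).
import numpy as np

def hat(t):                        # mother function phi(t) = (1 - |t|)_+
    return np.maximum(1.0 - np.abs(t), 0.0)

def phi_1d(l, i, x):               # phi_{l,i}(x) = phi(2^l x - i), support [(i-1)/2^l, (i+1)/2^l]
    return hat(2.0 ** l * x - i)

def phi_nd(l, i, x):               # tensor-product basis for multi-indices l, i
    return np.prod([phi_1d(lj, ij, xj) for lj, ij, xj in zip(l, i, x)])

print(phi_1d(2, 1, np.array([0.0, 0.25, 0.5])))        # [0., 1., 0.]: peak at i/2^l = 1/4
print(phi_nd((1, 2), (1, 3), np.array([0.5, 0.75])))   # 1.0 at its grid point
# Within a fixed level l, basis functions with odd i have disjoint supports.
```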
Notice that this sum is infinite; the objective is now to define a finite-dimensional subspace of $X^{2,p}(\Omega)$ that will serve as an approximation space. Sparse grids use a carefully chosen subset of the hierarchical basis functions to construct the approximation space $V_n^{(1)} := \bigoplus_{|l|_1 \le n+d-1} W_l$. When $\phi$ is the hat function, Bungartz and Griebel (Bungartz & Griebel, 2004) showed that this choice of approximating space leads to a good approximation error.

Theorem 2.2 (Bungartz & Griebel (2004)). Let $f \in X^{2,\infty}(\Omega)$ and let $f_n^{(1)}$ be the projection of $f$ on the subspace $V_n^{(1)}$. We have $\|f - f_n^{(1)}\|_\infty = O(2^{-2n} n^{d-1})$. Furthermore, if $v_{l,i}$ denotes the coefficient of $\phi_{l,i}$ in the decomposition of $f_n^{(1)}$ in $V_n^{(1)}$, then we have the upper bound $|v_{l,i}| \le 2^{-d}\,2^{-2|l|_1}\,|f|_{2,\infty}$, for all $l, i \in \mathbb{N}^d$ with $|l|_1 \le n+d-1$, $1 \le i \le 2^l - 1$ where $i$ has odd components.

3 THE REPRESENTATION POWER OF SHALLOW NEURAL NETWORKS

It has recently been shown that deep neural networks, with depth scaling with $\varepsilon$, lessen the curse of dimensionality on the number of neurons needed to approximate the Korobov space (Montanelli & Du, 2019). However, to the best of our knowledge, the question of whether shallow neural networks with fixed universal depth, independent of $\varepsilon$ and $d$, escape the curse of dimensionality as well for the Korobov space remains open. We settle this question by proving that shallow neural networks also lessen the curse of dimensionality for the Korobov space.

Theorem 3.1. Let $\varepsilon > 0$. For all $f \in X^{2,\infty}(\Omega)$, there exists a neural network with 2 layers, ReLU activation, $O(\varepsilon^{-1}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}+1})$ neurons, and $O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}})$ training parameters that approximates $f$ within $\varepsilon$ for the infinity norm.

In order to prove Theorem 3.1, we construct the approximating neural network explicitly. The first step is to construct a neural network architecture with two layers and $O(d^{\frac{3}{2}}\varepsilon^{-\frac{1}{2}}\log\frac{1}{\varepsilon})$ neurons that approximates the product function $p : x \in [0,1]^d \mapsto \prod_{i=1}^d x_i$ within $\varepsilon$ for all $\varepsilon > 0$.

Proposition 3.2. For all $\varepsilon > 0$, there exists a neural network with depth 2, ReLU activation and $O(d^{\frac{3}{2}}\varepsilon^{-\frac{1}{2}}\log\frac{1}{\varepsilon})$ neurons that approximates the product function $p : x \in [0,1]^d \mapsto \prod_{i=1}^d x_i$ within $\varepsilon$ for the infinity norm.

Sketch of proof. The proof builds upon the observation that $p(x) = \exp(\sum_{i=1}^d \log x_i)$. We construct an approximating 2-layer neural network where the first layer approximates $\log x_i$ for $1 \le i \le d$, and the second layer approximates the exponential. We illustrate the construction in Figure 2. More precisely, fix $\varepsilon > 0$. Consider the function $h_\varepsilon : x \in [0,1] \mapsto \max(\log x, \log\varepsilon)$. We approximate $h_\varepsilon$ within $\frac{\varepsilon}{d}$ by a piece-wise affine function with $O(d^{\frac{1}{2}}\varepsilon^{-\frac{1}{2}}\log\frac{1}{\varepsilon})$ pieces, then represent this piece-wise affine function by a single-layer neural network $\hat h_\varepsilon$ with the same number of neurons as the number of pieces (Lemma B.1, Appendix B.1). This 1-layer network then satisfies $\|h_\varepsilon - \hat h_\varepsilon\|_\infty \le \frac{\varepsilon}{d}$. The first layer of our final network is the union of $d$ copies of $\hat h_\varepsilon$: one for each dimension $i$, approximating $\log x_i$. Similarly, consider the exponential $g : x \in \mathbb{R}_- \mapsto e^x$. We construct a 1-layer neural network $\hat g_\varepsilon$ with $O(\varepsilon^{-\frac{1}{2}}\log\frac{1}{\varepsilon})$ neurons such that $\|g - \hat g_\varepsilon\|_\infty \le \varepsilon$. This will serve as the second layer. Formally, the constructed network $\hat p_\varepsilon$ is $\hat p_\varepsilon = \hat g_\varepsilon\big(\sum_{i=1}^d \hat h_\varepsilon(x_i)\big)$.
This 2-layer neural network has $O(d^{\frac{3}{2}}\varepsilon^{-\frac{1}{2}}\log\frac{1}{\varepsilon})$ neurons and verifies $\|\hat p_\varepsilon - p\|_\infty \le \varepsilon$. We use this result to prove Theorem 3.1 and show that we can approximate any Korobov function $f \in X^{2,\infty}(\Omega)$ within $\varepsilon$ with a 2-layer neural network of $O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}})$ neurons. Consider the sparse grid construction of the approximating space $V_n^{(1)}$ using the standard hat function as mother function to create the hierarchical basis $W_l$ (introduced in Section 2.2). The key idea is to construct a shallow neural network approximating the sparse grid approximation and then use the result of Theorem 2.2 to derive the approximation error. Let $f_n^{(1)}$ be the projection of $f$ on the subspace $V_n^{(1)}$ defined in Section 2.2. $f_n^{(1)}$ can be written as $f_n^{(1)}(x) = \sum_{(l,i)\in U_n^{(1)}} v_{l,i}\,\phi_{l,i}(x)$, where $U_n^{(1)}$ contains the indices $(l,i)$ of basis functions present in $V_n^{(1)}$. We can use Theorem 2.2 and choose $n$ carefully such that $f_n^{(1)}$ approximates $f$ within $\varepsilon$ for the $L^\infty$ norm. The goal is now to approximate $f_n^{(1)}$ with a shallow neural network. Note that the basis functions can be written as a product of univariate functions, $\phi_{l,i} = \prod_{j=1}^d \phi_{l_j,i_j}$. We can therefore use a structure similar to the product approximation of Proposition 3.2 to approximate the basis functions. Specifically, the first layer approximates the $d(2^n - 1) = O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{d-1}{2}})$ terms $\log\phi_{l_j,i_j}$ necessary to construct the basis functions of $V_n^{(1)}$, and a second layer approximates the exponential in order to obtain approximations of the $O(2^n n^{d-1}) = O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}})$ basis functions of $V_n^{(1)}$. We provide a detailed figure illustrating the construction, Figure 5, in Appendix B.3.

The shallow network that we constructed in Theorem 3.1 uses the ReLU activation function. We extend this result to a larger class of activation functions which includes commonly used ones.

Definition 3.3. A sigmoid-like activation function $\sigma : \mathbb{R} \to \mathbb{R}$ is a non-decreasing function having finite limits in $\pm\infty$. A ReLU-like activation function $\sigma : \mathbb{R} \to \mathbb{R}$ is a function having a horizontal asymptote in $-\infty$, i.e., $\sigma$ is bounded on $\mathbb{R}_-$, and an affine (non-horizontal) asymptote in $+\infty$, i.e., there exists $b > 0$ such that $\sigma(x) - bx$ is bounded on $\mathbb{R}_+$.

Most common activation functions fall into these classes. Examples of sigmoid-like activations include the Heaviside, logistic, tanh, arctan and softsign activations, while ReLU-like activations include the ReLU, ISRLU, ELU and soft-plus activations. We extend Theorem 3.1 to all these activations.

Theorem 3.4. For any approximation tolerance $\varepsilon > 0$ and any $f \in X^{2,\infty}(\Omega)$, there exists a neural network with depth 2 and $O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}})$ training parameters that approximates $f$ within $\varepsilon$ for the infinity norm, with $O(\varepsilon^{-1}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}+1})$ (resp. $O(\varepsilon^{-\frac{3}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}})$) neurons for a ReLU-like (resp. sigmoid-like) activation.

We note that these results can be further extended to more general Korobov spaces $X^{r,p}$. Indeed, the main dependence of our neural network architectures on the parameters $r$ and $p$ arises from the sparse grid approximation. Bungartz & Griebel (2004) show that results similar to Theorem 2.2 can be extended to various values of $r$, $p$ and different error norms with a similar sparse grid construction. For instance, we can use these results combined with our proposed architecture to show that the Korobov space $X^{r,\infty}$ can be approximated in infinity norm by neural networks with $O(\varepsilon^{-\frac{1}{r}}(\log\frac{1}{\varepsilon})^{\frac{r+1}{r}(d-1)})$ training parameters and the same number of neurons up to a polynomial factor in $\varepsilon$.
4 THE REPRESENTATION POWER OF DEEP NEURAL NETWORKS

Montanelli & Du (2019) used the sparse grid approach to construct deep neural networks with ReLU activation approximating Korobov functions with $O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}+1})$ neurons and depth $O(\log\frac{1}{\varepsilon})$ for the $L^\infty$ norm. We improve this bound for deep neural networks with $C^2$ non-linear activation functions. We prove that we only need $O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}})$ neurons and fixed depth, independent of $\varepsilon$, to approximate the unit ball of the Korobov space within $\varepsilon$ in the $L^\infty$ norm.

Theorem 4.1. Let $\sigma \in C^2$ be a non-linear activation function. Let $\varepsilon > 0$. For any function $f \in X^{2,\infty}(\Omega)$, there exists a neural network of depth $\lceil\log_2 d\rceil + 1$, with ReLU activation on the first layer and activation function $\sigma$ for the next layers, $O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}})$ neurons, and $O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}})$ training parameters, approximating $f$ within $\varepsilon$ for the infinity norm.

Compared to the bound for shallow networks in Theorem 3.1, the number of neurons for deep networks is lower by a factor $O(\sqrt{\varepsilon})$, while the number of training parameters is the same. Hence, deep neural networks are more efficient than shallow neural networks in the sense that shallow networks need more "inactive" neurons to reach the same approximation power, but have the same number of parameters. This gap in the number of "inactive" neurons can be consequential in practice, as we may not know exactly which neurons to train and which neurons to fix. This new bound on the number of parameters and neurons matches the approximation power of sparse grids. In fact, sparse grids use $\Theta(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}})$ parameters (weights of basis functions) to approximate Korobov functions within $\varepsilon$. Our construction in Theorem 4.1 shows that deep neural networks with fixed depth in $\varepsilon$ can fully encode sparse grid approximators. Neural networks are therefore more powerful function approximators. In particular, any sparse grid approximation using $O(N(\varepsilon))$ parameters can be represented exactly by a neural network using $O(N(\varepsilon))$ neurons. The deep approximating network (see Figure 3) has a very similar structure to our construction of an approximating shallow network in Theorem 3.1. The main difference lies in the approximation of the product function. Instead of using a 2-layer neural network, we now use a deep network. The following result shows that deep neural networks can represent exactly the product function.

Proposition 4.2 (Lin et al. (2017), Appendix A). Let $\sigma$ be a $C^2$ non-linear activation function. For any approximation error $\varepsilon > 0$, there exists a neural network with $\lceil\log_2 d\rceil$ hidden layers and activation $\sigma$, using at most $8d$ neurons arranged in a binary tree network, that approximates the product function $\prod_{i=1}^d x_i$ on $[0,1]^d$ within $\varepsilon$ for the infinity norm.

An important remark is that the structure of the constructed neural network is independent of $\varepsilon$. In particular, the depth and number of neurons are independent of the approximation precision $\varepsilon$, which we refer to as exact approximation. It is known that an exponential number of neurons is needed in order to exactly approximate the product function with a 1-layer neural network (Lin et al., 2017); however, the question of whether one could approximate the product with a shallow network and a polynomial number of neurons remained open. In Proposition 3.2, we answer this question positively by constructing an $\varepsilon$-approximating neural network of depth 2 with ReLU activation and $O(d^{\frac{3}{2}}\varepsilon^{-\frac{1}{2}}\log\frac{1}{\varepsilon})$ neurons. Using the same ideas as in Theorem 3.4, we can generalize this result to obtain an $\varepsilon$-approximating neural network of depth 2 with $O(d^{\frac{3}{2}}\varepsilon^{-\frac{1}{2}}\log\frac{1}{\varepsilon})$ neurons for a ReLU-like activation, or $O(d^2\varepsilon^{-1}\log\frac{1}{\varepsilon})$ neurons for a sigmoid-like activation.
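To get a feel for the shallow-versus-deep trade-off discussed in this section, the throwaway script below (ours; all constants in $d$ dropped) evaluates the neuron counts of Theorems 3.1 and 4.1.

```python
# Asymptotic neuron counts (constants in d dropped) of Theorems 3.1 and 4.1.
import math

def shallow_neurons(eps, d):    # Theorem 3.1: eps^{-1} (log 1/eps)^{3(d-1)/2 + 1}
    return math.log(1 / eps) ** (1.5 * (d - 1) + 1) / eps

def deep_neurons(eps, d):       # Theorem 4.1: eps^{-1/2} (log 1/eps)^{3(d-1)/2}
    return math.log(1 / eps) ** (1.5 * (d - 1)) / math.sqrt(eps)

for d in (2, 5, 10):
    for eps in (1e-2, 1e-4, 1e-6):
        ratio = shallow_neurons(eps, d) / deep_neurons(eps, d)
        print(f"d={d:2d}  eps={eps:.0e}  shallow/deep ~ {ratio:.1e}")
# The ratio grows like eps^{-1/2} log(1/eps): shallow networks pay in "inactive"
# neurons, while both constructions use the same number of training parameters.
```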
5 NEURAL NETWORKS ARE NEAR-OPTIMAL FUNCTION APPROXIMATORS

In the previous sections, we proved upper bounds on the number of neurons and training parameters needed by deep and shallow neural networks to approximate the Korobov space $X^{2,\infty}(\Omega)$. We now investigate how good the performance of neural networks is as function approximators. We prove a lower bound on the number of parameters needed by any continuous function approximator to approximate the Korobov space. In particular, neural networks, deep and shallow, nearly match this lower bound, making them near-optimal function approximators.

Let us first formalize the notion of continuous function approximators, following the framework of DeVore et al. (1989). For any Banach space $X$ (e.g., a function space) and a subset $K \subset X$ of elements to approximate, we define a continuous function approximator with $N$ parameters as a continuous parametrization $a : K \to \mathbb{R}^N$ together with a reconstruction scheme, which is an $N$-dimensional manifold $M_N : \mathbb{R}^N \to X$. For any element $f \in K$, the approximation given is $M_N(a(f))$: the parametrization $a$ is derived continuously from the function $f$ and then given as input to the reconstruction manifold, which outputs an approximating function in $X$. The error of this function approximator is defined as $E_{N,a,M_N}(K)_X := \sup_{f \in K} \|f - M_N(a(f))\|_X$. The best function approximator for the space $K$ minimizes this error. The minimal error for the space $K$ is given by $$E_N(K)_X = \min_{a, M_N} E_{N,a,M_N}(K)_X.$$ In other terms, a continuous function approximator with $N$ parameters cannot hope to approximate $K$ better than within $E_N(K)_X$. A class of function approximators is a set of function approximators with a given structure. For example, neural networks with continuous parametrizations are a class of function approximators where the number of parameters is the number of training parameters. We say that a class of function approximators is optimal for the space of functions $K$ if it matches this minimal error asymptotically in $N$, within a constant multiplicative factor. In other words, the number of parameters needed by the class to approximate functions in $K$ within $\varepsilon$ matches asymptotically, within a constant, the least number of parameters $N$ needed to satisfy $E_N(K)_X \le \varepsilon$. The norm considered in the approximation of the functions of $K$ is the norm associated with the space $X$. DeVore et al. (1989) showed that this minimal error $E_N(K)_X$ is lower bounded by the Bernstein width of the subset $K \subset X$, defined as $$b_N(K)_X := \sup_{X_{N+1}} \sup\{\rho : \rho\,U(X_{N+1}) \subset K\},$$ where the outer sup is taken over all $(N+1)$-dimensional linear subspaces of $X$, and $U(Y)$ denotes the unit ball of $Y$ for any linear subspace $Y$ of $X$.

Theorem 5.1 (DeVore et al. (1989)). Let $X$ be a Banach space and $K \subset X$. Then $E_N(K)_X \ge b_N(K)_X$.

We prove a lower bound on the least number of parameters any class of continuous function approximators needs to approximate functions of the Korobov space.

Theorem 5.2. Take $X = L^\infty(\Omega)$ and $K = \{f \in X^{2,\infty}(\Omega) : |f|_{X^{2,\infty}(\Omega)} \le 1\}$, the unit ball of the Korobov space. Then, there exists $c > 0$ such that $E_N(K)_X \ge c\,N^{-2}(\log N)^{d-1}$. Equivalently, for $\varepsilon > 0$, a continuous function approximator approximating $K$ within $\varepsilon$ in $L^\infty$ norm uses at least $\Theta(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{d-1}{2}})$ parameters.
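The two formulations in Theorem 5.2 are related by inverting the error-parameter relation; the short computation below (ours) makes the equivalence explicit.

```latex
% With N(\varepsilon) = \Theta\big(\varepsilon^{-1/2}(\log\tfrac{1}{\varepsilon})^{\frac{d-1}{2}}\big),
% taking logarithms gives \log N = \tfrac{1}{2}\log\tfrac{1}{\varepsilon}\,(1+o(1)),
% hence \log\tfrac{1}{\varepsilon} = \Theta(\log N). Substituting back:
\varepsilon
  \;=\; \Theta\!\Big(N^{-2}\big(\log\tfrac{1}{\varepsilon}\big)^{d-1}\Big)
  \;=\; \Theta\!\big(N^{-2}(\log N)^{d-1}\big),
% which is the stated lower bound E_N(K)_X \ge c\,N^{-2}(\log N)^{d-1}.
```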
Sketch of proof. We seek an appropriate subspace $X_{N+1}$ in order to lower bound the Bernstein width $b_N(K)_X$, which in turn provides a lower bound on the approximation error (Theorem 5.1). To do so, we use the Deslauriers-Dubuc interpolet of degree 2, $\phi^{(2)}$ (see Figure 6), which is $C^2$. Using the sparse grids approach, we construct a hierarchical basis in $X^{2,\infty}(\Omega)$ using $\phi^{(2)}$ as mother function, and define $X_{N+1}$ as the approximation space $V_n^{(1)}$. Here $n$ is chosen such that the dimension of $V_n^{(1)}$ is roughly $N+1$. The goal is to estimate $\sup\{\rho : \rho\,U(X_{N+1}) \subset K\}$, which will lead to a bound on $b_N(K)_X$. To do so, we upper bound the Korobov norm by the $L^\infty$ norm for elements of $X_{N+1}$. Any function $u \in X_{N+1}$ can be written as $u = \sum_{l,i} v_{l,i}\cdot\phi_{l,i}$. Using a stencil representation of the coefficients $v_{l,i}$, we are able to obtain an upper bound $|u|_{X^{2,\infty}} \le \Gamma_d\,\|u\|_\infty$, where $\Gamma_d = O(2^{2n} n^{d-1})$. Then $b_N(K)_X \ge 1/\Gamma_d$, which yields the desired bound.

This lower bound matches, within a logarithmic factor, the upper bound on the number of training parameters needed by deep and shallow neural networks to approximate the Korobov space within $\varepsilon$: $O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}})$ (Theorem 3.1 and Theorem 4.1). It exhibits the same exponential dependence in $d$ with base $\log\frac{1}{\varepsilon}$ and the same main dependence on $\varepsilon$ of $\varepsilon^{-\frac{1}{2}}$. Note that the upper and lower bounds can be rewritten as $O(\varepsilon^{-1/2-\delta})$ for all $\delta > 0$. Moreover, our constructions in Theorem 3.1 and Theorem 4.1 are continuous, which comes directly from the continuity of the sparse grid parameters (see the bound on $v_{l,i}$ in Theorem 2.2). Our bounds therefore prove that deep and shallow neural networks are near-optimal classes of function approximators for the Korobov space. Interestingly, the subspace $X_{N+1}$ our proof uses to show the lower bound is essentially the same as the subspace we use to approximate Korobov functions in our proofs of the upper bounds (Theorems 3.1 and 4.1). The difference is in the choice of the interpolet $\phi$ used to construct the basis functions: degree 2 for the former (which provides the regularity needed for the proof), and degree 1 for the latter.

6 CONCLUSION AND DISCUSSION

We proved new upper and lower bounds on the number of neurons and training parameters needed by shallow and deep neural networks to approximate Korobov functions. Our work shows that shallow and deep networks not only lessen the curse of dimensionality but are also near-optimal. Our work suggests several extensions. First, it would be very interesting to see whether our proposed theoretical near-optimal architectures have powerful empirical performance. While commonly used structures (e.g., Convolutional Neural Networks or Recurrent Neural Networks) are motivated by properties of the data such as symmetries, our structures are motivated by theoretical insights on how to optimally approximate a large class of functions with a given number of neurons and parameters. Second, our upper bounds (Theorems 3.1 and 4.1) nearly match our lower bound (Theorem 5.2) on the least number of training parameters needed to approximate the Korobov space. We wonder if it is possible to close the gap between these bounds and hence prove neural networks' optimality; e.g., one could prove that sparse grids are optimal function approximators by improving our lower bound to match the sparse grid number of parameters $O(\varepsilon^{-\frac{1}{2}}(\log\frac{1}{\varepsilon})^{\frac{3(d-1)}{2}})$. Finally, we showed the near-optimality of neural networks among the set of continuous function approximators.
It would be interesting to explore lower bounds (analogous to Theorem 5.2) when considering larger sets of function approximators, e.g., discontinuous function approximators. Could some discontinuous neural network construction break the curse of dimensionality for the Sobolev space? The question is then whether neural networks are still near-optimal in these larger sets of function approximators.

ACKNOWLEDGMENTS

The authors are grateful to Tomaso Poggio and the MIT 6.520 course teaching staff for several discussions, remarks and comments that were useful to this work.

APPENDIX

A ON KOROBOV FUNCTIONS

In this section, we further discuss Korobov functions $X^{2,p}(\Omega)$. Korobov functions enjoy more smoothness than Sobolev functions: smoothness for $X^{2,p}(\Omega)$ is measured in terms of mixed derivatives of order two. Korobov functions $X^{2,p}(\Omega)$ can be differentiated twice in each coordinate simultaneously, while Sobolev $W^{2,p}(\Omega)$ functions can only be differentiated twice in total. For example, in two dimensions, for a function $f$ to be Korobov it is required to have
$$\frac{\partial f}{\partial x_1},\ \frac{\partial f}{\partial x_2},\ \frac{\partial^2 f}{\partial x_1^2},\ \frac{\partial^2 f}{\partial x_2^2},\ \frac{\partial^2 f}{\partial x_1\partial x_2},\ \frac{\partial^3 f}{\partial x_1^2\partial x_2},\ \frac{\partial^3 f}{\partial x_1\partial x_2^2},\ \frac{\partial^4 f}{\partial x_1^2\partial x_2^2} \in L^p(\Omega),$$
while for $f$ to be Sobolev it requires only
$$\frac{\partial f}{\partial x_1},\ \frac{\partial f}{\partial x_2},\ \frac{\partial^2 f}{\partial x_1^2},\ \frac{\partial^2 f}{\partial x_2^2},\ \frac{\partial^2 f}{\partial x_1\partial x_2} \in L^p(\Omega).$$
The former can be seen from $|\alpha|_\infty \le 2$ and the latter from $|\alpha|_1 \le 2$ in the definitions of $X^{r,p}(\Omega)$ and $W^{r,p}(\Omega)$.

We now provide intuition on why Korobov functions are easier to approximate. One of the key difficulties in approximating Sobolev functions is the possibility of high-frequency oscillations, which may require an exponential number of neurons (Telgarsky, 2016). For instance, consider functions with a structure similar to $W_{(n,\dots,n)}$ (defined in Subsection 2.2): for any smooth basis function $\varphi$ with support on the unit cube (see Figure 6 for an example), consider the linear function space formed by linear combinations of dilated copies of $\varphi$ supported on each cube of the $d$-dimensional grid of step $2^{-n}$. This corresponds exactly to the construction of $W_{(n,\dots,n)}$, which uses the product of hat functions in each dimension as basis function $\varphi$. This function space can have strong oscillations in all directions at a time. The Korobov space prohibits such behavior by ensuring that functions can be differentiated twice in each dimension simultaneously. As a result, functions cannot oscillate in all directions at a time without having a large Korobov norm. We end this paragraph by comparing the Korobov space to the space of bandlimited functions, which was shown to avoid the curse of dimensionality (Montanelli et al., 2019). These are functions whose frequency support is restricted to a fixed compact set. Intuitively, approximating these functions can be achieved because the set of frequencies is truncated to a compact set, which then allows one to sample frequencies and obtain approximation guarantees. Instead of imposing the hard constraint of cutting high frequencies, the Korobov space asks for smoothness conditions which do not prohibit high frequencies, but rather impose a budget for high-frequency oscillations. We make this idea precise in the next example.

A concrete example of Korobov functions is given by an analogue of the function space $V_n^{(1)}$, which we used as approximation space in the proofs of our results (see Section 2.2). As in the previous paragraph, one should use a smooth basis function to ensure differentiability. Recall that $V_n^{(1)}$ is defined as
$$V_n^{(1)} := \bigoplus_{|l|_1 \le n+d-1} W_l.$$
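As a rough numerical illustration of how restrictive this construction is (our own sketch, not from the paper), the following snippet enumerates the sparse-grid index set $U_n^{(1)}$ used in the proofs below, compares its size with the full tensor grid $(2^n-1)^d$, and checks the closed-form count $\sum_{i=0}^{n-1} 2^i\binom{d-1+i}{d-1}$ quoted in Appendix B.3.

```python
from itertools import product
from math import comb

def sparse_grid_size(n: int, d: int) -> int:
    """Count basis functions in U_n^(1): levels l >= 1 with |l|_1 <= n + d - 1;
    on level l there are prod_j 2^(l_j - 1) odd positions i_j in [1, 2^{l_j}-1]."""
    total = 0
    for l in product(range(1, n + 1), repeat=d):
        if sum(l) <= n + d - 1:
            count = 1
            for lj in l:
                count *= 2 ** (lj - 1)
            total += count
    return total

for d in (2, 3, 4):
    for n in (4, 6, 8):
        sparse = sparse_grid_size(n, d)
        closed = sum(2 ** i * comb(d - 1 + i, d - 1) for i in range(n))
        assert sparse == closed  # the counting identity from Appendix B.3
        full = (2 ** n - 1) ** d
        print(f"d={d} n={n}: sparse grid {sparse:>8d} basis functions vs full grid {full:>15d}")
```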
Intuitively, this approximation space introduces a "budget" of oscillations shared across all dimensions through the constraint $\sum_{i=1}^d l_i \le n + d - 1$. As a result, dilations of the basis function can only occur in a restricted set of directions at a time, which ensures that the Korobov norm stays bounded.

B PROOFS OF SECTION 3

B.1 APPROXIMATING THE PRODUCT FUNCTION

In this subsection, we construct a neural network architecture with two layers and $O(d^{\frac{3}{2}}\epsilon^{-\frac{1}{2}}\log\frac{1}{\epsilon})$ neurons that approximates the product function $p : x \in [0,1]^d \mapsto \prod_{i=1}^d x_i$ within $\epsilon$ for all $\epsilon > 0$, which proves Proposition 3.2. We first prove a simple lemma to represent univariate piecewise affine functions by shallow neural networks.

Lemma B.1. Any one-dimensional continuous piecewise affine function with $m$ pieces is representable exactly by a shallow neural network with ReLU activation, with $m$ neurons on a single layer.

Proof. This is a simple consequence of Proposition 1 in Yarotsky (2017). We recall the proof for completeness. Let $x_1 \le \dots \le x_{m-1}$ be the subdivision of the piecewise affine function $f$. We use a neural network of the form
$$g(x) := f(x_1) + \sum_{k=1}^{m-1} w_k (x - x_k)_+ - w_0 (x_1 - x)_+,$$
where $w_0$ is the slope of $f$ on the piece $\le x_1$, $w_1$ is the slope of $f$ on the piece $[x_1, x_2]$,
$$w_k = \frac{f(x_{k+1}) - f(x_1) - \sum_{i=1}^{k-1} w_i (x_{k+1} - x_i)}{x_{k+1} - x_k},\quad k = 1, \dots, m-2,$$
and $w_{m-1} = \tilde w - \sum_{k=1}^{m-2} w_k$, where $\tilde w$ is the slope of $f$ on the piece $\ge x_{m-1}$. Notice that $f$ and $g$ coincide on all $x_k$ for $1 \le k \le m-1$. Furthermore, $g$ has the same slope as $f$ on each piece; therefore $g = f$.

We can approximate univariate right-continuous functions by piecewise affine functions, and then use Lemma B.1 to represent them by shallow neural networks. The following lemma shows that $O(\epsilon^{-1})$ neurons are sufficient to represent an increasing right-continuous function with a shallow neural network.

Lemma B.2. Let $f : I \to [c, d]$ be a right-continuous increasing function, where $I$ is an interval, and let $\epsilon > 0$. There exists a shallow neural network with ReLU activation, with $\lceil\frac{d-c}{\epsilon}\rceil$ neurons on a single layer, that approximates $f$ within $\epsilon$ for the infinity norm.

Proof. Let $m = \lfloor\frac{d-c}{\epsilon}\rfloor$. Define a subdivision of the image interval $c \le y_1 \le \dots \le y_m \le d$, where $y_k = c + k\epsilon$ for $k = 1, \dots, m$. Note that this subdivision contains exactly $\lceil\frac{d-c}{\epsilon}\rceil$ pieces. Now define a subdivision of $I$, $x_1 \le x_2 \le \dots \le x_m$, by
$$x_k := \sup\{x \in I : f(x) \le y_k\},\quad k = 1, \dots, m.$$
This subdivision still has $\lceil\frac{d-c}{\epsilon}\rceil$ pieces. We now construct our approximation function $\hat f$ on $I$ as the continuous piecewise affine function on the subdivision $x_1 \le \dots \le x_m$ such that $\hat f(x_k) = y_k$ for all $1 \le k \le m$, and $\hat f$ is constant before $x_1$ and after $x_m$ (see Figure 4). Let $x \in I$.
• If $x \le x_1$, because $f$ is increasing and right-continuous, $c \le f(x) \le f(x_1) \le y_1 = c + \epsilon$. Therefore $|f(x) - \hat f(x)| = |f(x) - (c+\epsilon)| \le \epsilon$.
• If $x_k < x \le x_{k+1}$, we have $y_k < f(x) \le f(x_{k+1}) \le y_{k+1}$. Further note that $y_k \le \hat f(x) \le y_{k+1}$. Therefore $|f(x) - \hat f(x)| \le y_{k+1} - y_k = \epsilon$.
• If $x_m < x$, then $y_m < f(x) \le d$. Again, $|f(x) - \hat f(x)| = |f(x) - y_m| \le d - y_m \le \epsilon$.
Therefore $\|f - \hat f\|_\infty \le \epsilon$. We can now use Lemma B.1 to end the proof.
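To make Lemma B.1 concrete, here is a small NumPy sketch (ours; the helper name is hypothetical) that assembles the one-hidden-layer ReLU representation of a continuous piecewise affine function from its breakpoints and verifies it on an example.

```python
import numpy as np

def relu_net_from_breakpoints(xs, ys, slope_left, slope_right):
    """Exact one-hidden-layer ReLU representation of a continuous piecewise
    affine function, in the spirit of Lemma B.1:
        g(x) = f(x_1) + sum_k w_k * relu(x - x_k) - w_0 * relu(x_1 - x).
    xs: breakpoints; ys: values f(x_k); slope_left / slope_right: slopes of
    the two unbounded pieces."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    inner = np.diff(ys) / np.diff(xs)                 # slopes between breakpoints
    slopes = np.concatenate((inner, [slope_right]))   # slope to the right of each x_k
    w = np.concatenate(([slopes[0]], np.diff(slopes)))  # ReLU weight = slope increment
    def g(x):
        x = np.asarray(x, float)
        return (ys[0]
                + np.maximum(x[..., None] - xs, 0.0) @ w
                - slope_left * np.maximum(xs[0] - x, 0.0))
    return g

# sanity check: the network reproduces the breakpoints exactly
xs, ys = [0.0, 0.3, 0.6, 1.0], [1.0, 0.2, 0.5, 0.4]
g = relu_net_from_breakpoints(xs, ys, slope_left=-1.0, slope_right=2.0)
assert np.allclose(g(np.array(xs)), ys)
```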
If the function to approximate has some regularity, the number of neurons needed for approximation can be significantly reduced. In the following lemma, we show that $O(\epsilon^{-\frac{1}{2}})$ neurons are sufficient to approximate a $C^2$ univariate function with a shallow neural network.

Lemma B.3. Let $f : [a, b] \to [c, d]$ be a $C^2$ function, and let $\epsilon > 0$. There exists a shallow neural network with ReLU activation, with $\frac{1}{\sqrt{2\epsilon}}\min\left(\int\sqrt{|f''|}\,(1 + \mu(f,\epsilon)),\ (b-a)\sqrt{\|f''\|_\infty}\right)$ neurons on a single layer, where $\mu(f,\epsilon)\to 0$ as $\epsilon\to 0$, that approximates $f$ within $\epsilon$ for the infinity norm.

Proof. See Appendix B.2.1.

We will now use the ideas of Lemma B.2 and Lemma B.3 to approximate a truncated logarithm function, which we will use in the construction of our neural network approximating the product.

Corollary B.4. Let $\epsilon > 0$ be sufficiently small and $\delta > 0$. Consider the truncated logarithm function $\log : [\delta, 1] \to \mathbb{R}$. There exists a shallow neural network with ReLU activation, with $\epsilon^{-\frac{1}{2}}\log\frac{1}{\delta}$ neurons on a single layer, that approximates it within $\epsilon$ for the infinity norm.

Proof. See Appendix B.2.2.

We are now ready to construct a neural network approximating the product function and prove Proposition 3.2. The proof builds upon the observation that $\prod_{i=1}^d x_i = \exp(\sum_{i=1}^d \log x_i)$. We construct an approximating 2-layer neural network where the first layer computes $\log x_i$ for $1 \le i \le d$, and the second layer computes the exponential. We illustrate the construction of the proof in Figure 2.

Proof of Proposition 3.2. Fix $\epsilon > 0$. Consider the function $h_\epsilon : x \in [0,1] \mapsto \max(\log x, \log\epsilon) \in [\log\epsilon, 0]$. Using Corollary B.4, there exists a neural network $\hat h_\epsilon : [0,1] \to [\log\epsilon, 0]$ with $1 + \lceil d^{\frac{1}{2}}\epsilon^{-\frac{1}{2}}\log\frac{1}{\epsilon}\rceil$ neurons on a single layer such that $\|h_\epsilon - \hat h_\epsilon\|_\infty \le \frac{\epsilon}{d}$. Indeed, one can take the $\frac{\epsilon}{d}$-approximation of $x \in [\epsilon, 1] \mapsto \log x \in [\log\epsilon, 0]$, and then extend this function to $[0, \epsilon]$ with a constant equal to $\log\epsilon$. The resulting piecewise affine function has one additional segment, corresponding to one additional neuron in the approximating network. Similarly, consider the exponential $g : x \in \mathbb{R}_- \mapsto e^x \in [0,1]$. We can use Lemma B.3 to construct a neural network $\hat g_\epsilon : \mathbb{R}_- \to [0,1]$ with $1 + \lceil\frac{1}{\sqrt{2\epsilon}}\log\frac{1}{\epsilon}\rceil$ neurons on a single layer such that $\|g - \hat g_\epsilon\|_\infty \le \epsilon$. Indeed, again one can take the $\epsilon$-approximation of $x \in [\log\epsilon, 0] \mapsto e^x \in [\epsilon, 1]$, and then extend this function to $(-\infty, \log\epsilon]$ with a constant equal to $\epsilon$. The corresponding neural network has one additional neuron.

We construct our final neural network $\hat\varphi_\epsilon$ (see Figure 2) as
$$\hat\varphi_\epsilon = \hat g_\epsilon\left(\sum_{i=1}^d \hat h_\epsilon(x_i)\right).$$
Note that $\hat\varphi_\epsilon$ can be represented as a 2-layer neural network: the first layer is composed of the union of the $1 + \lceil d^{\frac{1}{2}}\epsilon^{-\frac{1}{2}}\log\frac{1}{\epsilon}\rceil$ neurons composing each of the 1-layer neural networks $\hat h^i_\epsilon : x \in [0,1]^d \mapsto \hat h_\epsilon(x_i) \in \mathbb{R}$ for each dimension $i \in \{1,\dots,d\}$. The second layer is composed of the $1 + \lceil\frac{1}{\sqrt{2\epsilon}}\log\frac{1}{\epsilon}\rceil$ neurons of $\hat g_\epsilon$. Hence, the constructed neural network $\hat\varphi_\epsilon$ has $O(d^{\frac{3}{2}}\epsilon^{-\frac{1}{2}}\log\frac{1}{\epsilon})$ neurons.

Let us now analyze the approximation error. Let $x \in [0,1]^d$. For the sake of brevity, denote $\hat y = \sum_{i=1}^d \hat h_\epsilon(x_i)$ and $y = \sum_{i=1}^d \log x_i$. We have
$$|\hat\varphi_\epsilon(x) - p(x)| \le |\hat\varphi_\epsilon(x) - \exp(\hat y)| + |\exp(\hat y) - \exp(y)| \le \epsilon + \prod_{i=1}^d x_i\cdot|\exp(\hat y - y) - 1|,$$
where we used the fact that $|\hat\varphi_\epsilon(x) - \exp(\hat y)| = |\hat g_\epsilon(\hat y) - g(\hat y)| \le \|\hat g_\epsilon - g\|_\infty \le \epsilon$. First suppose that $x \ge \epsilon$ componentwise. In this case, for all $i \in \{1,\dots,d\}$ we have $|\hat h_\epsilon(x_i) - \log x_i| = |\hat h_\epsilon(x_i) - h_\epsilon(x_i)| \le \frac{\epsilon}{d}$. Then $|\hat y - y| \le \epsilon$. Consequently, $|\hat\varphi_\epsilon(x) - p(x)| \le \epsilon + \max(|e^{\epsilon} - 1|, |e^{-\epsilon} - 1|) \le 3\epsilon$, for $\epsilon > 0$ sufficiently small. Without loss of generality, now suppose $x_1 \le \epsilon$. Then $\hat y \le \hat h_\epsilon(x_1) \le \log\epsilon$, so by definition of $\hat g_\epsilon$, we have $0 \le \hat\varphi_\epsilon(x) = \hat g_\epsilon(\hat y) \le \exp(\log\epsilon) = \epsilon$. Also, $0 \le p(x) \le \epsilon$, so finally $|\hat\varphi_\epsilon(x) - p(x)| \le \epsilon$.

Remark B.5. Note that using Lemma B.2 instead of Lemma B.3 to construct approximating shallow networks for log and exp would yield approximation functions $\hat h_\epsilon$ with $O(\lceil\frac{d}{\epsilon}\log\frac{1}{\epsilon}\rceil)$ neurons and $\hat g_\epsilon$ with $O(\lceil\frac{1}{\epsilon}\rceil)$ neurons. Therefore, the corresponding neural network would approximate the product $p$ with $O(d^{2}\epsilon^{-1}\log\frac{1}{\epsilon})$ neurons.
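The following self-contained sketch (our illustration; `pw_linear` is a hypothetical helper standing in for the networks of Lemma B.1) mimics this construction: piecewise-linear approximations of a truncated log on the first layer and of exp on the second, composed to approximate the product on $[0,1]^d$.

```python
import numpy as np

def pw_linear(f, grid):
    """Piecewise-linear interpolant of f on the given grid -- exactly what a
    one-hidden-layer ReLU network can represent, by Lemma B.1."""
    vals = f(grid)
    return lambda x: np.interp(x, grid, vals)

eps, d = 1e-3, 4
# first layer: approximate h(x) = max(log x, log eps) on [0, 1]
h_grid = np.concatenate(([0.0], np.geomspace(eps, 1.0, 200)))
h_hat = pw_linear(lambda x: np.log(np.maximum(x, eps)), h_grid)
# second layer: approximate g(y) = exp(y) on [d * log eps, 0]
g_grid = np.linspace(d * np.log(eps), 0.0, 200)
g_hat = pw_linear(np.exp, g_grid)

def prod_hat(x):
    """Two-layer approximation of prod_i x_i via exp(sum_i log x_i)."""
    return g_hat(sum(h_hat(xi) for xi in x))

rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, size=(1000, d))
err = max(abs(prod_hat(x) - np.prod(x)) for x in xs)
print(f"max error on 1000 random points: {err:.2e}")
```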
B.2 MISSING PROOFS OF SECTION B.1

B.2.1 PROOF OF LEMMA B.3

Proof. Similarly to the proof of Lemma B.2, the goal is to approximate $f$ by a piecewise affine function $\hat f$ defined on a subdivision $x_0 = a \le x_1 \le \dots \le x_m \le x_{m+1} = b$ such that $f$ and $\hat f$ coincide on $x_0, \dots, x_{m+1}$. We first analyze the error induced by a linear approximation of the function on each piece. Let $x \in [u, v]$ for $u, v \in I$. Using the mean value theorem, there exists $\alpha_x \in [u, x]$ such that $f(x) - f(u) = f'(\alpha_x)(x - u)$, and $\beta_x \in [x, v]$ such that $f(v) - f(x) = f'(\beta_x)(v - x)$. Combining these two equalities, we get
$$f(x) - f(u) - (x-u)\frac{f(v) - f(u)}{v - u} = \frac{(v-x)(f(x) - f(u)) - (x-u)(f(v) - f(x))}{v-u} = (x-u)(x-v)\frac{f'(\beta_x) - f'(\alpha_x)}{v-u} = -(x-u)(v-x)\frac{\int_{\alpha_x}^{\beta_x} f''(t)\,dt}{v-u}.$$
Hence,
$$f(x) = f(u) + (x-u)\frac{f(v)-f(u)}{v-u} - (x-u)(v-x)\frac{\int_{\alpha_x}^{\beta_x} f''(t)\,dt}{v-u}.\qquad(1)$$
We now apply this result to bound the approximation error on each piece of the subdivision. Let $k \in [m]$. Recall that $\hat f$ is affine on $[x_k, x_{k+1}]$ with $\hat f(x_k) = f(x_k)$ and $\hat f(x_{k+1}) = f(x_{k+1})$. Hence, for all $x \in [x_k, x_{k+1}]$, $\hat f(x) = f(x_k) + (x - x_k)\frac{f(x_{k+1}) - f(x_k)}{x_{k+1} - x_k}$. Using equation (1) with $u = x_k$ and $v = x_{k+1}$, we get
$$\|f - \hat f\|_{\infty,[x_k,x_{k+1}]} \le \sup_{x\in[x_k,x_{k+1}]}\left|(x - x_k)(x_{k+1} - x)\frac{\int_{\alpha_x}^{\beta_x} f''(t)\,dt}{x_{k+1} - x_k}\right| \le \frac{1}{2}(x_{k+1} - x_k)\int_{x_k}^{x_{k+1}}|f''(t)|\,dt \le \frac{1}{2}(x_{k+1} - x_k)^2\,\|f''\|_{\infty,[x_k,x_{k+1}]}.$$
Therefore, using a regular subdivision with step $\sqrt{2\epsilon/\|f''\|_\infty}$ yields an $\epsilon$-approximation of $f$ with $\left\lceil\frac{(b-a)\sqrt{\|f''\|_\infty}}{\sqrt{2\epsilon}}\right\rceil$ pieces. We now show that for any $\mu > 0$, there exists an $\epsilon$-approximation of $f$ with at most $\frac{\int\sqrt{|f''|}}{\sqrt{2\epsilon}}(1 + \mu)$ pieces. To do so, we use the fact that the upper Riemann sum of $\sqrt{|f''|}$ converges to its integral, since $\sqrt{|f''|}$ is continuous on $[a, b]$. First define a partition $a = X_0 \le \dots \le X_K = b$ of $[a, b]$ such that the upper Riemann sum $R(\sqrt{|f''|})$ on this subdivision satisfies $R(\sqrt{|f''|}) \le (1 + \mu/2)\int_a^b\sqrt{|f''|}$. Now define on each interval $I_k$ of the partition a regular subdivision with step $\sqrt{2\epsilon/\|f''\|_{\infty,I_k}}$ as before. Finally, consider the subdivision that is the union of all these subdivisions, and construct the approximation $\hat f$ on this final subdivision. By construction, $\|f - \hat f\|_\infty \le \epsilon$, because the inequality holds on each piece of the subdivision. Further, the number of pieces is
$$\sum_{i=0}^{K-1}\left(1 + (X_{i+1} - X_i)\frac{\sup_{[X_i,X_{i+1}]}\sqrt{|f''|}}{\sqrt{2\epsilon}}\right) = K + \frac{R(\sqrt{|f''|})}{\sqrt{2\epsilon}} \le \frac{\int\sqrt{|f''|}}{\sqrt{2\epsilon}}(1 + \mu),$$
for $\epsilon > 0$ small enough. Using Lemma B.1, we can complete the proof.

B.2.2 PROOF OF COROLLARY B.4

Proof. In view of Lemma B.3, the goal is to show that we can remove the dependence on $\delta$ of the $\mu(f,\epsilon)$ term. This essentially comes from the fact that the upper Riemann sum behaves well for approximating log. Consider the subdivision $x_0 := \delta \le x_1 \le \dots \le x_m \le x_{m+1} := 1$ with $m = \lfloor\frac{1}{\tilde\epsilon}\log\frac{1}{\delta}\rfloor$, where $\tilde\epsilon := \log(1 + \sqrt{2\epsilon})$, such that $x_k = e^{\log\delta + k\tilde\epsilon}$ for $k = 0, \dots, m-1$. Denote by $\hat f$ the corresponding piecewise affine approximation. Similarly to the proof of Lemma B.3, for $k = 0, \dots, m-1$,
$$\|\log{} - \hat f\|_{\infty,[x_k,x_{k+1}]} \le \frac{1}{2}(x_{k+1} - x_k)^2\,\|f''\|_{\infty,[x_k,x_{k+1}]} \le \frac{(e^{\tilde\epsilon} - 1)^2}{2} \le \epsilon.$$
The proof follows.
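As a quick numerical check of Corollary B.4 (our own sketch, with illustrative parameter values), the geometric subdivision from the proof approximates log on $[\delta, 1]$ within $\epsilon$ using $O(\epsilon^{-1/2}\log\frac{1}{\delta})$ pieces:

```python
import numpy as np

eps, delta = 1e-4, 1e-3
step = np.log(1.0 + np.sqrt(2 * eps))          # the tilde-epsilon of the proof
m = int(np.log(1.0 / delta) / step)
grid = np.exp(np.log(delta) + step * np.arange(m + 1))
grid = np.append(grid, 1.0)                    # close the subdivision at 1
f_hat = lambda x: np.interp(x, grid, np.log(grid))

x = np.linspace(delta, 1.0, 100001)
err = np.max(np.abs(f_hat(x) - np.log(x)))
print(f"{len(grid)} pieces, max error {err:.2e} (target eps = {eps:.0e})")
```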
B.3 PROOF OF THEOREM 3.1: APPROXIMATING THE KOROBOV SPACE $X^{2,\infty}(\Omega)$

In this subsection, we prove Theorem 3.1 and show that we can approximate any Korobov function $f \in X^{2,\infty}(\Omega)$ within $\epsilon$ with a 2-layer neural network of $O(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ neurons. We illustrate the construction in Figure 5. Our proof combines the constructed network approximating the product function with a decomposition of $f$ as a sum of separable functions, i.e., a decomposition of the form
$$f(x) \approx \sum_{k=1}^K\prod_{j=1}^d \varphi_j^{(k)}(x_j),\qquad\forall x \in [0,1]^d.$$
Consider the sparse grid construction of the approximating space $V_n^{(1)}$ using the standard hat function as mother function to create the hierarchical basis $W_l$ (introduced in Section 2.2). We recall that the approximation space is defined as $V_n^{(1)} := \bigoplus_{|l|_1\le n+d-1} W_l$. We will construct a neural network approximating the sparse grid approximation, and then use the result of Theorem 2.2 to derive the approximation error. Figure 5 gives an illustration of the construction. Let $f_n^{(1)}$ be the projection of $f$ on the subspace $V_n^{(1)}$. $f_n^{(1)}$ can be written as $f_n^{(1)}(x) = \sum_{(l,i)\in U_n^{(1)}} v_{l,i}\,\varphi_{l,i}(x)$, where $U_n^{(1)}$ contains the indices $(l, i)$ of the basis functions present in $V_n^{(1)}$, i.e.,
$$U_n^{(1)} := \{(l,i) :\ |l|_1 \le n+d-1,\ 1 \le i \le 2^l - 1,\ i_j \text{ odd for all } 1 \le j \le d\}.\qquad(2)$$
Throughout the proof, we explicitly construct a neural network that uses this decomposition to approximate $f_n^{(1)}$. We then use Theorem 2.2 and choose $n$ carefully such that $f_n^{(1)}$ approximates $f$ within $\epsilon$ in $L^\infty$ norm. Note that the basis functions can be written as products of univariate functions, $\varphi_{l,i} = \prod_{j=1}^d\varphi_{l_j,i_j}$. We can therefore use the product approximation of Proposition 3.2 to approximate the basis functions. Specifically, we will use one layer to approximate the terms $\log\varphi_{l_j,i_j}$ and a second layer to approximate the exponential.

We now present in detail the construction of the first layer. First, recall that $\varphi_{l_j,i_j}$ is a piecewise affine function with subdivision $0 \le \frac{i_j-1}{2^{l_j}} \le \frac{i_j}{2^{l_j}} \le \frac{i_j+1}{2^{l_j}} \le 1$. Define the error term $\tilde\epsilon := \frac{\epsilon}{2|f|_{2,\infty}}$. We consider a symmetric subdivision of the interval $\left[\frac{i_j-1+\tilde\epsilon}{2^{l_j}}, \frac{i_j+1-\tilde\epsilon}{2^{l_j}}\right]$. We define it as follows: $x_0 = \frac{i_j-1+\tilde\epsilon}{2^{l_j}} \le x_1 \le \dots \le x_{m+1} = \frac{i_j}{2^{l_j}} \le x_{m+2} \le \dots \le x_{2m+2} = \frac{i_j+1-\tilde\epsilon}{2^{l_j}}$, where $m = \lfloor\frac{1}{\epsilon_0}\log\frac{1}{\tilde\epsilon}\rfloor$ and $\epsilon_0 := \log(1 + \sqrt{2\tilde\epsilon/d})$, such that
$$x_k = \frac{i_j - 1 + e^{\log\tilde\epsilon + k\epsilon_0}}{2^{l_j}},\quad 0 \le k \le m,\qquad\qquad x_k = \frac{i_j + 1 - e^{\log\tilde\epsilon + (2m+2-k)\epsilon_0}}{2^{l_j}},\quad m+2 \le k \le 2m+2.$$
Note that with this definition, the terms $\log(2^{l_j}x_k - i_j + 1)$ form a regular sequence with step $\epsilon_0$. We now construct the piecewise affine function $\hat g_{l_j,i_j}$ on the subdivision $x_0 \le \dots \le x_{2m+2}$ which coincides with $\log\varphi_{l_j,i_j}$ on $x_0, \dots, x_{2m+2}$ and is constant on $[0, x_0]$ and $[x_{2m+2}, 1]$. By Lemma B.1, this function can be represented by a 1-layer neural network with as many neurons as the number of pieces of $\hat g$, i.e., at most $2\sqrt{\frac{3d}{\tilde\epsilon}}\log\frac{1}{\tilde\epsilon}$ neurons for $\epsilon$ sufficiently small. A proof similar to that of Corollary B.4 shows that $\hat g$ approximates $\max(\log\varphi_{l_j,i_j}, \log(\tilde\epsilon/3))$ within $\tilde\epsilon/(3d)$ for the infinity norm. We use this construction to compute in parallel $\tilde\epsilon/(3d)$-approximations of $\max(\log\varphi_{l_j,i_j}(x_j), \log(\tilde\epsilon/3))$ for all $1 \le j \le d$ and $1 \le l_j \le n$, $1 \le i_j \le 2^{l_j}$ with $i_j$ odd. These are exactly the 1-dimensional functions that we need in order to compute the $d$-dimensional function basis of the approximation space $V_n^{(1)}$. There are $d(2^n - 1)$ such univariate functions; therefore our first layer contains at most $2^{n+1}d\sqrt{\frac{3d}{\tilde\epsilon}}\log\frac{1}{\tilde\epsilon}$ neurons.

We now turn to the second layer. The result of the first two layers will be $\tilde\epsilon/3$-approximations of $\varphi_{l,i}$ for all $(l,i) \in U_n^{(1)}$. Recall that $U_n^{(1)}$ contains the indices of the functions forming a basis of the approximation space $V_n^{(1)}$. To do so, for each index $(l,i) \in U_n^{(1)}$ we construct a 1-layer neural network approximating the function exp, which computes an approximation of $\exp(\hat g_{l_1,i_1} + \dots + \hat g_{l_d,i_d})$.
The approximation of exp is constructed in the same way as in Lemma B.3. Consider a regular subdivision of the interval $[\log(\tilde\epsilon/3), 0]$ with step $\sqrt{2\tilde\epsilon/3}$, i.e., $x_0 := \log(\tilde\epsilon/3) \le x_1 \le \dots \le x_m \le x_{m+1} = 0$, where $m = \lfloor\sqrt{\frac{3}{2\tilde\epsilon}}\log\frac{3}{\tilde\epsilon}\rfloor$, such that $x_k = \log(\tilde\epsilon/3) + k\sqrt{2\tilde\epsilon/3}$ for $0 \le k \le m$. Construct the piecewise affine function $\hat h$ on the subdivision $x_0 \le \dots \le x_{m+1}$ which coincides with exp on $x_0, \dots, x_{m+1}$ and is constant on $(-\infty, x_0]$. Lemma B.3 shows that $\hat h$ approximates exp on $\mathbb{R}_-$ within $\tilde\epsilon/3$ for the infinity norm. Again, Lemma B.1 gives a representation of $\hat h$ as a 1-layer neural network with as many neurons as pieces in $\hat h$, i.e., $1 + \lceil\sqrt{\frac{3}{2\tilde\epsilon}}\log\frac{3}{\tilde\epsilon}\rceil$. The second layer is the union of the 1-layer neural networks approximating exp within $\tilde\epsilon/3$, for each index $(l,i) \in U_n^{(1)}$. Therefore, the second layer contains $|U_n^{(1)}|\left(1 + \lceil\sqrt{\frac{3}{2\tilde\epsilon}}\log\frac{3}{\tilde\epsilon}\rceil\right)$ neurons. As shown in Bungartz & Griebel (2004),
$$|U_n^{(1)}| = \sum_{i=0}^{n-1} 2^i\binom{d-1+i}{d-1} = (-1)^d + 2^n\sum_{i=0}^{d-1}\binom{n+d-1}{i}(-2)^{d-1-i} = 2^n\left(\frac{n^{d-1}}{(d-1)!} + O(n^{d-2})\right).$$
Therefore, the second layer has $O\left(2^n\frac{n^{d-1}}{(d-1)!}\,\tilde\epsilon^{-\frac{1}{2}}\log\frac{1}{\tilde\epsilon}\right)$ neurons. Finally, the output layer computes the weighted sum of the basis functions to approximate $f_n^{(1)}$. Denote by $\hat f_n^{(1)}$ the function computed by the constructed neural network (see Figure 5), i.e.,
$$\hat f_n^{(1)} = \sum_{(l,i)\in U_n^{(1)}} v_{l,i}\cdot\hat h\left(\sum_{j=1}^d\hat g_{l_j,i_j}(x_j)\right).$$
Let us analyze the approximation error of our neural network. The proof of Proposition 3.2 shows that the output of the first two layers, $\hat h(\sum_{j=1}^d\hat g_{l_j,i_j}(\cdot_j))$, approximates $\varphi_{l,i}$ within $\tilde\epsilon$. Therefore, we obtain
$$\|f_n^{(1)} - \hat f_n^{(1)}\|_\infty \le \tilde\epsilon\sum_{(l,i)\in U_n^{(1)}}|v_{l,i}|.$$
We now use the approximation bounds from Theorem 2.2 on $f_n^{(1)}$:
$$\|f - \hat f_n^{(1)}\|_\infty \le \|f - f_n^{(1)}\|_\infty + \|f_n^{(1)} - \hat f_n^{(1)}\|_\infty \le 2\cdot\frac{|f|_{2,\infty}}{8^d}\cdot 2^{-2n}\cdot A(d,n) + \frac{\epsilon}{2|f|_{2,\infty}}\sum_{(l,i)\in U_n^{(1)}}|v_{l,i}|,$$
where
$$\sum_{(l,i)\in U_n^{(1)}}|v_{l,i}| \le |f|_{2,\infty}\,2^{-d}\sum_{i\ge 0} 2^{-i}\binom{d-1+i}{d-1} \le |f|_{2,\infty}.$$
Let us now take $n = \min\left\{n : 2\,\frac{|f|_{2,\infty}}{8^d}\,2^{-2n}A(d,n) \le \frac{\epsilon}{2}\right\}$. Then, using the above inequality shows that the neural network $\hat f_n^{(1)}$ approximates $f$ within $\epsilon$ for the infinity norm. We now estimate the number of neurons in each layer of this network. Note that $n \sim \frac{1}{2\log 2}\log\frac{1}{\epsilon}$ and
$$2^n \le \frac{4\cdot 8^{\frac{d}{2}}}{(2\log 2)^{\frac{d-1}{2}}((d-1)!)^{\frac{1}{2}}}\,\sqrt{\frac{|f|_{2,\infty}}{\epsilon}}\left(\log\frac{1}{\epsilon}\right)^{\frac{d-1}{2}}\cdot(1 + o(1)).\qquad(3)$$
We can use the above estimates to show that the constructed neural network has at most $N_1$ (resp. $N_2$) neurons on the first (resp. second) layer, where
$$N_1 \underset{\epsilon\to 0}{\sim} \frac{8\sqrt{6}\,d^2\,8^{\frac{d}{2}}}{(2\log 2)^{\frac{d-1}{2}}\,(d!)^{\frac{1}{2}}}\cdot\frac{|f|_{2,\infty}}{\epsilon}\left(\log\frac{1}{\epsilon}\right)^{\frac{d+1}{2}},\qquad N_2 \underset{\epsilon\to 0}{\sim} \frac{4\sqrt{3}\,d^{\frac{3}{2}}\,8^{\frac{d}{2}}}{(2\log 2)^{\frac{3(d-1)}{2}}\,(d!)^{\frac{3}{2}}}\cdot\frac{|f|_{2,\infty}}{\epsilon}\left(\log\frac{1}{\epsilon}\right)^{\frac{3(d-1)}{2}+1}.$$
This proves the bound on the number of neurons. Finally, to prove the bound on the number of training parameters of the network, notice that the only parameters of the network that depend on the function $f$ are those corresponding to the weights $v_{l,i}$ of the sparse grid decomposition. Their number is $|U_n^{(1)}| = O(2^n n^{d-1}) = O(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$.

B.4 PROOF OF THEOREM 3.4: GENERALIZATION TO GENERAL ACTIVATION FUNCTIONS

We start by formalizing the intuition that a sigmoid-like (resp. ReLU-like) function is a function that resembles the Heaviside (resp. ReLU) function when zooming out along the x (resp. x and y) axes.

Lemma B.6. Let $\sigma$ be a sigmoid-like activation with limit $a$ (resp. $b$) in $-\infty$ (resp. $+\infty$). For any $\delta > 0$ and error tolerance $\epsilon > 0$, there exists a scaling $M > 0$ such that $x \mapsto \frac{\sigma(Mx) - a}{b - a}$ approximates the Heaviside function within $\epsilon$ outside of $(-\delta, \delta)$ for the infinity norm. Furthermore, this function has values in $[0, 1]$. Let $\sigma$ be a ReLU-like activation with asymptote $b\cdot x + c$ in $+\infty$. For any error tolerance $\epsilon > 0$, there exists a scaling $M > 0$ such that $x \mapsto \frac{\sigma(Mx)}{Mb}$ approximates the ReLU function within $\epsilon$ for the infinity norm.
Proof. Let $\delta, \epsilon > 0$, and let $\sigma$ be a sigmoid-like activation with limits $a$ (resp. $b$) in $-\infty$ (resp. $+\infty$). There exists $x_0 > 0$ sufficiently large such that $|\sigma(x) - a| \le (b-a)\epsilon$ for $x \le -x_0$ and $|\sigma(x) - b| \le (b-a)\epsilon$ for $x \ge x_0$. It now suffices to take $M := x_0/\delta$ to obtain the desired result. Now let $\sigma$ be a ReLU-like activation with oblique asymptote $bx$ in $+\infty$, where $b > 0$. Let $M$ be such that $|\sigma(x)| \le Mb\epsilon$ for $x \le 0$ and $|\sigma(x) - bx| \le Mb\epsilon$ for $x \ge 0$. One can check that $\left|\frac{\sigma(Mx)}{Mb}\right| \le \epsilon$ for $x \le 0$, and $\left|\frac{\sigma(Mx)}{Mb} - x\right| \le \epsilon$ for $x \ge 0$.

Using this approximation, we reduce the analysis of sigmoid-like (resp. ReLU-like) activations to the case of a Heaviside (resp. ReLU) activation to prove the desired theorem.

Proof of Theorem 3.4. We start with the class of ReLU-like activations. Let $\sigma$ be a ReLU-like activation function. Lemma B.6 shows that one can approximate the ReLU activation arbitrarily well with a linear map of $\sigma$. Take the neural network approximator $\hat f$ of a target function $f$ given by Theorem 3.1. At each node, we can add the linear map corresponding to $x \mapsto \frac{\sigma(Mx)}{Mb}$ with no additional neuron or parameter. Because the approximation is continuous, we can take $M > 0$ arbitrarily large in order to approximate $\hat f$ with arbitrary precision on the compact $[0,1]^d$. The same argument holds for sigmoid-like activation functions, in order to reduce the problem to Heaviside activation functions. Although quadratic approximations for univariate functions similar to Lemma B.3 are not valid for general sigmoid-like activations — in particular the Heaviside — we can obtain an analogue of Lemma B.2, given as Lemma B.7 in Appendix B.4.1. This results in an increased number of neurons.

In order to approximate a target function $f \in X^{2,\infty}(\Omega)$, we use the same structure as the neural network constructed for ReLU activations, and we use the same notation as in the proof of Theorem 3.1. The first difference lies in the approximation of $\log\varphi_{l_j,i_j}$ in the first layer. Instead of using Corollary B.4, we use Lemma B.7. Therefore, $\frac{12d}{\tilde\epsilon}\log\frac{3}{\tilde\epsilon}$ neurons are needed to compute an $\tilde\epsilon/(3d)$-approximation of $\max(\log\varphi_{l_j,i_j}, \log(\tilde\epsilon/3))$. The second difference is in the approximation of the exponential in the second layer. Again, we use Lemma B.7 to construct an $\tilde\epsilon/3$-approximation of the exponential on $\mathbb{R}_-$ with $\frac{6}{\tilde\epsilon}$ neurons for the second layer. As a result, the first layer contains at most $2^{n+2}\,\frac{3d^2}{\tilde\epsilon}\log\frac{1}{\tilde\epsilon}$ neurons for $\epsilon$ sufficiently small, and the second layer contains $|U_n^{(1)}|\,\frac{6}{\tilde\epsilon}$ neurons. Using the same estimates as in the proof of Theorem 3.1 shows that the constructed neural network has at most $N_1$ (resp. $N_2$) neurons on the first (resp. second) layer, where
$$N_1 \underset{\epsilon\to0}{\sim} \frac{3\cdot 2^5\,d^{\frac{5}{2}}\,8^{\frac{d}{2}}}{(2\log2)^{\frac{d-1}{2}}\,(d!)^{\frac{1}{2}}}\cdot\frac{|f|_{2,\infty}^{\frac{3}{2}}}{\epsilon^{\frac{3}{2}}}\left(\log\frac{1}{\epsilon}\right)^{\frac{d+1}{2}},\qquad N_2 \underset{\epsilon\to0}{\sim} \frac{24\,d^{\frac{3}{2}}\,8^{\frac{d}{2}}}{(2\log2)^{\frac{3(d-1)}{2}}\,(d!)^{\frac{3}{2}}}\cdot\frac{|f|_{2,\infty}^{\frac{3}{2}}}{\epsilon^{\frac{3}{2}}}\left(\log\frac{1}{\epsilon}\right)^{\frac{3(d-1)}{2}}.$$
This ends the proof.
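A quick numerical illustration of Lemma B.6 (our own sketch, with illustrative parameter values): scaling the logistic sigmoid by a factor M turns it into an $\epsilon$-approximation of the Heaviside function outside $(-\delta, \delta)$. Here $a = 0$ and $b = 1$, so $\frac{\sigma(Mx)-a}{b-a} = \sigma(Mx)$.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

eps, delta = 1e-3, 0.05
M = np.log(1.0 / eps) / delta   # sigmoid(+-M*delta) is then eps-close to {0, 1}
x = np.concatenate((np.linspace(-1.0, -delta, 1000), np.linspace(delta, 1.0, 1000)))
heaviside = (x >= 0).astype(float)
print("max error outside (-delta, delta):", np.max(np.abs(sigmoid(M * x) - heaviside)))
```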
B.4.1 PROOF OF LEMMA B.7

Lemma B.7. Let $\sigma$ be a sigmoid-like activation. Let $f : I \to [c, d]$ be a right-continuous increasing function, where $I$ is an interval, and let $\epsilon > 0$. There exists a shallow neural network with activation $\sigma$, with at most $\frac{2(d-c)}{\epsilon}$ neurons on a single layer, that approximates $f$ within $\epsilon$ for the infinity norm.

Proof. The proof is analogous to that of Lemma B.2. Let $m = \lfloor\frac{d-c}{\epsilon}\rfloor$. We define a regular subdivision of the image interval $c \le y_1 \le \dots \le y_m \le d$, where $y_k = c + k\epsilon$ for $k = 1, \dots, m$; then, using the monotonicity of $f$, we can define a subdivision of $I$, $x_1 \le \dots \le x_m$, such that $x_k := \sup\{x \in I : f(x) \le y_k\}$. Let us first construct an approximating neural network $\hat f$ with the Heaviside activation. Consider
$$\hat f(x) := y_1 + \epsilon\sum_{i=1}^{m-1}\mathbb{1}\left(x - \frac{x_i + x_{i+1}}{2} \ge 0\right).$$
Let $x \in I$ and $k$ be such that $x \in [x_k, x_{k+1}]$. We have by monotonicity $y_k \le f(x) \le y_{k+1}$ and $y_k = y_1 + (k-1)\epsilon \le \hat f(x) \le y_1 + k\epsilon = y_{k+1}$. Hence $\hat f$ approximates $f$ within $\epsilon$ in infinity norm. Let $\delta < \min_{i=1,\dots,m}(x_{i+1}-x_i)/4$ and let $\sigma$ be a general sigmoid-like activation with limits $a$ in $-\infty$ and $b$ in $+\infty$. Take $M$ given by Lemma B.6 such that $\frac{\sigma(Mx)-a}{b-a}$ approximates the Heaviside function within $1/m$ outside of $(-\delta, \delta)$ and has values in $[0, 1]$. Using the same arguments as above, the function
$$\hat f(x) := y_1 + \epsilon\sum_{i=1}^{m-1}\frac{\sigma\left(Mx - M\frac{x_i+x_{i+1}}{2}\right) - a}{b-a}$$
approximates $f$ within $2\epsilon$ for the infinity norm. The proof follows.

C PROOFS OF SECTION 4

C.1 PROOF OF THEOREM 4.1: APPROXIMATING KOROBOV FUNCTIONS WITH DEEP NEURAL NETWORKS

Let $\epsilon > 0$. We construct a structure similar to the network defined in Theorem 3.1 by using the sparse grid approximation of Subsection 2.2. For a given $n$, let $f_n^{(1)}$ be the projection of $f$ on the approximation space $V_n^{(1)}$ (defined in Subsection 2.2), and let $U_n^{(1)}$ (defined in equation 2) be the set of indices $(l, i)$ of basis functions present in $V_n^{(1)}$. Recall that $f_n^{(1)}$ can be uniquely decomposed as $f_n^{(1)}(x) = \sum_{(l,i)\in U_n^{(1)}} v_{l,i}\,\varphi_{l,i}(x)$, where $\varphi_{l,i} = \prod_{j=1}^d\varphi_{l_j,i_j}$ are the basis functions defined in Subsection 2.2. In the first layer, we compute exactly the piecewise linear hat functions $\varphi_{l_j,i_j}$; then, in the next set of layers, we use the product-approximating neural network given by Proposition 4.2 to compute the basis functions $\varphi_{l,i} = \prod_{j=1}^d\varphi_{l_j,i_j}$ (see Figure 3). The output layer computes the weighted sum $\sum_{(l,i)\in U_n^{(1)}} v_{l,i}\varphi_{l,i}(x)$ and outputs $f_n^{(1)}$. Because the approximation has arbitrary precision, we can choose the network of Proposition 4.2 such that the resulting network $\hat f$ verifies $\|\hat f - f_n^{(1)}\|_\infty \le \epsilon/2$.

More precisely, as $\varphi_{l_j,i_j}$ is piecewise linear with four pieces, we can compute it exactly with four neurons with ReLU activation on a single layer (Lemma B.1). Our first layer is composed of the union of all these ReLU neurons, for the $d(2^n - 1)$ indices $l_j, i_j$ such that $1 \le j \le d$, $1 \le l_j \le n$, $1 \le i_j \le 2^{l_j}$ and $i_j$ is odd. Therefore, it contains at most $d\,2^{n+2}$ neurons with ReLU activation. The second set of layers is composed of the union of the product-approximating neural networks computing $\varphi_{l,i}$ for all $(l,i) \in U_n^{(1)}$. This set of layers contains $\lceil\log_2 d\rceil$ layers with activation $\sigma$ and at most $|U_n^{(1)}|\cdot 8d$ neurons. The output of these two sets of layers is an approximation of the basis functions $\varphi_{l,i}$ with arbitrary precision. Consequently, the final output of the complete neural network is an approximation of $f_n^{(1)}$ with arbitrary precision. Similarly to the proof of Theorem 3.1, we can choose the smallest $n$ such that $\|f - f_n^{(1)}\|_\infty \le \epsilon/2$ (see equation 3 for details). Finally, the network has depth at most $\lceil\log_2 d\rceil + 2$ and $N$ neurons, where
$$N = 8d\,|U_n^{(1)}| \underset{\epsilon\to 0}{\sim} \frac{2^5\,d^{\frac{5}{2}}\,8^{\frac{d}{2}}}{(2\log 2)^{\frac{3(d-1)}{2}}\,(d!)^{\frac{3}{2}}}\cdot\sqrt{\frac{|f|_{2,\infty}}{\epsilon}}\left(\log\frac{1}{\epsilon}\right)^{\frac{3(d-1)}{2}}.$$
The parameters of the network depending on the function are exactly the coefficients $v_{l,i}$ of the sparse grid approximation. Hence, the network has $O(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ training parameters.
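For the first layer above, the exact ReLU representation of a hat basis function is elementary; a small check (our own sketch) follows. Three ReLU neurons suffice for the hat on the real line; the proof's count of four accounts for its restriction to $[0,1]$.

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

def hat(t):
    """Mother hat function (1 - |t|)_+ as a combination of three ReLU neurons."""
    return relu(t + 1.0) - 2.0 * relu(t) + relu(t - 1.0)

# phi_{l,i}(x) = hat(2^l x - i); compare against the direct formula on [0, 1]
l, i = 3, 5
x = np.linspace(0.0, 1.0, 10001)
direct = np.maximum(1.0 - np.abs(2.0 ** l * x - i), 0.0)
assert np.allclose(hat(2.0 ** l * x - i), direct)
```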
D PROOFS OF SECTION 5

D.1 PROOF OF THEOREM 5.2: NEAR-OPTIMALITY OF NEURAL NETWORKS FOR KOROBOV FUNCTIONS

Our goal is to define an appropriate subspace $X_{N+1}$ in order to get a good lower bound on the Bernstein width $b_N(K)_X$, defined in equation 5, which in turn provides a lower bound on the approximation error (Theorem 5.1). To do so, we introduce the Deslaurier-Dubuc interpolet $\varphi$
1. What is the focus of the paper regarding neural networks and their efficiency in approximating certain function classes?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its theoretical contributions and practical relevance?
3. Do you have any concerns about the choice of function space and its relation to deep learning theory?
4. How does the reviewer assess the novelty and suitability of the paper for different venues?
5. Are there any questions regarding the technical aspects and construction of the upper bound?
Summary Of The Paper Review
Summary Of The Paper
This paper studies which function classes can be efficiently approximated by neural networks. It focuses on a special function class, namely the Korobov space $X^{2,\infty}$, which contains functions with $L^\infty$-bounded weak derivatives of order $\alpha$ for $|\alpha|_\infty \le 2$. This is a constraint on the local smoothness of the function space. The paper shows that shallow (depth-2) and deep neural networks can efficiently approximate $X^{2,\infty}$, and the number of parameters does not scale as $\epsilon^{-\mathrm{poly}(d)}$, thus efficiently escaping the curse of dimensionality. Furthermore, the paper shows the optimality of its parameter bounds by proving a matching lower bound.

Review
Strengths
- The paper gives matching upper and lower bounds for the $X^{2,\infty}$ space.

Weaknesses
- The space seems far from the practice of deep learning, and the contribution is mainly theoretical. The bound still has a dependency on $(\log\epsilon^{-1})^{O(d)}$, which is still exponential. (Note that $\log\epsilon^{-1}$ is roughly the number of bits needed to represent a floating-point number with accuracy $\epsilon$.) This makes it less motivated to study this function space.
- The approximation issue is not the main concern in deep learning theory. Deep neural networks merit attention not only because they are universal function approximators, but also, and more importantly, because they can be efficiently trained and can generalize well. The optimization and generalization issues are the main concerns, for which this paper does not provide much insight.
- This paper looks tangential to the community of ICLR. Maybe this paper is more suitable for venues in applied math or numerical analysis?
- The technical novelty is rather limited, and the techniques look to be rather limited to this specific problem of approximating the Korobov space. The upper bound construction seems to be based on a simple interpolation argument. It is strange why the previous paper (Montanelli & Du, 2019) did not achieve this bound. Maybe the authors could highlight the main technical challenges? The techniques in this paper seem to be ad-hoc for the Korobov space and do not provide much insight for the
- The authors should remove Appendices E-K when finalizing their paper.
ICLR
Title Shallow and Deep Networks are Near-Optimal Approximators of Korobov Functions Abstract In this paper, we analyze the number of neurons and training parameters that a neural network needs to approximate multivariate functions of bounded second mixed derivatives — Korobov functions. We prove upper bounds on these quantities for shallow and deep neural networks, drastically lessening the curse of dimensionality. Our bounds hold for general activation functions, including ReLU. We further prove that these bounds nearly match the minimal number of parameters any continuous function approximator needs to approximate Korobov functions, showing that neural networks are near-optimal function approximators.

1 INTRODUCTION

Neural networks have known tremendous success in many applications such as computer vision and pattern detection (Krizhevsky et al., 2017; Silver et al., 2016). A natural question is how to explain their practical success theoretically. Neural networks are known to be universal (Hornik et al., 1989): any Borel-measurable function can be approximated arbitrarily well by a neural network with a sufficient number of neurons. Furthermore, universality holds for as little as a 1-hidden-layer neural network with reasonable activation functions. However, these results do not specify the needed number of neurons and parameters to train. If these numbers are unreasonably high, the universality of neural networks would not explain their practical success.

We are interested in evaluating the number of neurons and training parameters needed to approximate a given function within $\epsilon$ with a neural network. An interesting question is how these numbers scale with $\epsilon$ and the dimensionality of the problem, i.e., the number of variables. Mhaskar (1996) showed that any function of the Sobolev space of order $r$ and dimension $d$ can be approximated within $\epsilon$ by a 1-layer neural network with $O(\epsilon^{-\frac{d}{r}})$ neurons and an infinitely differentiable activation function. This bound exhibits the curse of dimensionality: the number of neurons needed for an $\epsilon$-approximation scales exponentially in the dimension $d$ of the problem. Thus, Mhaskar's bound raises the question of whether this curse is inherent to neural networks. Towards answering this question, DeVore et al. (1989) proved that any continuous function approximator (see Section 5) that approximates all Sobolev functions of order $r$ and dimension $d$ within $\epsilon$ needs at least $\Theta(\epsilon^{-\frac{d}{r}})$ parameters. This result meets Mhaskar's bound and confirms that neural networks cannot escape the curse of dimensionality for the Sobolev space. A main question is then for which sets of functions neural networks can break this curse of dimensionality.

One way to circumvent the curse of dimensionality is to restrict considerably the considered space of functions and focus on specific structures adapted to neural networks. For example, Mhaskar et al. (2016) showed that compositional functions with regularity $r$ can be approximated within $\epsilon$ by deep neural networks with $O(d\,\epsilon^{-\frac{2}{r}})$ neurons. Other structural constraints have been considered for compositions of functions (Kohler & Krzyżak, 2016), piecewise smooth functions (Petersen & Voigtlaender, 2018; Imaizumi & Fukumizu, 2019), or structures on the data space, e.g., data lying on a manifold (Mhaskar, 2010; Nakada & Imaizumi, 2019; Schmidt-Hieber, 2019).
Approximation bounds have also been obtained for function approximation from data under smoothness constraints (Kohler & Krzyżak, 2005; Kohler & Mehnert, 2011), and specifically on mixed smooth Besov spaces, which are known to circumvent the curse of dimensionality (Suzuki, 2018). Another example is the class of Sobolev functions of order $d/\alpha$ and dimension $d$, for which Mhaskar's bound becomes $O(\epsilon^{-\alpha})$. Recently, Montanelli et al. (2019) considered bandlimited functions and showed that they can be approximated within $\epsilon$ by deep networks with depth $O((\log\frac{1}{\epsilon})^2)$ and $O(\epsilon^{-2}(\log\frac{1}{\epsilon})^2)$ neurons. Weinan et al. (2019) showed that the closure of the space of 2-layer neural networks with specific regularity (namely, a restriction on the size of the network's weights) is the Barron space. They further show that Barron functions can be approximated within $\epsilon$ by 2-layer networks with $O(\epsilon^{-2})$ neurons. A similar line of work restricts the function space with spectral conditions, to write functions as limits of shallow networks (Barron, 1994; Klusowski & Barron, 2016; 2018).

In this work, we are interested in more general and generic spaces of functions. Our space of interest is the space of multivariate functions of bounded second mixed derivatives, the Korobov space. This space is included in the Sobolev space but is reasonably large and general. The Korobov space presents two motivations. First, it is a natural candidate for a large and general space included in the Sobolev space where numerical approximation methods can overcome the curse of dimensionality to some extent (see Section 2.1). Second, Korobov spaces are practically useful for solving partial differential equations (Korobov, 1959) and have been used for high-dimensional function approximation (Zenger & Hackbusch, 1991; Zenger, 1991). Recently, Montanelli & Du (2019) showed that deep neural networks with depth $O(\log\frac{1}{\epsilon})$ and $O(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}+1})$ neurons can approximate Korobov functions within $\epsilon$, lessening the curse of dimensionality for deep neural networks asymptotically in $\epsilon$. While they used deep structures to prove their result, the question of whether shallow neural networks also break the curse of dimensionality for the Korobov space remained open. In this paper, we study deep and shallow neural networks' approximation power for the Korobov space and make the following contributions:

1. Representation power of shallow neural networks. We prove that any Korobov function can be approximated within $\epsilon$ by a 2-layer neural network with ReLU activation, $O(\epsilon^{-1}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}+1})$ neurons and $O(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ training parameters (Theorem 3.1). We further extend this result to a large class of commonly used activation functions (Theorem 3.4). Asymptotically in $\epsilon$, our bound can be written as $O(\epsilon^{-1-\delta})$ for all $\delta > 0$, and in that sense breaks the curse of dimensionality for shallow neural networks.

2. Representation power of deep neural networks. We show that any function of the Korobov space can be approximated within $\epsilon$ by a deep neural network of depth $\lceil\log_2(d)\rceil + 1$, independent of $\epsilon$, with non-linear $C^2$ activation function, $O(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ neurons and $O(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ training parameters (Theorem 4.1). This result improves that of Montanelli & Du (2019), who constructed an approximating neural network with larger depth $O(\log\frac{1}{\epsilon})$ — which grows as $\epsilon \to 0$ — and a larger number of neurons $O(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}+1})$. However, they used the ReLU activation function.
3. Near-optimality of neural networks as function approximators. Under the continuous function approximator model introduced by DeVore et al. (1989), we prove that any continuous function approximator needs $\Theta(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{d-1}{2}})$ parameters to approximate Korobov functions within $\epsilon$ (Theorem 5.2). This lower bound nearly matches our established upper bounds on the number of training parameters needed by deep and shallow neural networks to approximate functions of the Korobov space, proving that they are near-optimal function approximators of the Korobov space.

Table 1 summarizes our new bounds and existing bounds on shallow and deep neural network approximation power for the Korobov space, the Sobolev space, and bandlimited functions. Our proofs are constructive and give explicit structures to construct such neural networks with ReLU and general activation functions. Our constructions rely on the sparse grid approximation introduced by Zenger (1991) and studied in detail in Bungartz (1992); Bungartz & Griebel (2004). Specifically, we use the sparse grid approach to approximate smooth functions with sums of products, and then construct neural networks which approximate this structure. A key difficulty is to approximate the product function. In particular, in the case of shallow neural networks, we propose, to the best of our knowledge, the first architecture approximating the product function with a polynomial number of neurons. To derive our lower bound on the number of parameters needed to approximate the Korobov space, we construct a linear subspace of the Korobov space with large Bernstein width. This subspace is then used to apply a general lower bound on nonlinear approximation derived by DeVore et al. (1989).

The rest of the paper is structured as follows. In Section 2, we formalize our objective and introduce the sparse grids approach. In Section 3 (resp. 4), we prove our bounds on the number of neurons and training parameters for Korobov function approximation with shallow (resp. deep) networks. Finally, we formalize in Section 5 the notion of optimal continuous function approximators and prove our novel near-optimality result.

2 PRELIMINARIES

In this work, we consider feed-forward neural networks, using a linear output neuron and a non-linear activation function $\sigma : \mathbb{R}\to\mathbb{R}$ for the other neurons, such as the popular rectified linear unit (ReLU) $\sigma(x) = \max(x, 0)$, the sigmoid $\sigma(x) = (1 + e^{-x})^{-1}$, or the Heaviside function $\sigma(x) = \mathbb{1}_{\{x\ge 0\}}$. Let $d \ge 1$ be the dimension of the input. We define a 1-hidden-layer network with $N$ neurons as
$$x \mapsto \sum_{k=1}^N u_k\,\sigma(w_k^\top x + b_k),$$
where $u_k \in \mathbb{R}$, $w_k \in \mathbb{R}^d$ and $b_k \in \mathbb{R}$ for $k = 1, \dots, N$ are parameters. A neural network with several hidden layers is obtained by feeding the outputs of a given layer as inputs to the next layer. We study the expressive power of neural networks, i.e., the ability to approximate a target function $f : \mathbb{R}^d \to \mathbb{R}$ with as few neurons as possible, on the unit hyper-cube $\Omega := [0, 1]^d$. Another relevant metric is the number of parameters that need to be trained to approximate the function, i.e., the number of parameters of the approximating network ($u_k$, $w_k$ and $b_k$) that depend on the function to approximate. We adopt the $L^\infty$ norm as a measure of approximation error.
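For concreteness, here is a minimal NumPy sketch (ours, not from the paper) of the 1-hidden-layer model just defined:

```python
import numpy as np

def shallow_net(x, u, W, b, sigma=lambda t: np.maximum(t, 0.0)):
    """One-hidden-layer network x -> sum_k u_k * sigma(w_k . x + b_k).
    x: (d,) input; u: (N,) output weights; W: (N, d) weight matrix; b: (N,) biases."""
    return u @ sigma(W @ x + b)

rng = np.random.default_rng(0)
d, N = 3, 16
u, W, b = rng.normal(size=N), rng.normal(size=(N, d)), rng.normal(size=N)
print(shallow_net(rng.uniform(size=d), u, W, b))
```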
We now define some notation necessary to introduce our function spaces of interest. For an integer $r$, we denote by $C^r$ the space of one-dimensional functions that are differentiable $r$ times with continuous derivatives. In our analysis, we consider functions $f$ with bounded mixed derivatives. For a multi-index $\alpha \in \mathbb{N}^d$, the derivative of order $\alpha$ is
$$D^\alpha f := \frac{\partial^{|\alpha|_1} f}{\partial x_1^{\alpha_1}\cdots\partial x_d^{\alpha_d}},$$
where $|\alpha|_1 = \sum_{i=1}^d |\alpha_i|$. Two common function spaces on a compact $\Omega \subset \mathbb{R}^d$ are the Sobolev spaces $W^{r,p}(\Omega)$ of functions having weak partial derivatives up to order $r$ in $L^p(\Omega)$, and the Korobov spaces $X^{r,p}(\Omega)$ of functions vanishing at the boundary and having weak mixed derivatives up to order $r$ in each coordinate in $L^p(\Omega)$:
$$W^{r,p}(\Omega) = \{f \in L^p(\Omega) : D^\alpha f \in L^p(\Omega),\ |\alpha|_1 \le r\},$$
$$X^{r,p}(\Omega) = \{f \in L^p(\Omega) : f|_{\partial\Omega} = 0,\ D^\alpha f \in L^p(\Omega),\ |\alpha|_\infty \le r\},$$
where $\partial\Omega$ denotes the boundary of $\Omega$, and $|\alpha|_1 = \sum_{i=1}^d|\alpha_i|$ and $|\alpha|_\infty = \sup_{i=1,\dots,d}|\alpha_i|$ are respectively the $L^1$ and infinity norms. Note that the Korobov spaces $X^{r,p}(\Omega)$ are subsets of the Sobolev spaces $W^{r,p}(\Omega)$. For $p = \infty$, the usual norms on these spaces are given by
$$|f|_{W^{r,\infty}(\Omega)} := \max_{|\alpha|_1\le r}\|D^\alpha f\|_\infty,\qquad |f|_{X^{r,\infty}(\Omega)} := \max_{|\alpha|_\infty\le r}\|D^\alpha f\|_\infty.$$
For simplicity, we will write $|\cdot|_{2,\infty}$ for $|\cdot|_{X^{2,\infty}}$. We focus our analysis on approximating functions in the Korobov space $X^{2,\infty}(\Omega)$, for which the curse of dimensionality is drastically lessened, and we show that neural networks are near-optimal. Intuitively, a key difference compared to the Sobolev space is that Korobov functions do not have high-frequency oscillations in all directions at a time. Such functions may require an exponential number of neurons (Telgarsky, 2016) and are one of the main difficulties for Sobolev space approximation, which therefore exhibits the curse of dimensionality (DeVore et al., 1989). On the contrary, the Korobov space prohibits such behavior by ensuring that functions can be differentiated twice in each dimension simultaneously. Further discussions and concrete examples are given in Appendix A.
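As a hands-on illustration of the definition (our own example, not from the paper), the snippet below uses SymPy to list the derivatives required for Korobov membership in $d = 2$, i.e., all $D^\alpha f$ with $|\alpha|_\infty \le 2$, against the smaller Sobolev set $|\alpha|_1 \le 2$:

```python
from itertools import product
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
f = sp.sin(sp.pi * x1) * sp.sin(sp.pi * x2)   # vanishes on the boundary of [0,1]^2

korobov = [a for a in product(range(3), repeat=2) if sum(a) > 0]   # |alpha|_inf <= 2
sobolev = [a for a in korobov if sum(a) <= 2]                      # |alpha|_1  <= 2
print(f"{len(korobov)} Korobov derivatives vs {len(sobolev)} Sobolev derivatives")
for a in korobov:
    print(a, sp.simplify(sp.diff(f, x1, a[0], x2, a[1])))
```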
2.1 THE CURSE OF DIMENSIONALITY

We adopt the point of view of asymptotic results in $\epsilon$ (or, equivalently, in the number of neurons), which is a well-established setting in the neural network representation power literature (Mhaskar, 1996; Bungartz & Griebel, 2004; Yarotsky, 2017; Montanelli & Du, 2019) and the numerical analysis literature (Novak, 2006). In the rest of the paper, we use the $O$ notation, which hides constants in $d$. For each result, full dependencies in $d$ are provided in the appendix. Previous efforts to quantify the number of neurons needed to approximate large general classes of functions showed that neural networks and most classical functional approximation schemes exhibit the curse of dimensionality. For example, for Sobolev functions, Mhaskar proved the following approximation bound.

Theorem 2.1 (Mhaskar (1996)). Let $p, r \ge 1$, and let $\sigma : \mathbb{R}\to\mathbb{R}$ be an infinitely differentiable activation function, non-polynomial on any interval of $\mathbb{R}$. Let $\epsilon > 0$ be sufficiently small. For any $f \in W^{r,p}$, there exists a shallow neural network with one hidden layer, activation function $\sigma$, and $O(\epsilon^{-\frac{d}{r}})$ neurons, approximating $f$ within $\epsilon$ for the infinity norm.

Therefore, the approximation of Sobolev functions by neural networks suffers from the curse of dimensionality, since the number of neurons needed grows exponentially with the input space dimension $d$. This curse is not due to poor performance of neural networks, but rather to the choice of the Sobolev space. DeVore et al. (1989) proved that any learning algorithm with continuous parameters needs at least $\Theta(\epsilon^{-\frac{d}{r}})$ parameters to approximate the Sobolev space $W^{r,p}$. This shows that the class of Sobolev functions suffers inherently from the curse of dimensionality, and no continuous function approximator can overcome it. We detail this notion later in Section 5.

The natural question is whether there exists a reasonable and sufficiently large class of functions for which there is no inherent curse of dimensionality. Instead of the Sobolev space, we aim to add more regularity to overcome the curse of dimensionality while preserving a reasonably large space. The Korobov space $X^{2,\infty}(\Omega)$ — functions with bounded mixed derivatives — is a natural candidate: it is known in the numerical analysis community as a reasonably large space where numerical approximation methods can lessen the curse of dimensionality (Bungartz & Griebel, 2004). Korobov functions were introduced for solving partial differential equations (Korobov, 1959; Smolyak, 1963) and have since been used extensively for high-dimensional function approximation (Zenger & Hackbusch, 1991; Bungartz & Griebel, 2004). This space of functions is included in the Sobolev space, but it is still reasonably large, as the regularity condition concerns only second-order derivatives. Two questions are of interest. First, how many neurons and training parameters does a neural network need to approximate any Korobov function within $\epsilon$ in the $L^\infty$ norm? Second, how do neural networks perform compared to the optimal theoretical rates for Korobov spaces?

2.2 SPARSE GRIDS AND HIERARCHICAL BASIS

In this subsection, we introduce sparse grids, which will be key in our neural network constructions. These were introduced by Zenger (1991) and are extensively used for high-dimensional function approximation. We refer to Bungartz & Griebel (2004) for a thorough review of the topic. The goal is to define discrete approximation spaces with basis functions. Instead of a classical uniform grid partition of the hyper-cube $[0,1]^d$ involving $n^d$ components, where $n$ is the number of partitions in each coordinate, the sparse grid approach uses a smarter partitioning of the cube, preserving the approximation accuracy while drastically reducing the number of components of the grid. The construction involves a 1-dimensional mother function $\varphi$ which is used to generate all the functions of the basis. For example, a simple choice for the building block $\varphi$ is the standard hat function $\varphi(x) := (1 - |x|)_+$. The hat function is not the only possible choice. In the later proofs we will specify which mother function is used: in our case, either the interpolets of Deslauriers & Dubuc (1989) (which we define rigorously later in our proofs) or the hat function $\varphi$, which can be seen as the Deslaurier-Dubuc interpolet of order 1. These more elaborate mother functions enjoy more smoothness while essentially preserving the approximation power. Assume the mother function has support in $[-k, k]$. For $j = 1, \dots, d$, it can be used to generate a set of local functions $\varphi_{l_j,i_j} : [0,1] \to \mathbb{R}$ for all $l_j \ge 1$ and $1 \le i_j \le 2^{l_j} - 1$, with support $\left[\frac{i_j-k}{2^{l_j}}, \frac{i_j+k}{2^{l_j}}\right]$, as follows:
$$\varphi_{l_j,i_j}(x) := \varphi(2^{l_j}x - i_j),\quad x \in [0,1].$$
We then define a basis of $d$-dimensional functions by taking tensor products of these 1-dimensional functions. For all $l, i \in \mathbb{N}^d$ with $l \ge 1$ and $1 \le i \le 2^l - 1$, where $2^l$ denotes $(2^{l_1}, \dots, 2^{l_d})$, define $\varphi_{l,i}(x) := \prod_{j=1}^d \varphi_{l_j,i_j}(x_j)$ for $x \in \mathbb{R}^d$. For a fixed $l \in \mathbb{N}^d$, we consider the hierarchical increment space $W_l$, which is the subspace spanned by the functions $\{\varphi_{l,i} : 1 \le i \le 2^l - 1\}$, as illustrated in Figure 1:
$$W_l := \mathrm{span}\{\varphi_{l,i} :\ 1 \le i \le 2^l - 1,\ i_j \text{ odd for all } 1 \le j \le d\}.$$
Note that in the hierarchical increment $W_l$, all basis functions have disjoint supports. Also, Korobov functions $X^{2,p}(\Omega)$ can be expressed uniquely in this hierarchical basis. Precisely, there is a unique representation of $u \in X^{2,p}(\Omega)$ as $u(x) = \sum_{l,i} v_{l,i}\varphi_{l,i}(x)$, where the sum is taken over all multi-indices $l \ge 1$ and $1 \le i \le 2^l - 1$ where all components of $i$ are odd. In particular, all basis functions are linearly independent.
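Before truncating this expansion, here is a minimal sketch (ours; `phi` and `phi_li` are hypothetical names) of the hierarchical basis with the hat mother function:

```python
import numpy as np

def phi(x):
    """Hat mother function phi(x) = (1 - |x|)_+."""
    return np.maximum(1.0 - np.abs(x), 0.0)

def phi_li(x, l, i):
    """Tensor-product basis function phi_{l,i}(x) = prod_j phi(2^{l_j} x_j - i_j)."""
    x = np.atleast_2d(x)                       # shape (batch, d)
    out = np.ones(x.shape[0])
    for j, (lj, ij) in enumerate(zip(l, i)):
        out *= phi(2.0 ** lj * x[:, j] - ij)
    return out

# e.g. the basis function on level l = (2, 1) centered at i = (3, 1) in d = 2
x = np.array([[3 / 4, 1 / 2], [0.5, 0.5]])
print(phi_li(x, l=(2, 1), i=(3, 1)))           # peak value 1 at (3/4, 1/2); 0 off-support
```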
Notice that this sum is infinite; the objective is now to define a finite-dimensional subspace of $X^{2,p}(\Omega)$ that will serve as an approximation space. Sparse grids use a carefully chosen subset of the hierarchical basis functions to construct the approximation space
$$V_n^{(1)} := \bigoplus_{|l|_1\le n+d-1} W_l.$$
When $\varphi$ is the hat function, Bungartz & Griebel (2004) showed that this choice of approximating space leads to a good approximation error.

Theorem 2.2 (Bungartz & Griebel (2004)). Let $f \in X^{2,\infty}(\Omega)$ and let $f_n^{(1)}$ be the projection of $f$ on the subspace $V_n^{(1)}$. We have
$$\|f - f_n^{(1)}\|_\infty = O\left(2^{-2n}\,n^{d-1}\right).$$
Furthermore, if $v_{l,i}$ denotes the coefficient of $\varphi_{l,i}$ in the decomposition of $f_n^{(1)}$ in $V_n^{(1)}$, then we have the upper bound $|v_{l,i}| \le 2^{-d}\,2^{-2|l|_1}\,|f|_{2,\infty}$, for all $l, i \in \mathbb{N}^d$ with $|l|_1 \le n+d-1$ and $1 \le i \le 2^l - 1$, where $i$ has odd components.

3 THE REPRESENTATION POWER OF SHALLOW NEURAL NETWORKS

It has recently been shown that deep neural networks, with depth scaling with $\epsilon$, lessen the curse of dimensionality on the number of neurons needed to approximate the Korobov space (Montanelli & Du, 2019). However, to the best of our knowledge, the question of whether shallow neural networks with fixed universal depth — independent of $\epsilon$ and $d$ — escape the curse of dimensionality as well for the Korobov space remained open. We settle this question by proving that shallow neural networks also lessen the curse of dimensionality for the Korobov space.

Theorem 3.1. Let $\epsilon > 0$. For all $f \in X^{2,\infty}(\Omega)$, there exists a neural network with 2 layers, ReLU activation, $O(\epsilon^{-1}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}+1})$ neurons, and $O(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ training parameters that approximates $f$ within $\epsilon$ for the infinity norm.

In order to prove Theorem 3.1, we construct the approximating neural network explicitly. The first step is to construct a neural network architecture with two layers and $O(d^{\frac{3}{2}}\epsilon^{-\frac{1}{2}}\log\frac{1}{\epsilon})$ neurons that approximates the product function $p : x \in [0,1]^d \mapsto \prod_{i=1}^d x_i$ within $\epsilon$ for all $\epsilon > 0$.

Proposition 3.2. For all $\epsilon > 0$, there exists a neural network with depth 2, ReLU activation and $O(d^{\frac{3}{2}}\epsilon^{-\frac{1}{2}}\log\frac{1}{\epsilon})$ neurons, that approximates the product function $p : x \in [0,1]^d \mapsto \prod_{i=1}^d x_i$ within $\epsilon$ for the infinity norm.

Sketch of proof. The proof builds upon the observation that $p(x) = \exp(\sum_{i=1}^d\log x_i)$. We construct an approximating 2-layer neural network where the first layer approximates $\log x_i$ for $1 \le i \le d$, and the second layer approximates the exponential. We illustrate the construction in Figure 2. More precisely, fix $\epsilon > 0$. Consider the function $h_\epsilon : x \in [0,1] \mapsto \max(\log x, \log\epsilon)$. We approximate $h_\epsilon$ within $\frac{\epsilon}{d}$ by a piecewise affine function with $O(d^{\frac{1}{2}}\epsilon^{-\frac{1}{2}}\log\frac{1}{\epsilon})$ pieces, then represent this piecewise affine function by a single-layer neural network $\hat h_\epsilon$ with the same number of neurons as the number of pieces (Lemma B.1, Appendix B.1). This 1-layer network then has $\|h_\epsilon - \hat h_\epsilon\|_\infty \le \frac{\epsilon}{d}$. The first layer of our final network is the union of $d$ copies of $\hat h_\epsilon$: one for each dimension $i$, approximating $\log x_i$. Similarly, consider the exponential $g : x \in \mathbb{R}_- \mapsto e^x$. We construct a 1-layer neural network $\hat g_\epsilon$ with $O(\epsilon^{-\frac{1}{2}}\log\frac{1}{\epsilon})$ neurons with $\|g - \hat g_\epsilon\|_\infty \le \epsilon$. This will serve as the second layer. Formally, the constructed network $\hat p_\epsilon$ is $\hat p_\epsilon = \hat g_\epsilon\left(\sum_{i=1}^d \hat h_\epsilon(x_i)\right)$.
This 2-layer neural network has $O(d^{\frac{3}{2}}\epsilon^{-\frac{1}{2}}\log\frac{1}{\epsilon})$ neurons and verifies $\|\hat p_\epsilon - p\|_\infty \le \epsilon$.

We use this result to prove Theorem 3.1 and show that we can approximate any Korobov function $f \in X^{2,\infty}(\Omega)$ within $\epsilon$ with a 2-layer neural network of $O(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ neurons. Consider the sparse grid construction of the approximating space $V_n^{(1)}$, using the standard hat function as mother function to create the hierarchical basis $W_l$ (introduced in Section 2.2). The key idea is to construct a shallow neural network approximating the sparse grid approximation, and then use the result of Theorem 2.2 to derive the approximation error. Let $f_n^{(1)}$ be the projection of $f$ on the subspace $V_n^{(1)}$ defined in Section 2.2. $f_n^{(1)}$ can be written as $f_n^{(1)}(x) = \sum_{(l,i)\in U_n^{(1)}} v_{l,i}\,\varphi_{l,i}(x)$, where $U_n^{(1)}$ contains the indices $(l, i)$ of the basis functions present in $V_n^{(1)}$. We can use Theorem 2.2 and choose $n$ carefully such that $f_n^{(1)}$ approximates $f$ within $\epsilon$ in $L^\infty$ norm. The goal is now to approximate $f_n^{(1)}$ with a shallow neural network. Note that the basis functions can be written as products of univariate functions, $\varphi_{l,i} = \prod_{j=1}^d\varphi_{l_j,i_j}$. We can therefore use a structure similar to the product approximation of Proposition 3.2 to approximate the basis functions. Specifically, the first layer approximates the $d(2^n - 1) = O(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{d-1}{2}})$ terms $\log\varphi_{l_j,i_j}$ necessary to construct the basis functions of $V_n^{(1)}$, and a second layer approximates the exponential in order to obtain approximations of the $O(2^n n^{d-1}) = O(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ basis functions of $V_n^{(1)}$. We provide a detailed figure illustrating the construction, Figure 5 in Appendix B.3.

The shallow network that we constructed in Theorem 3.1 uses the ReLU activation function. We extend this result to a larger class of activation functions, which includes commonly used ones.

Definition 3.3. A sigmoid-like activation function $\sigma : \mathbb{R}\to\mathbb{R}$ is a non-decreasing function having finite limits in $\pm\infty$. A ReLU-like activation function $\sigma : \mathbb{R}\to\mathbb{R}$ is a function having a horizontal asymptote in $-\infty$, i.e., $\sigma$ is bounded on $\mathbb{R}_-$, and an affine (non-horizontal) asymptote in $+\infty$, i.e., there exists $b > 0$ such that $\sigma(x) - bx$ is bounded on $\mathbb{R}_+$.

Most common activation functions fall into these classes. Examples of sigmoid-like activations include the Heaviside, logistic, tanh, arctan and softsign activations, while ReLU-like activations include the ReLU, ISRLU, ELU and soft-plus activations. We extend Theorem 3.1 to all these activations.

Theorem 3.4. For any approximation tolerance $\epsilon > 0$ and any $f \in X^{2,\infty}(\Omega)$, there exists a neural network with depth 2 and $O(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ training parameters that approximates $f$ within $\epsilon$ for the infinity norm, with $O(\epsilon^{-1}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}+1})$ (resp. $O(\epsilon^{-\frac{3}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$) neurons for a ReLU-like (resp. sigmoid-like) activation.

We note that these results can further be extended to more general Korobov spaces $X^{r,p}$. Indeed, the main dependence of our neural network architectures on the parameters $r$ and $p$ arises from the sparse grid approximation. Bungartz & Griebel (2004) show that results similar to Theorem 2.2 can be extended to various values of $r$, $p$ and different error norms with a similar sparse grid construction. For instance, we can use these results, combined with our proposed architecture, to show that the Korobov space $X^{r,\infty}$ can be approximated in infinity norm by neural networks with $O(\epsilon^{-\frac{1}{r}}(\log\frac{1}{\epsilon})^{\frac{r+1}{r}(d-1)})$ training parameters and the same number of neurons up to a polynomial factor in $\epsilon$.
4 THE REPRESENTATION POWER OF DEEP NEURAL NETWORKS

Montanelli & Du (2019) used the sparse grid approach to construct deep neural networks with ReLU activation, approximating Korobov functions with $O(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}+1})$ neurons and depth $O(\log\frac{1}{\epsilon})$ for the $L^\infty$ norm. We improve this bound for deep neural networks with $C^2$ non-linear activation functions. We prove that we only need $O(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ neurons and fixed depth, independent of $\epsilon$, to approximate the unit ball of the Korobov space within $\epsilon$ in the $L^\infty$ norm.

Theorem 4.1. Let $\sigma \in C^2$ be a non-linear activation function. Let $\epsilon > 0$. For any function $f \in X^{2,\infty}(\Omega)$, there exists a neural network of depth $\lceil\log_2 d\rceil + 1$, with ReLU activation on the first layer and activation function $\sigma$ for the next layers, $O(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ neurons, and $O(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ training parameters, approximating $f$ within $\epsilon$ for the infinity norm.

Compared to the bound for shallow networks in Theorem 3.1, the number of neurons for deep networks is lower by a factor $O(\sqrt{\epsilon})$, while the number of training parameters is the same. Hence, deep neural networks are more efficient than shallow neural networks in the sense that shallow networks need more "inactive" neurons to reach the same approximation power, but have the same number of parameters. This gap in the number of "inactive" neurons can be significant in practice, as we may not know exactly which neurons to train and which neurons to fix. This new bound on the number of parameters and neurons matches the approximation power of sparse grids. In fact, sparse grids use $\Theta(\epsilon^{-\frac{1}{2}}(\log\frac{1}{\epsilon})^{\frac{3(d-1)}{2}})$ parameters (the weights of the basis functions) to approximate Korobov functions within $\epsilon$. Our construction in Theorem 4.1 shows that deep neural networks with depth fixed in $\epsilon$ can fully encode sparse grid approximators. Neural networks are therefore more powerful function approximators. In particular, any sparse grid approximation using $O(N(\epsilon))$ parameters can be represented exactly by a neural network using $O(N(\epsilon))$ neurons.

The deep approximating network (see Figure 3) has a structure very similar to our construction of an approximating shallow network in Theorem 3.1. The main difference lies in the approximation of the product function. Instead of using a 2-layer neural network, we now use a deep network. The following result shows that deep neural networks can represent exactly the product function.

Proposition 4.2 (Lin et al. (2017), Appendix A). Let $\sigma$ be a $C^2$ non-linear activation function. For any approximation error $\epsilon > 0$, there exists a neural network with $\lceil\log_2 d\rceil$ hidden layers and activation $\sigma$, using at most $8d$ neurons arranged in a binary tree network, that approximates the product function $\prod_{i=1}^d x_i$ on $[0,1]^d$ within $\epsilon$ for the infinity norm.

An important remark is that the structure of the constructed neural network is independent of $\epsilon$. In particular, the depth and number of neurons are independent of the approximation precision $\epsilon$, which we refer to as exact approximation. It is known that an exponential number of neurons is needed in order to exactly approximate the product function with a 1-layer neural network (Lin et al., 2017); however, the question of whether one could approximate the product with a shallow network and a polynomial number of neurons remained open. In Proposition 3.2, we answer this question positively by constructing an $\epsilon$-approximating neural network of depth 2 with ReLU activation and $O(d^{\frac{3}{2}}\epsilon^{-\frac{1}{2}}\log\frac{1}{\epsilon})$ neurons. Using the same ideas as in Theorem 3.4, we can generalize this result to obtain an $\epsilon$-approximating neural network of depth 2 with $O(d^{\frac{3}{2}}\epsilon^{-\frac{1}{2}}\log\frac{1}{\epsilon})$ neurons for a ReLU-like activation, or $O(d^{2}\epsilon^{-1}\log\frac{1}{\epsilon})$ neurons for a sigmoid-like activation.
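A minimal numerical sketch of the binary-tree idea behind Proposition 4.2 (our own illustration, not the construction of Lin et al. (2017) verbatim): with a $C^2$ activation whose second derivative does not vanish at 0 — we use softplus, for which $\sigma''(0) = 1/4$ — a pairwise product is recovered from the second-order identity $xy \approx \frac{\sigma(\lambda(x+y)) - \sigma(\lambda x) - \sigma(\lambda y) + \sigma(0)}{\sigma''(0)\,\lambda^2}$, and a binary tree of such gadgets multiplies $d$ inputs in depth $\lceil\log_2 d\rceil$ with $O(d)$ neurons.

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def pair_product(x, y, lam=1e-3):
    """Approximate x*y with a few softplus neurons via a second-order Taylor
    expansion around 0 (error vanishes as lam -> 0)."""
    sig2_at_0 = 0.25  # softplus''(0) = sigmoid(0) * (1 - sigmoid(0))
    return (softplus(lam * (x + y)) - softplus(lam * x)
            - softplus(lam * y) + softplus(0.0)) / (sig2_at_0 * lam ** 2)

def tree_product(xs, lam=1e-3):
    """Binary tree of pairwise products: depth ceil(log2 d), O(d) neurons."""
    xs = list(xs)
    while len(xs) > 1:
        nxt = [pair_product(xs[i], xs[i + 1], lam) for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:            # odd element passes through to the next level
            nxt.append(xs[-1])
        xs = nxt
    return xs[0]

x = [0.9, 0.8, 0.7, 0.6, 0.5]
print(tree_product(x), np.prod(x))   # both close to 0.1512
```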
Using the same ideas as in Theorem 3.4, we can generalize this result to obtain an ε-approximating neural network of depth 2 with O(d^{3/2} ε^{−1/2} log(1/ε)) neurons for a ReLU-like activation, or O(d² ε^{−1} log(1/ε)) neurons for a sigmoid-like activation.

5 NEURAL NETWORKS ARE NEAR-OPTIMAL FUNCTION APPROXIMATORS

In the previous sections, we proved upper bounds on the number of neurons and training parameters needed by deep and shallow neural networks to approximate the Korobov space X^{2,∞}(Ω). We now investigate how good the performance of neural networks is as function approximators. We prove a lower bound on the number of parameters needed by any continuous function approximator to approximate the Korobov space. In particular, neural networks, deep and shallow, nearly match this lower bound, making them near-optimal function approximators. Let us first formalize the notion of continuous function approximators, following the framework of DeVore et al. (1989). For any Banach space X—e.g., a function space—and a subset K ⊂ X of elements to approximate, we define a continuous function approximator with N parameters as a continuous parametrization a : K → ℝ^N together with a reconstruction scheme, which is an N-dimensional manifold M_N : ℝ^N → X. For any element f ∈ K, the approximation given is M_N(a(f)): the parametrization a is derived continuously from the function f and then given as input to the reconstruction manifold, which outputs an approximation function in X. The error of this function approximator is defined as E_{N,a,M_N}(K)_X := sup_{f∈K} ‖f − M_N(a(f))‖_X. The best function approximator for the space K minimizes this error. The minimal error for the space K is given by E_N(K)_X = min_{a,M_N} E_{N,a,M_N}(K)_X. In other terms, a continuous function approximator with N parameters cannot hope to approximate K better than within E_N(K)_X. A class of function approximators is a set of function approximators with a given structure. For example, neural networks with continuous parametrizations are a class of function approximators where the number of parameters is the number of training parameters. We say that a class of function approximators is optimal for the space of functions K if it matches this minimal error asymptotically in N, within a constant multiplicative factor. In other words, the number of parameters needed by the class to approximate functions in K within ε matches asymptotically, within a constant, the least number of parameters N needed to satisfy E_N(K)_X ≤ ε. The norm considered in the approximation of the functions of K is the norm associated to the space X. DeVore et al. (1989) showed that this minimal error E_N(K)_X is lower bounded by the Bernstein width of the subset K ⊂ X, defined as b_N(K)_X := sup_{X_{N+1}} sup{ρ : ρ U(X_{N+1}) ⊂ K}, where the outer sup is taken over all (N + 1)-dimensional linear subspaces X_{N+1} of X, and U(Y) denotes the unit ball of Y for any linear subspace Y of X.

Theorem 5.1 (DeVore et al. (1989)). Let X be a Banach space and K ⊂ X. Then E_N(K)_X ≥ b_N(K)_X.

We prove a lower bound on the least number of parameters any class of continuous function approximators needs to approximate functions of the Korobov space.

Theorem 5.2. Take X = L^∞(Ω) and K = {f ∈ X^{2,∞}(Ω) : |f|_{X^{2,∞}(Ω)} ≤ 1} the unit ball of the Korobov space. Then, there exists c > 0 with E_N(K)_X ≥ c (log N)^{d−1} / N². Equivalently, for ε > 0, a continuous function approximator approximating K within ε in L^∞ norm uses at least Θ(ε^{−1/2} (log(1/ε))^{(d−1)/2}) parameters.
Sketch of proof. We seek an appropriate subspace X_{N+1} in order to lower bound the Bernstein width b_N(K)_X, which in turn provides a lower bound on the approximation error (Theorem 5.1). To do so, we use the Deslaurier-Dubuc interpolet of degree 2, φ^{(2)} (see Figure 6), which is C². Using the sparse grids approach, we construct a hierarchical basis in X^{2,∞}(Ω) using φ^{(2)} as mother function and define X_{N+1} as the approximation space V_n^{(1)}. Here n is chosen such that the dimension of V_n^{(1)} is roughly N + 1. The goal is to estimate sup{ρ : ρ U(X_{N+1}) ⊂ K}, which will lead to a bound on b_N(K)_X. To do so, we upper bound the Korobov norm by the L^∞ norm for elements of X_{N+1}. Any function u ∈ X_{N+1} can be written u = Σ_{l,i} v_{l,i} φ_{l,i}. Using a stencil representation of the coefficients v_{l,i}, we are able to obtain an upper bound |u|_{X^{2,∞}} ≤ Γ_d ‖u‖_∞ where Γ_d = O(2^{2n} n^{d−1}). Then, b_N(K)_X ≥ 1/Γ_d, which yields the desired bound.

This lower bound matches, within a logarithmic factor, the upper bound on the number of training parameters needed by deep and shallow neural networks to approximate the Korobov space within ε: O(ε^{−1/2} (log(1/ε))^{3(d−1)/2}) (Theorem 3.1 and Theorem 4.1). It exhibits the same exponential dependence in d with base log(1/ε) and the same main dependence on ε of ε^{−1/2}. Note that the upper and lower bounds can both be rewritten as O(ε^{−1/2−δ}) for all δ > 0. Moreover, our constructions in Theorem 3.1 and Theorem 4.1 are continuous, which comes directly from the continuity of the sparse grid parameters (see the bound on v_{l,i} in Theorem 2.2). Our bounds therefore prove that deep and shallow neural networks are near-optimal classes of function approximators for the Korobov space. Interestingly, the subspace X_{N+1} our proof uses to show the lower bound is essentially the same as the subspace we use to approximate Korobov functions in our proofs of the upper bounds (Theorems 3.1 and 4.1). The difference is in the choice of the interpolet φ used to construct the basis functions: degree 2 for the former (which provides the regularity needed for the proof), and 1 for the latter.

6 CONCLUSION AND DISCUSSION

We proved new upper and lower bounds on the number of neurons and training parameters needed by shallow and deep neural networks to approximate Korobov functions. Our work shows that shallow and deep networks not only lessen the curse of dimensionality but are also near-optimal. Our work suggests several extensions. First, it would be very interesting to see if our proposed theoretical near-optimal architectures have powerful empirical performance. While commonly used structures (e.g., Convolutional Neural Networks or Recurrent Neural Networks) are motivated by properties of the data such as symmetries, our structures are motivated by theoretical insights on how to optimally approximate a large class of functions with a given number of neurons and parameters. Second, our upper bounds (Theorems 3.1 and 4.1) nearly match our lower bound (Theorem 5.2) on the least number of training parameters needed to approximate the Korobov space. We wonder if it is possible to close the gap between these bounds and hence prove neural networks' optimality; e.g., one could prove that sparse grids are optimal function approximators by improving our lower bound to match the sparse grid number of parameters, O(ε^{−1/2} (log(1/ε))^{3(d−1)/2}). Finally, we showed the near-optimality of neural networks among the set of continuous function approximators.
It would be interesting to explore lower bounds (analogous to Theorem 5.2) when considering larger sets of function approximators, e.g., discontinuous function approximators. Could some discontinuous neural network construction break the curse of dimensionality for the Sobolev space? The question is then whether neural networks are still near-optimal in these larger sets of function approximators.

ACKNOWLEDGMENTS

The authors are grateful to Tomaso Poggio and the MIT 6.520 course teaching staff for several discussions, remarks and comments that were useful to this work.

APPENDIX

A ON KOROBOV FUNCTIONS

In this section, we further discuss Korobov functions X^{2,p}(Ω). Korobov functions enjoy more smoothness than Sobolev functions: smoothness for X^{2,p}(Ω) is measured in terms of mixed derivatives of order two. Korobov functions X^{2,p}(Ω) can be differentiated twice in each coordinate simultaneously, while Sobolev functions W^{2,p}(Ω) can only be differentiated twice in total. For example, in two dimensions, for a function f to be Korobov it is required to have ∂f/∂x₁, ∂f/∂x₂, ∂²f/∂x₁², ∂²f/∂x₂², ∂²f/∂x₁∂x₂, ∂³f/∂x₁²∂x₂, ∂³f/∂x₁∂x₂², ∂⁴f/∂x₁²∂x₂² ∈ L^p(Ω), while for f to be Sobolev it requires only ∂f/∂x₁, ∂f/∂x₂, ∂²f/∂x₁², ∂²f/∂x₂², ∂²f/∂x₁∂x₂ ∈ L^p(Ω). The former can be seen from |α|_∞ ≤ 2 and the latter from |α|₁ ≤ 2 in the definitions of X^{r,p}(Ω) and W^{r,p}(Ω).

We now provide intuition on why Korobov functions are easier to approximate. One of the key difficulties in approximating Sobolev functions is possible high-frequency oscillations, which may require an exponential number of neurons (Telgarsky, 2016). For instance, consider functions which have a similar structure to W_{(n,…,n)} (defined in Subsection 2.2): for any smooth basis function φ with support on the unit cube (see Figure 6 for an example), consider the linear function space formed by linear combinations of dilations of φ with support on each cube of a d-dimensional grid of step 2^{−n}. This corresponds exactly to the construction of W_{(n,…,n)}, which uses the product of hat functions on each dimension as basis function φ. This function space can have strong oscillations in all directions at a time. The Korobov space prohibits such behavior by ensuring that functions can be differentiated twice on each dimension simultaneously. As a result, functions cannot oscillate in all directions at a time without having a large Korobov norm. We end this paragraph by comparing the Korobov space to the space of bandlimited functions, which was shown to avoid the curse of dimensionality (Montanelli et al., 2019). These are functions whose frequency support is restricted to a fixed compact set. Intuitively, approximating these functions can be achieved because the set of frequencies is truncated to a compact set, which then allows one to sample frequencies and obtain approximation guarantees. Instead of imposing the hard constraint of cutting high frequencies, the Korobov space asks for smoothness conditions which do not prohibit high frequencies but rather impose a budget for high-frequency oscillations. We make this idea precise in the next example. A concrete example of Korobov functions is given by an analogue of the function space V_n^{(1)} which we used as approximation space in the proofs of our results (see Section 2.2). Similarly to the previous paragraph, one should use a smooth basis function to ensure differentiability. Recall that V_n^{(1)} is defined as V_n^{(1)} := ⊕_{|l|₁ ≤ n+d−1} W_l.
Intuitively, this approximation space introduces a “budget” of oscillations across all dimensions through the constraint Σ_{i=1}^d l_i ≤ n + d − 1. As a result, dilations of the basis function can only occur in a restricted set of directions at a time, which ensures that the Korobov norm stays bounded.

B PROOFS OF SECTION 3

B.1 APPROXIMATING THE PRODUCT FUNCTION

In this subsection, we construct a neural network architecture with two layers and O(d^{3/2} ε^{−1/2} log(1/ε)) neurons that approximates the product function p : x ∈ [0, 1]^d ↦ ∏_{i=1}^d x_i within ε for all ε > 0, which proves Proposition 3.2. We first prove a simple lemma to represent univariate piece-wise affine functions by shallow neural networks.

Lemma B.1. Any one-dimensional continuous piece-wise affine function with m pieces is representable exactly by a shallow neural network with ReLU activation, with m neurons on a single layer.

Proof. This is a simple consequence of Proposition 1 in Yarotsky (2017). We recall the proof for completeness. Let x₁ ≤ ··· ≤ x_{m−1} be the subdivision of the piece-wise affine function f. We use a neural network of the form

g(x) := f(x₁) + Σ_{k=1}^{m−1} w_k (x − x_k)₊ − w₀ (x₁ − x)₊,

where w₀ is the slope of f on the piece ≤ x₁, w₁ is the slope of f on the piece [x₁, x₂],

w_k = (f(x_{k+1}) − f(x₁) − Σ_{i=1}^{k−1} w_i (x_{k+1} − x_i)) / (x_{k+1} − x_k)

for k = 1, ··· , m−2, and w_{m−1} = w̃ − Σ_{k=1}^{m−2} w_k where w̃ is the slope of f on the piece ≥ x_{m−1}. Notice that f and g coincide on all x_k for 1 ≤ k ≤ m − 1. Furthermore, g has the same slope as f on each piece; therefore, g = f.

We can approximate univariate right-continuous functions by piece-wise affine functions, and then use Lemma B.1 to represent them by shallow neural networks. The following lemma shows that O(ε^{−1}) neurons are sufficient to represent an increasing right-continuous function with a shallow neural network.

Lemma B.2. Let f : I −→ [c, d] be a right-continuous increasing function where I is an interval, and let ε > 0. There exists a shallow neural network with ReLU activation, with ⌈(d−c)/ε⌉ neurons on a single layer, that approximates f within ε for the infinity norm.

Proof. Let m = ⌊(d−c)/ε⌋. Define a subdivision of the image interval, c ≤ y₁ ≤ … ≤ y_m ≤ d, where y_k = c + kε for k = 1, …, m. Note that this subdivision contains exactly ⌈(d−c)/ε⌉ pieces. Now define a subdivision of I, x₁ ≤ x₂ ≤ … ≤ x_m, by x_k := sup{x ∈ I : f(x) ≤ y_k}, for k = 1, …, m. This subdivision still has ⌈(d−c)/ε⌉ pieces. We now construct our approximation function f̂ on I as the continuous piece-wise affine function on the subdivision x₁ ≤ … ≤ x_m such that f̂(x_k) = y_k for all 1 ≤ k ≤ m, and f̂ is constant before x₁ and after x_m (see Figure 4). Let x ∈ I.
• If x ≤ x₁, because f is increasing and right-continuous, c ≤ f(x) ≤ f(x₁) ≤ y₁ = c + ε. Therefore |f(x) − f̂(x)| = |f(x) − (c + ε)| ≤ ε.
• If x_k < x ≤ x_{k+1}, we have y_k < f(x) ≤ f(x_{k+1}) ≤ y_{k+1}. Further note that y_k ≤ f̂(x) ≤ y_{k+1}. Therefore |f(x) − f̂(x)| ≤ y_{k+1} − y_k = ε.
• If x_m < x, then y_m < f(x) ≤ d. Again, |f(x) − f̂(x)| = |f(x) − y_m| ≤ d − y_m ≤ ε.
Therefore ‖f − f̂‖_∞ ≤ ε. We can now use Lemma B.1 to end the proof.

If the function to approximate has some regularity, the number of neurons needed for approximation can be significantly reduced. In the following lemma, we show that O(ε^{−1/2}) neurons are sufficient to approximate a C² univariate function with a shallow neural network.

Lemma B.3. Let f : [a, b] −→ [c, d] be C², and let ε > 0.
There exists a shallow neural network with ReLU activation, with (1/√(2ε)) · min(∫√|f''| · (1 + µ(f, ε)), (b − a)√‖f''‖_∞) neurons on a single layer, where µ(f, ε) → 1 as ε → 0, that approximates f within ε for the infinity norm.

Proof. See Appendix B.2.1.

We will now use the ideas of Lemma B.2 and Lemma B.3 to approximate a truncated log function, which we will use in the construction of our neural network approximating the product.

Corollary B.4. Let ε > 0 be sufficiently small and δ > 0. Consider the truncated logarithm function log : [δ, 1] −→ ℝ. There exists a shallow neural network with ReLU activation, with O(ε^{−1/2} log(1/δ)) neurons on a single layer, that approximates it within ε for the infinity norm.

Proof. See Appendix B.2.2.

We are now ready to construct a neural network approximating the product function and prove Proposition 3.2. The proof builds upon the observation that ∏_{i=1}^d x_i = exp(Σ_{i=1}^d log x_i). We construct an approximating 2-layer neural network where the first layer computes log x_i for 1 ≤ i ≤ d, and the second layer computes the exponential. We illustrate the construction of the proof in Figure 2.

Proof of Proposition 3.2. Fix ε > 0. Consider the function h : x ∈ [0, 1] ↦ max(log x, log ε) ∈ [log ε, 0]. Using Corollary B.4, there exists a neural network ĥ_ε : [0, 1] −→ [log ε, 0] with 1 + ⌈d^{1/2} ε^{−1/2} log(1/ε)⌉ neurons on a single layer such that ‖h − ĥ_ε‖_∞ ≤ ε/d. Indeed, one can take the (ε/d)-approximation of h : x ∈ [ε, 1] ↦ log x ∈ [log ε, 0], then extend this function to [0, ε] with a constant equal to log ε. The resulting piece-wise affine function has one additional segment, corresponding to one additional neuron in the approximating network. Similarly, consider the exponential g : x ∈ ℝ₋ ↦ e^x ∈ [0, 1]. Because g is C² and increasing, we can use Lemma B.3 to construct a neural network ĝ_ε : ℝ₋ −→ [0, 1] with 1 + ⌈(1/√(2ε)) log(1/ε)⌉ neurons on a single layer such that ‖g − ĝ_ε‖_∞ ≤ ε. Indeed, again one can take the ε-approximation of g : x ∈ [log ε, 0] ↦ e^x ∈ [0, 1], then extend this function to (−∞, log ε] with a constant equal to ε. The corresponding neural network has one additional neuron. We construct our final neural network φ̂_ε (see Figure 2) as

φ̂_ε = ĝ_ε(Σ_{i=1}^d ĥ_ε(x_i)).

Note that φ̂_ε can be represented as a 2-layer neural network: the first layer is composed of the union of the 1 + ⌈d^{1/2} ε^{−1/2} log(1/ε)⌉ neurons composing each of the 1-layer neural networks ĥ_ε^i : x ∈ [0, 1]^d ↦ ĥ_ε(x_i) ∈ ℝ for each dimension i ∈ {1, …, d}. The second layer is composed of the 1 + ⌈(1/√(2ε)) log(1/ε)⌉ neurons of ĝ_ε. Hence, the constructed neural network φ̂_ε has O(d^{3/2} ε^{−1/2} log(1/ε)) neurons. Let us now analyze the approximation error. Let x ∈ [0, 1]^d. For the sake of brevity, denote ŷ = Σ_{i=1}^d ĥ_ε(x_i) and y = Σ_{i=1}^d log(x_i). We have

|φ̂_ε(x) − p(x)| ≤ |φ̂_ε(x) − exp(ŷ)| + |exp(ŷ) − exp(y)| ≤ ε + ∏_{i=1}^d x_i · |exp(ŷ − y) − 1|,

where we used the fact that |φ̂_ε(x) − exp(ŷ)| = |ĝ_ε(ŷ) − g(ŷ)| ≤ ‖ĝ_ε − g‖_∞ ≤ ε. First suppose that x ≥ ε component-wise. In this case, for all i ∈ {1, …, d} we have |ĥ_ε(x_i) − log(x_i)| = |ĥ_ε(x_i) − h(x_i)| ≤ ε/d. Then |ŷ − y| ≤ ε. Consequently, |φ̂_ε(x) − p(x)| ≤ ε + max(|e^ε − 1|, |e^{−ε} − 1|) ≤ 3ε for ε > 0 sufficiently small. Without loss of generality, now suppose x₁ ≤ ε. Then ŷ ≤ ĥ_ε(x₁) ≤ log ε, so by definition of ĝ_ε we have 0 ≤ φ̂_ε(x) = ĝ_ε(ŷ) ≤ exp(log ε) = ε. Also, 0 ≤ p(x) ≤ ε, so finally |φ̂_ε(x) − p(x)| ≤ ε.

Remark B.5. Note that using Lemma B.2 instead of Lemma B.3 to construct approximating shallow networks for log and exp would yield approximation functions ĥ with O(⌈(d/ε) log(1/ε)⌉) neurons and ĝ with O(⌈1/ε⌉) neurons.
Therefore, the corresponding neural network would approximate the product p with O(d² ε^{−1} log(1/ε)) neurons.

B.2 MISSING PROOFS OF SECTION B.1

B.2.1 PROOF OF LEMMA B.3

Proof. As in the proof of Lemma B.2, the goal is to approximate f by a piece-wise affine function f̂ defined on a subdivision x₀ = a ≤ x₁ ≤ … ≤ x_m ≤ x_{m+1} = b such that f and f̂ coincide on x₀, …, x_{m+1}. We first analyze the error induced by a linear approximation of the function on each piece. Let x ∈ [u, v] for u, v ∈ I. Using the mean value theorem, there exists α_x ∈ [u, x] such that f(x) − f(u) = f'(α_x)(x − u), and β_x ∈ [x, v] such that f(v) − f(x) = f'(β_x)(v − x). Combining these two equalities, we get

f(x) − f(u) − (x − u)·(f(v) − f(u))/(v − u)
  = [(v − x)(f(x) − f(u)) − (x − u)(f(v) − f(x))]/(v − u)
  = (x − u)(x − v)·(f'(β_x) − f'(α_x))/(v − u)
  = (x − u)(x − v)·(∫_{α_x}^{β_x} f''(t) dt)/(v − u).

Hence,

f(x) = f(u) + (x − u)·(f(v) − f(u))/(v − u) + (x − u)(x − v)·(∫_{α_x}^{β_x} f''(t) dt)/(v − u).   (1)

We now apply this result to bound the approximation error on each piece of the subdivision. Let k ∈ [m]. Recall that f̂ is linear on the piece [x_k, x_{k+1}] with f̂(x_k) = f(x_k) and f̂(x_{k+1}) = f(x_{k+1}). Hence, for all x ∈ [x_k, x_{k+1}], f̂(x) = f(x_k) + (x − x_k)·(f(x_{k+1}) − f(x_k))/(x_{k+1} − x_k). Using Equation (1) with u = x_k and v = x_{k+1}, we get

‖f − f̂‖_{∞,[x_k,x_{k+1}]} ≤ sup_{x∈[x_k,x_{k+1}]} |(x − x_k)(x_{k+1} − x)·(∫_{α_x}^{β_x} f''(t) dt)/(x_{k+1} − x_k)| ≤ (1/2)(x_{k+1} − x_k) ∫_{x_k}^{x_{k+1}} |f''(t)| dt ≤ (1/2)(x_{k+1} − x_k)² ‖f''‖_{∞,[x_k,x_{k+1}]}.

Therefore, using a regular subdivision with step √(2ε/‖f''‖_∞) yields an ε-approximation of f with ⌈(b − a)√‖f''‖_∞/√(2ε)⌉ pieces. We now show that for any µ > 0, there exists an ε-approximation of f with at most (∫√|f''|/√(2ε))·(1 + µ) pieces. To do so, we use the fact that the upper Riemann sum of √f'' converges to its integral, since √f'' is continuous on [a, b]. First define a partition a = X₀ ≤ … ≤ X_K = b of [a, b] such that the upper Riemann sum R(√f'') on this subdivision satisfies R(√f'') ≤ (1 + µ/2) ∫_a^b √f''. Now define on each interval I_k of the partition a regular subdivision with step √(2ε/‖f''‖_{∞,I_k}), as before. Finally, consider the subdivision formed by the union of all these subdivisions, and construct the approximation f̂ on this final subdivision. By construction, ‖f − f̂‖_∞ ≤ ε because the inequality holds on each piece of the subdivision. Further, the number of pieces is

Σ_{i=0}^{K−1} (1 + (X_{i+1} − X_i)·sup_{[X_i,X_{i+1}]} √f'' / √(2ε)) = R(√f'')/√(2ε) + K ≤ (∫√|f''|/√(2ε))·(1 + µ),

for ε > 0 small enough. Using Lemma B.1, we can complete the proof.

B.2.2 PROOF OF COROLLARY B.4

Proof. In view of Lemma B.3, the goal is to show that we can remove the dependence of µ(f, ε) on δ. This essentially comes from the fact that the upper Riemann sum behaves well for approximating log. Consider the subdivision x₀ := δ ≤ x₁ ≤ … ≤ x_m ≤ x_{m+1} := 1 with m = ⌊(1/ε̃) log(1/δ)⌋, where ε̃ := log(1 + √(2ε)), such that x_k = e^{log δ + kε̃} for k = 0, …, m − 1. Denote by f̂ the corresponding piece-wise affine approximation. Similarly to the proof of Lemma B.3, for k = 0, …, m − 1,

‖log − f̂‖_{∞,[x_k,x_{k+1}]} ≤ (1/2)(x_{k+1} − x_k)² ‖f''‖_{∞,[x_k,x_{k+1}]} ≤ (e^{ε̃} − 1)²/2 ≤ ε.

The proof follows.

B.3 PROOF OF THEOREM 3.1: APPROXIMATING THE KOROBOV SPACE X^{2,∞}(Ω)

In this subsection, we prove Theorem 3.1 and show that we can approximate any Korobov function f ∈ X^{2,∞}(Ω) within ε with a 2-layer neural network of O(ε^{−1/2} (log(1/ε))^{3(d−1)/2}) neurons. We illustrate the construction in Figure 5.
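Before turning to that construction, the step rule proved above (Lemma B.3) is easy to check numerically. The sketch below interpolates the exponential on [log ε, 0]—the case used in Proposition 3.2—on a regular grid of step √(2ε/‖f''‖_∞) and verifies that the uniform error is below ε; `np.interp` again stands in for the ReLU network of Lemma B.1.

```python
import numpy as np

# Empirical check of the step rule in Lemma B.3: piece-wise linear
# interpolation of a C^2 function on a regular grid with step
# h = sqrt(2 * eps / ||f''||_inf) has uniform error at most eps.
# We use f = exp on [log eps, 0], the case needed in Proposition 3.2.
eps, f2max = 1e-3, 1.0                      # ||exp''||_inf <= 1 on R_-
a, b = np.log(eps), 0.0
h = np.sqrt(2 * eps / f2max)
knots = np.linspace(a, b, int(np.ceil((b - a) / h)) + 1)
xs = np.linspace(a, b, 200001)
err = np.max(np.abs(np.interp(xs, knots, np.exp(knots)) - np.exp(xs)))
print(f"{len(knots) - 1} pieces, max error {err:.2e} <= eps = {eps:.0e}")
```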
Our proof combines the constructed network approximating the product function with a decomposition of f as a sum of separable functions, i.e. a decomposition of the form

f(x) ≈ Σ_{k=1}^K ∏_{j=1}^d φ_j^{(k)}(x_j), ∀x ∈ [0, 1]^d.

Consider the sparse grid construction of the approximating space V_n^{(1)} using the standard hat function as mother function to create the hierarchical basis W_l (introduced in Section 2.2). We recall that the approximation space is defined as V_n^{(1)} := ⊕_{|l|₁ ≤ n+d−1} W_l. We will construct a neural network approximating the sparse grid approximation and then use the result of Theorem 2.2 to derive the approximation error. Figure 5 gives an illustration of the construction. Let f_n^{(1)} be the projection of f on the subspace V_n^{(1)}. f_n^{(1)} can be written as f_n^{(1)}(x) = Σ_{(l,i)∈U_n^{(1)}} v_{l,i} φ_{l,i}(x), where U_n^{(1)} contains the indices (l, i) of the basis functions present in V_n^{(1)}, i.e.

U_n^{(1)} := {(l, i) : |l|₁ ≤ n + d − 1, 1 ≤ i ≤ 2^l − 1, i_j odd for all 1 ≤ j ≤ d}.   (2)

Throughout the proof, we explicitly construct a neural network that uses this decomposition to approximate f_n^{(1)}. We then use Theorem 2.2 and choose n carefully such that f_n^{(1)} approximates f within ε for the L^∞ norm. Note that the basis functions can be written as a product of univariate functions, φ_{l,i} = ∏_{j=1}^d φ_{l_j,i_j}. We can therefore use the product approximation of Proposition 3.2 to approximate the basis functions. Specifically, we will use one layer to approximate the terms log φ_{l_j,i_j} and a second layer to approximate the exponential. We now present in detail the construction of the first layer. First, recall that φ_{l_j,i_j} is a piece-wise affine function with subdivision 0 ≤ (i_j − 1)/2^{l_j} ≤ i_j/2^{l_j} ≤ (i_j + 1)/2^{l_j} ≤ 1. Define the error term ε̃ := ε/(2|f|_{2,∞}). We consider a symmetric subdivision of the interval [(i_j − 1 + ε̃)/2^{l_j}, (i_j + 1 − ε̃)/2^{l_j}]. We define it as follows: x₀ = (i_j − 1 + ε̃)/2^{l_j} ≤ x₁ ≤ ··· ≤ x_{m+1} = i_j/2^{l_j} ≤ x_{m+2} ≤ ··· ≤ x_{2m+2} = (i_j + 1 − ε̃)/2^{l_j}, where m = ⌊(1/ε₀) log(1/ε̃)⌋ and ε₀ := log(1 + √(2ε̃/(3d))), such that

x_k = (i_j − 1 + e^{log ε̃ + kε₀})/2^{l_j} for 0 ≤ k ≤ m,
x_k = (i_j + 1 − e^{log ε̃ + (2m+2−k)ε₀})/2^{l_j} for m + 2 ≤ k ≤ 2m + 2.

Note that with this definition, the values log φ_{l_j,i_j}(x_k) form a regular sequence with step ε₀. We now construct the piece-wise affine function ĝ_{l_j,i_j} on the subdivision x₀ ≤ ··· ≤ x_{2m+2} which coincides with log φ_{l_j,i_j} on x₀, ··· , x_{2m+2} and is constant on [0, x₀] and [x_{2m+2}, 1]. By Lemma B.1, this function can be represented by a 1-layer neural network with as many neurons as the number of pieces of ĝ, i.e. at most 2√(3d/ε̃) log(1/ε̃) neurons for ε sufficiently small. A proof similar to that of Corollary B.4 shows that ĝ approximates max(log φ_{l_j,i_j}, log(ε̃/3)) within ε̃/(3d) for the infinity norm. We use this construction to compute in parallel ε̃/(3d)-approximations of max(log φ_{l_j,i_j}(x_j), log(ε̃/3)) for all 1 ≤ j ≤ d and 1 ≤ l_j ≤ n, 1 ≤ i_j ≤ 2^{l_j} with i_j odd. These are exactly the one-dimensional functions that we need in order to compute the d-dimensional function basis of the approximation space V_n^{(1)}. There are d(2^n − 1) such univariate functions; therefore our first layer contains at most 2^{n+1} d √(3d/ε̃) log(1/ε̃) neurons. We now turn to the second layer. The result of the first two layers will be ε̃/3-approximations of φ_{l,i} for all (l, i) ∈ U_n^{(1)}. Recall that U_n^{(1)} contains the indices of the functions forming a basis of the approximation space V_n^{(1)}. To do so, for each index (l, i) ∈ U_n^{(1)} we construct a 1-layer neural network approximating the function exp, which will compute an approximation of exp(ĝ_{l₁,i₁} + ··· + ĝ_{l_d,i_d}).
The approximation of exp is constructed in the same way as in Lemma B.3. Consider a regular subdivision of the interval [log(ε̃/3), 0] with step √(2ε̃/3), i.e. x₀ := log(ε̃/3) ≤ x₁ ≤ ··· ≤ x_m ≤ x_{m+1} = 0, where m = ⌊√(3/(2ε̃)) log(3/ε̃)⌋, such that x_k = log(ε̃/3) + k√(2ε̃/3) for 0 ≤ k ≤ m. Construct the piece-wise affine function ĥ on the subdivision x₀ ≤ ··· ≤ x_{m+1} which coincides with exp on x₀, ··· , x_{m+1} and is constant on (−∞, x₀]. Lemma B.3 shows that ĥ approximates exp on ℝ₋ within ε̃/3 for the infinity norm. Again, Lemma B.1 gives a representation of ĥ as a 1-layer neural network with as many neurons as pieces of ĥ, i.e. 1 + ⌈√(3/(2ε̃)) log(3/ε̃)⌉. The second layer is the union of these 1-layer neural networks approximating exp within ε̃/3, one for each index (l, i) ∈ U_n^{(1)}. Therefore, the second layer contains |U_n^{(1)}| · (1 + ⌈√(3/(2ε̃)) log(3/ε̃)⌉) neurons. As shown in Bungartz & Griebel (2004),

|U_n^{(1)}| = Σ_{i=0}^{n−1} 2^i · C(d−1+i, d−1) = (−1)^d + 2^n Σ_{i=0}^{d−1} C(n+d−1, i)(−2)^{d−1−i} = 2^n · (n^{d−1}/(d−1)! + O(n^{d−2})).

Therefore, the second layer has O(2^n · (n^{d−1}/(d−1)!) · ε̃^{−1/2} log(1/ε̃)) neurons. Finally, the output layer computes the weighted sum of the basis functions to approximate f_n^{(1)}. Denote by f̂_n^{(1)} the function computed by the constructed neural network (see Figure 5), i.e.

f̂_n^{(1)} = Σ_{(l,i)∈U_n^{(1)}} v_{l,i} · ĥ(Σ_{j=1}^d ĝ(x_j)).

Let us analyze the approximation error of our neural network. The proof of Proposition 3.2 shows that the output of the two first layers, ĥ(Σ_{j=1}^d ĝ(·_j)), approximates φ_{l,i} within ε̃. Therefore, we obtain ‖f_n^{(1)} − f̂_n^{(1)}‖_∞ ≤ ε̃ Σ_{(l,i)∈U_n^{(1)}} |v_{l,i}|. We now use the approximation bounds from Theorem 2.2 on f_n^{(1)}:

‖f − f̂_n^{(1)}‖_∞ ≤ ‖f − f_n^{(1)}‖_∞ + ‖f_n^{(1)} − f̂_n^{(1)}‖_∞ ≤ (2|f|_{2,∞}/8^d) · 2^{−2n} · A(d, n) + (ε/(2|f|_{2,∞})) Σ_{(l,i)∈U_n^{(1)}} |v_{l,i}|,

where Σ_{(l,i)∈U_n^{(1)}} |v_{l,i}| ≤ |f|_{2,∞} 2^{−d} Σ_{i≥0} 2^{−i} · C(d−1+i, d−1) ≤ |f|_{2,∞}. Let us now take n = min{n : (2|f|_{2,∞}/8^d) 2^{−2n} A(d, n) ≤ ε/2}. Then, using the above inequality shows that the neural network f̂_n^{(1)} approximates f within ε for the infinity norm. We now estimate the number of neurons in each layer of this network. Note that n ∼ (1/(2 log 2)) log(1/ε) and

2^n ≤ (4 · 8^{d/2} / ((2 log 2)^{(d−1)/2} ((d−1)!)^{1/2})) · √(|f|_{2,∞}/ε) · (log(1/ε))^{(d−1)/2} · (1 + o(1)).   (3)

We can use the above estimates to show that the constructed neural network has at most N₁ (resp. N₂) neurons on the first (resp. second) layer, where

N₁ ∼_{ε→0} (8√6 d² 8^{d/2} / ((2 log 2)^{(d−1)/2} (d!)^{1/2})) · (|f|_{2,∞}/ε) · (log(1/ε))^{(d+1)/2},
N₂ ∼_{ε→0} (4√3 d^{3/2} 8^{d/2} / ((2 log 2)^{3(d−1)/2} (d!)^{3/2})) · (|f|_{2,∞}/ε) · (log(1/ε))^{3(d−1)/2 + 1}.

This proves the bound on the number of neurons. Finally, to prove the bound on the number of training parameters of the network, notice that the only parameters of the network that depend on the function f are those corresponding to the weights v_{l,i} of the sparse grid decomposition. Their number is |U_n^{(1)}| = O(2^n n^{d−1}) = O(ε^{−1/2} (log(1/ε))^{3(d−1)/2}).

B.4 PROOF OF THEOREM 3.4: GENERALIZATION TO GENERAL ACTIVATION FUNCTIONS

We start by formalizing the intuition that a sigmoid-like (resp. ReLU-like) function is a function that resembles the Heaviside (resp. ReLU) function when zooming out along the x (resp. x and y) axis.

Lemma B.6. Let σ be a sigmoid-like activation with limit a (resp. b) in −∞ (resp. +∞). For any δ > 0 and error tolerance ε > 0, there exists a scaling M > 0 such that x ↦ (σ(Mx) − a)/(b − a) approximates the Heaviside function within ε outside of (−δ, δ) for the infinity norm. Furthermore, this function has values in [0, 1]. Let σ be a ReLU-like activation with asymptote b·x + c in +∞.
For any δ > 0 and error tolerance ε > 0, there exists a scaling M > 0 such that x ↦ σ(Mx)/(Mb) approximates the ReLU function within ε for the infinity norm.

Proof. Let δ, ε > 0 and σ a sigmoid-like activation with limit a (resp. b) in −∞ (resp. +∞). There exists x₀ > 0 sufficiently large such that |σ(x) − a| ≤ ε(b − a) for x ≤ −x₀ and |σ(x) − b| ≤ ε(b − a) for x ≥ x₀. It now suffices to take M := x₀/δ to obtain the desired result. Now let σ be a ReLU-like activation with oblique asymptote bx in +∞, where b > 0. Let M be such that |σ(x)| ≤ Mbε for x ≤ 0 and |σ(x) − bx| ≤ Mbε for x ≥ 0. One can check that |σ(Mx)/(Mb)| ≤ ε for x ≤ 0, and |σ(Mx)/(Mb) − x| ≤ ε for x ≥ 0.

Using this approximation, we reduce the analysis of sigmoid-like (resp. ReLU-like) activations to the case of a Heaviside (resp. ReLU) activation to prove the desired theorem.

Proof of Theorem 3.4. We start with the class of ReLU-like activations. Let σ be a ReLU-like activation function. Lemma B.6 shows that one can approximate the ReLU activation arbitrarily well with a rescaling of σ. Take the neural network approximator f̂ of a target function f given by Theorem 3.1. At each node, we can apply the rescaling corresponding to x ↦ σ(Mx)/(Mb) with no additional neurons or parameters. Because the approximation is continuous, we can take M > 0 arbitrarily large in order to approximate f̂ with arbitrary precision on the compact [0, 1]^d. The same argument holds for sigmoid-like activation functions in order to reduce the problem to Heaviside activation functions. Although quadratic approximations of univariate functions similar to Lemma B.3 are not valid for general sigmoid-like activations—in particular the Heaviside—we can obtain an analogue of Lemma B.2, given as Lemma B.7 in Appendix B.4.1. This results in an increased number of neurons. In order to approximate a target function f ∈ X^{2,∞}(Ω), we use the same structure as the neural network constructed for ReLU activations, and we use the same notations as in the proof of Theorem 3.1. The first difference lies in the approximation of log φ_{l_j,i_j} in the first layer. Instead of using Corollary B.4, we use Lemma B.7. Therefore, (12d/ε̃) log(3/ε̃) neurons are needed to compute an ε̃/(3d)-approximation of max(log φ_{l_j,i_j}, log(ε̃/3)). The second difference is in the approximation of the exponential in the second layer. Again, we use Lemma B.7 to construct an ε̃/3-approximation of the exponential on ℝ₋ with 6/ε̃ neurons for the second layer. As a result, the first layer contains at most 2^{n+2} · (3d²/ε̃) log(1/ε̃) neurons for ε sufficiently small, and the second layer contains |U_n^{(1)}| · (6/ε̃) neurons. Using the same estimates as in the proof of Theorem 3.1 shows that the constructed neural network has at most N₁ (resp. N₂) neurons on the first (resp. second) layer, where

N₁ ∼_{ε→0} (3·2⁵·d^{5/2} 8^{d/2} / ((2 log 2)^{(d−1)/2} (d!)^{1/2})) · |f|_{2,∞}^{3/2} ε^{−3/2} (log(1/ε))^{(d+1)/2},
N₂ ∼_{ε→0} (24·d^{3/2} 8^{d/2} / ((2 log 2)^{3(d−1)/2} (d!)^{3/2})) · |f|_{2,∞}^{3/2} ε^{−3/2} (log(1/ε))^{3(d−1)/2}.

This ends the proof.

B.4.1 PROOF OF LEMMA B.7

Lemma B.7. Let σ be a sigmoid-like activation. Let f : I −→ [c, d] be a right-continuous increasing function where I is an interval, and let ε > 0. There exists a shallow neural network with activation σ, with at most 2(d − c)/ε neurons on a single layer, that approximates f within ε for the infinity norm.

Proof. The proof is analogous to that of Lemma B.2. Let m = ⌊(d − c)/ε⌋. We define a regular subdivision of the image interval, c ≤ y₁ ≤ … ≤ y_m ≤ d, where y_k = c + kε for k = 1, …, m; then, using the monotonicity of f, we can define a subdivision of I, x₁ ≤ …
≤ x_m, such that x_k := sup{x ∈ I : f(x) ≤ y_k}. Let us first construct an approximating neural network f̂ with the Heaviside activation. Consider

f̂(x) := y₁ + ε Σ_{i=1}^{m−1} 1(x − (x_i + x_{i+1})/2 ≥ 0).

Let x ∈ I and k be such that x ∈ [x_k, x_{k+1}]. We have by monotonicity y_k ≤ f(x) ≤ y_{k+1} and y_k = y₁ + (k − 1)ε ≤ f̂(x) ≤ y₁ + kε = y_{k+1}. Hence, f̂ approximates f within ε in infinity norm. Let δ < min_{i=1,…,m−1}(x_{i+1} − x_i)/4 and σ a general sigmoid-like activation with limits a in −∞ and b in +∞. Take M given by Lemma B.6 such that (σ(Mx) − a)/(b − a) approximates the Heaviside function within 1/m outside of (−δ, δ) and has values in [0, 1]. Using the same arguments as above, the function

f̂(x) := y₁ + ε Σ_{i=1}^{m−1} (σ(M(x − (x_i + x_{i+1})/2)) − a)/(b − a)

approximates f within 2ε for the infinity norm. The proof follows.

C PROOFS OF SECTION 4

C.1 PROOF OF THEOREM 4.1: APPROXIMATING KOROBOV FUNCTIONS WITH DEEP NEURAL NETWORKS

Let ε > 0. We construct a structure similar to the network defined in Theorem 3.1 by using the sparse grid approximation of Subsection 2.2. For a given n, let f_n^{(1)} be the projection of f on the approximation space V_n^{(1)} (defined in Subsection 2.2) and U_n^{(1)} (defined in Equation (2)) the set of indices (l, i) of basis functions present in V_n^{(1)}. Recall that f_n^{(1)} can be uniquely decomposed as f_n^{(1)}(x) = Σ_{(l,i)∈U_n^{(1)}} v_{l,i} φ_{l,i}(x), where φ_{l,i} = ∏_{j=1}^d φ_{l_j,i_j} are the basis functions defined in Subsection 2.2. In the first layer, we compute exactly the piece-wise linear hat functions φ_{l_j,i_j}; then, in the next set of layers, we use the product-approximating neural network given by Proposition 4.2 to compute the basis functions φ_{l,i} = ∏_{j=1}^d φ_{l_j,i_j} (see Figure 3). The output layer computes the weighted sum Σ_{(l,i)∈U_n^{(1)}} v_{l,i} φ_{l,i}(x) and outputs f_n^{(1)}. Because the approximation has arbitrary precision, we can choose the network of Proposition 4.2 such that the resulting network f̂ verifies ‖f̂ − f_n^{(1)}‖_∞ ≤ ε/2. More precisely, as φ_{l_j,i_j} is piece-wise linear with four pieces, we can compute it exactly with four neurons with ReLU activation on a single layer (Lemma B.1). Our first layer is composed of the union of all these ReLU neurons, for the d(2^n − 1) indices l_j, i_j such that 1 ≤ j ≤ d, 1 ≤ l_j ≤ n, 1 ≤ i_j ≤ 2^{l_j} and i_j odd. Therefore, it contains at most d·2^{n+2} neurons with ReLU activation. The second set of layers is composed of the union of the product-approximating neural networks computing φ_{l,i} for all (l, i) ∈ U_n^{(1)}. This set of layers contains ⌈log₂ d⌉ layers with activation σ and at most |U_n^{(1)}| · 8d neurons. The output of these two sets of layers is an approximation of the basis functions φ_{l,i} with arbitrary precision. Consequently, the final output of the complete neural network is an approximation of f_n^{(1)} with arbitrary precision. Similarly to the proof of Theorem 3.1, we can choose the smallest n such that ‖f − f_n^{(1)}‖_∞ ≤ ε/2 (see Equation (3) for details). Finally, the network has depth at most ⌈log₂ d⌉ + 2 and N neurons, where

N = 8d|U_n^{(1)}| ∼_{ε→0} (2⁵·d^{5/2} 8^{d/2} / ((2 log 2)^{3(d−1)/2} (d!)^{3/2})) · √(|f|_{2,∞}/ε) · (log(1/ε))^{3(d−1)/2}.

The parameters of the network that depend on the function are exactly the coefficients v_{l,i} of the sparse grid approximation. Hence, the network has O(ε^{−1/2} (log(1/ε))^{3(d−1)/2}) training parameters.
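The first layer of this deep construction is easy to make concrete: each hat basis function φ_{l_j,i_j}(x) = max(0, 1 − |2^{l_j}x − i_j|) is itself a small ReLU network. A sketch follows (three ReLU neurons suffice for an interior hat; the proof's budget of four, one per affine piece, is an upper bound):

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def hat(x, l, i):
    """Exact ReLU form of the hierarchical hat basis function
    phi_{l,i}(x) = max(0, 1 - |2^l x - i|), used in the first layer of the
    deep construction of Theorem 4.1."""
    s = 2.0 ** l
    return relu(s * x - (i - 1)) - 2 * relu(s * x - i) + relu(s * x - (i + 1))

xs = np.linspace(0, 1, 101)
exact = np.maximum(0.0, 1.0 - np.abs(4 * xs - 3))   # phi_{2,3}, peak at x = 3/4
print(np.allclose(hat(xs, l=2, i=3), exact))        # True
```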
D PROOFS OF SECTION 5

D.1 PROOF OF THEOREM 5.2: NEAR-OPTIMALITY OF NEURAL NETWORKS FOR KOROBOV FUNCTIONS

Our goal is to define an appropriate subspace X_{N+1} in order to get a good lower bound on the Bernstein width b_N(K)_X, defined in Equation (5), which in turn provides a lower bound on the approximation error (Theorem 5.1). To do so, we introduce the Deslaurier-Dubuc interpolet of degree 2, φ^{(2)}, which we use as mother function for the hierarchical basis (see the proof sketch in Section 5).
1. What is the focus of the paper regarding neural network approximation capabilities? 2. What are the strengths and weaknesses of the proposed approach, particularly in representing Korobov functions? 3. Do you have any concerns regarding the presentation and motivation of the paper? 4. How does the paper contribute to understanding the practical success of neural networks theoretically? 5. What is the significance of studying Korobov functions in the context of machine learning?
Summary Of The Paper Review
Summary Of The Paper This paper studies approximation capabilities of neural networks for the purpose of approximating Korobov functions, which are multivariate functions with bounded second mixed derivatives. The paper presents a complete study of approximating such functions with NNs: they study shallow nets and show that 2 layers with ReLUs and a total #neurons of O(1/eps log^{1.5d}(1/eps)), where d is the dimension, can \eps-approximate Korobov functions. Moreover, by allowing larger depths, close to log d, they can get a better dependence on \eps. Finally, they prove that any continuous function approximator requires a #params close to their upper bound in order to approximate Korobov functions. This gives a complete picture of how Korobov functions behave wrt function approximation with shallow or deep nets. Review Strengths: -in-depth study of Korobov function approximations -several new ideas/gadgets on how to represent such functions with NNs Weaknesses: -unclear presentation in several places: how do the bounds break the curse of dimensionality as stated in bullet point 1? -lack of motivation: Korobov functions seem like an interesting object of study; however, this falls outside the community's "standard" knowledge, so the reviewer would expect a much better explanation of why one should care about these functions in the context of Machine Learning. For example, the Conclusion states: "This work therefore contributes to understanding the practical success of neural networks theoretically" --> How is this the conclusion? Are Korobov functions prevalent in practice? The bounds in Th. 4.1 also seem to suggest that increasing depth from 2 to log d only shaves a sqrt{\eps} factor, which does not seem so significant, even though the depth increased substantially. Q: Why do the authors claim that they break the curse of dimensionality (e.g., page 2 bullet point 1)? There is a strong dependence on d, right? What am I missing?
ICLR
Title Effects of Linguistic Labels on Learned Visual Representations in Convolutional Neural Networks: Labels matter!

Abstract We investigated how the visual representations learned by CNNs are affected by training with different linguistic labels (e.g., basic-level labels only, superordinate-level labels only, or both at the same time), and how these differently-trained models compare in their ability to predict the behavior of humans tasked with selecting the object that is most different from two others in a triplet. The CNNs used identical architectures and inputs, differing only with respect to the labels used to supervise the training. In the absence of labels, we found that models learned very little categorical structure, suggesting that this structure cannot be extracted purely from the visual input. Surprisingly, models trained with superordinate labels (vehicle, tool, etc.) were most predictive of the behavioral similarity judgments. We conclude that the representations used in an odd-one-out task are highly modulated by semantic information, especially at the superordinate level.

1 INTRODUCTION

A critical distinction between human category learning and machine category learning is that only humans have a language. A language means that human learning is not limited to a one-to-one correspondence between a visual input and a category label. Indeed, the users of a language are known to actively seek out categorical relationships between objects and to use these relationships in making perceptual similarity judgments and in controlling behavior (Hays, 2000; Lupyan & Lewis, 2017). A premise of our work is that a language provides a semantic structure to labels, and that this structure contributes to the superior efficiency and flexibility of human vision compared to any artificial system (Pinto et al., 2010). Of course, the computer vision literature on zero-shot and few-shot learning has also made good progress in leveraging semantic information (e.g., image captions, attribute labels, relational information) to increase the generalizability of a model's performance (Lampert et al., 2013; Sung et al., 2018; Lei Ba et al., 2015). Still, this performance pales in comparison to the human ability for classification, where zero-shot and few-shot learning is the norm, and efficiently acquired category knowledge is easily generalized to new exemplars (Ashby & Maddox, 2005; Ashby & Ell, 2001). One reason why machine learning lags behind human performance may be a failure to fully consider the semantic structure of the ground-truth labels used for training, which can be heavily biased toward basic- or subordinate-level categories. This might result in models learning visual feature representations that may not be best for generalization to new, higher-level categories. For example, ImageNet (Deng et al., 2009) contains 120 different dog categories, making the models that are trained using these labels dog experts—an interesting but highly atypical semantic structure. Here we study how the linguistic structure of labels influences what is learned by models trained on the same visual inputs. Specifically, we manipulated the labels used to supervise the training of CNN models, each having the same architecture and given identical visual inputs. For example, some of these models were trained with basic-level labels only, some with only superordinate-level labels, and some with both.
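As a concrete picture of this labeling manipulation, here is a minimal sketch of the three target encodings; the category names shown are an excerpt, the encoding conventions (e.g., concatenation order) are our assumption, and the class counts follow Section 3:

```python
import numpy as np

# Excerpt of the basic -> superordinate map (30 -> 10 categories in the paper).
basic_to_super = {"lion": "mammal", "lemon": "fruit", "hammer": "tool"}
basic_classes = sorted(basic_to_super)                 # 30 in the real setup
super_classes = sorted(set(basic_to_super.values()))   # 10 in the real setup

def encode(label, scheme):
    """Training target for one image: one-hot (dim 30 or 10) for the
    single-level models, two-hot (dim 40) for Basic + Superordinate."""
    b = np.eye(len(basic_classes))[basic_classes.index(label)]
    s = np.eye(len(super_classes))[super_classes.index(basic_to_super[label])]
    if scheme == "basic":
        return b
    if scheme == "superordinate":
        return s
    if scheme == "basic+superordinate":
        return np.concatenate([s, b])   # two-hot target (ordering is assumed)
    raise ValueError(scheme)

print(encode("lion", "basic+superordinate"))  # one 1 per label level
```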
We then compare the visual representations learned by these models and use them to predict human similarity judgments that we collected using an odd-one-out task, in which people had to select which of three object images was the most different. With this dataset, and using categorical representations extracted from our trained models, we could predict human similarity decisions with up to 74% accuracy, which gives us some understanding of the labels needed to produce human-like representations. Our study also broadly benefits both computer vision and behavioral science (e.g., psychology, neuroscience) by suggesting that the semantic structure of labels and datasets should be carefully constructed if the goal is to build vision models that learn visual feature representations having the potential for human-like generalization. For behavioral science, this research provides a useful computational framework for understanding the effect of training labels on the human learning of category relationships in the context of thousands of naturalistic images of objects.

2 RELATED WORK

2.1 SEMANTIC LABEL EMBEDDING

Although many computer vision models perform well in image classification, generalization tasks such as zero-shot and few-shot learning remain challenging. Several studies have attempted to address this problem by embedding semantic information into a model's representations using text descriptions (Lei Ba et al., 2015), attribute properties (Lampert et al., 2013; Akata et al., 2015; Chen et al., 2018), and relationships between objects (Sung et al., 2018; Annadani & Biswas, 2018). More related to our work, some studies have directly leveraged the linguistic structure of labels. For example, Lei et al. (2017) and Wang & Cottrell (2015) found that training CNNs with coarse-grained labels (e.g., basic-level categories) improves classification accuracy for finer-grained labels (e.g., subordinate-level labels). Also, Frome et al. (2013) re-trained a CNN to predict the word vectors learned by a word embedding model, instead of using one-hot labels, and found improved zero-shot predictions; the model was able to predict thousands of novel categories never seen during training with 18% accuracy. These results suggest that different semantic structures of labels, such as word hierarchy, an order of learning, or semantic similarity between words, affect the visual representations learned by CNNs to differing degrees. The current study provides a more systematic investigation of this question.

2.2 UNDERSTANDING HUMAN VISUAL REPRESENTATION

The human visual system is unparalleled in its ability to learn feature representations for objects that are robust to large changes in appearance. This tolerance to variability not only enables accurate object recognition, but also facilitates generalization to new exemplars and categories (DiCarlo et al., 2012). Understanding how humans learn these visual representations is, therefore, an enormously important question, but one that is difficult to study because human learning in the real world is affected and confounded by many factors that are difficult to control experimentally. Recent work has addressed this issue by computationally modeling and simulating human representations. For example, Hebart et al.
(2019) studied human visual representations by fitting probabilistic models to human similarity judgments, and found that human visual representations are composed of semantically interpretable units, each conveying categorical membership, functionality, or perceptual attributes. Peterson et al. (2018), the study most similar to ours, trained CNNs with labels that differed in hierarchy (e.g., subordinate-level vs. basic-level). They found that training on coarser-grained labels (either alone or after finer-grained labels) induces a more semantically structured representation and produces more human-like generalization performance. The current study builds on this earlier work by 1) including CNNs trained with no labels (autoencoder) or very fine-grained labels (word vectors), 2) testing on a large-scale dataset of human similarity judgments, and 3) comparing superordinate vs. basic levels.

3 MODEL TRAINING

Our goal is to study how linguistic labels change the visual representations learned by CNNs. To do this, we trained identically designed CNNs for classification, each with different linguistic labels as ground truth. In addition, we trained a convolutional autoencoder, which encodes the images using the same convolutional structure as the other models but, instead of being supervised to predict the class of the image, is trained to generate an output image that is the same as the input. This Conv. Autoencoder therefore represents a model that was not trained with any linguistic label, in contrast to the other models, which were each trained with some type of linguistic label. The description of each model and the labels used for training are provided below.
• Conv. Autoencoder: Autoencoder with convolutional encoder and decoder trained to output the same image as the input
• Basic labels: CNN model trained with one-hot encodings of basic-level categories, n=30
• Superordinate labels: CNN model trained with one-hot encodings of superordinate-level categories, n=10
• Basic + Superordinate: CNN model trained with two-hot encodings of both basic and superordinate-level categories, n=40 (10+30)
• Basic then Superordinate: CNN model trained with one-hot encodings of basic-level categories first (n=30), and then finetuned with one-hot encodings of superordinate categories (n=10)
• Superordinate then Basic: CNN model trained with one-hot encodings of superordinate-level categories first (n=10), and then finetuned with one-hot encodings of basic categories (n=30)
• Basic FastText vectors: CNN model trained with basic-level word vectors extracted from the FastText word embedding model (Bojanowski et al., 2017), dimension=300
• Superordinate FastText vectors: CNN model trained with superordinate-level word vectors extracted from the FastText word embedding model (Bojanowski et al., 2017), dimension=300
An identical CNN architecture was used for each model in our labeling manipulation, except for the output layer and its activation function. This general pipeline is described in Figure 1. Our CNN models consist of five blocks of two convolutional layers, with each block followed by max pooling and batch normalization layers. For all convolutional and max pooling operations, zero padding was used to produce output feature maps having the same size as the input. Rectified linear units (ReLU) were used as the activation function after each convolution.
The flattened output of the final convolutional layer—the “bottleneck” feature that we later extract and use as a model's visual representation (dim=1568)—was then fed into one fully connected dense layer. For the Conv. Autoencoder, the same convolutional architecture was used for encoding and decoding, with the hidden layer in the middle of the model (dim=1568) serving as the bottleneck feature for analysis. The final predicted output, the “label vector,” is either a one-hot encoding or a word embedding, according to the model's target labels. Output activation functions differed depending on which label vector was used: a sigmoid function for the Basic + Superordinate CNN, a linear function for the Conv. Autoencoder and the FastText vectors CNNs, and a softmax for the remaining CNNs. All models were trained and validated on images of 30 categories from the IMAGENET 2012 dataset (Deng et al., 2009), and tested on images of the same 30 categories from the THINGS dataset (Hebart et al., 2019). These 30 basic-level categories were grouped into 10 higher-level, superordinate categories: ‘mammal’, ‘bird’, ‘insect’, ‘fruit’, ‘vegetable’, ‘vehicle’, ‘container’, ‘kitchen appliance’, ‘musical instrument’, and ‘tool’. A list of all 30 categories, with their superordinates, is provided in Supplementary 7.1. All input images were converted from RGB to BGR, and each channel was zero-centered with respect to the ImageNet images. Different loss functions were used for training the different models: Binary Crossentropy for the Basic + Superordinate CNN, Mean Squared Error for the Conv. Autoencoder and the FastText vectors CNNs, and Categorical Crossentropy for the remaining CNNs. All models were trained using Adam optimization (Kingma & Ba, 2014) with a mini-batch size of 64. During training, early stopping was implemented, and the model with the lowest validation loss was used for the following analyses.

4 BEHAVIORAL DATA

To compare the visual representations learned by our trained models with those of humans, we collected human similarity judgments in an odd-one-out task, as in Zheng et al. (2019). Participants were shown three object images per trial (a triplet) and were asked to choose which object was most different from the other two. Each triplet consisted of three exemplar objects from the 30 categories used for our model training. All exemplar objects came from Zheng et al. (2019), except for ‘crate’, ‘hammer’, ‘harmonica’, and ‘screwdriver’, which were replaced with new exemplars to increase image quality and category representativeness. There are 4060 possible triplets that can be generated from the 30 categories, but we collected behavioral data on only a subset of these to reduce the time and cost of data collection. This subset includes 1) the ten triplets in which all three objects come from the same superordinate category, e.g., (‘orangutan’, ‘lion’, ‘gazelle’), 2) 435 triplets where two objects come from the same superordinate category, e.g., (‘orangutan’, ‘lion’, ‘minivan’), and 3) 1375 triplets where all objects come from different superordinate categories, e.g., (‘orangutan’, ‘minivan’, ‘lemon’), yielding 1820 unique triplets in total. 51 Amazon Mechanical Turk (AMT) workers participated in this task, each making responses on ∼200 triplets. After removing responses with reaction times below 500 ms, we retained 9697 similarity judgments, with each triplet viewed by 5.6 workers on average (min=4, max=51).
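To make the training setup of Section 3 concrete, below is a minimal Keras sketch of the shared backbone and one supervision scheme. The filter counts and input resolution are not specified in the paper and are assumptions here, chosen so that the flattened bottleneck has the stated 1568 dimensions (7×7×32 after five poolings of a 224×224 input):

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(num_outputs=30, out_activation="softmax",
                input_shape=(224, 224, 3), filters=(32, 32, 32, 32, 32)):
    """Five blocks of two 3x3 convolutions, each block followed by max pooling
    and batch normalization; the flattened output is the bottleneck feature.
    Filter counts and input size are assumed (32 filters in the last block and
    224x224 inputs give a 7 * 7 * 32 = 1568-d bottleneck)."""
    x = inp = layers.Input(input_shape)
    for n_filters in filters:
        for _ in range(2):
            x = layers.Conv2D(n_filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2, padding="same")(x)
        x = layers.BatchNormalization()(x)
    bottleneck = layers.Flatten(name="bottleneck")(x)
    out = layers.Dense(num_outputs, activation=out_activation)(bottleneck)
    return tf.keras.Model(inp, out)

# e.g. the Superordinate-labels model: 10 classes, softmax, cross-entropy, Adam
model = build_model(num_outputs=10, out_activation="softmax")
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

The other supervision schemes follow by swapping the output dimension, activation, and loss (e.g., 40 sigmoid outputs with binary cross-entropy for Basic + Superordinate, or 300 linear outputs with mean squared error for the FastText-vector targets).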
5 EXPERIMENTS

5.1 EVALUATING MODEL PERFORMANCE

Although our goal was not to compete with state-of-the-art vision models in classification, we evaluated classification accuracy to see the effects of the different labels on learning, thereby confirming that the visual features learned by our models represented category knowledge. To evaluate classification accuracy, we report top@k, the percentage of accurately classified test images for which the true class was among the model's top K predictions, in Table 1. Average precision and average recall over all categories are also reported in Supplementary 7.3. All metrics were computed on the THINGS test dataset (Hebart et al., 2019). Because the FastText vectors CNNs predict a word vector, not a class, we approximated their classification performance by calculating the cosine similarity between the predicted and true word vectors and choosing the corresponding class from the top@k similarities. Classification results cannot be generated for the Conv. Autoencoder, but we include examples of images generated by this model in Supplementary 7.2 to show that the model worked. As can be seen in Table 1, the top@5 classification accuracy for all trained models was good (all >.82), although there is room for improved classification for the FastText vectors CNNs.

5.2 EXPLORING VISUAL REPRESENTATIONS

To explore how the different linguistic labeling schemes affected the learned visual representations, we extracted and analyzed the bottleneck features from each model (i.e., the 1568-dimensional output of the last convolutional layer; see Figure 1). We first measured the representational similarity of all objects in the training dataset (IMAGENET 2012; Deng et al., 2009), both between and within each category. These representational distributions were visualized using t-SNE (Maaten & Hinton, 2008) and are included in Supplementary 7.5. We also analyzed the similarity between categorical representations by plotting a similarity matrix in Figure 2. To create categorical representations, we simply averaged the obtained bottleneck features over all training images per category, creating, in a sense, a “prototypical” representation for each class. Clustering Quality. To investigate how dense and well separated each model's category representations are, we computed the ratio of between-category dispersion to within-category dispersion using cosine distance (1 − the cosine similarity of two feature vectors). Between-category dispersion is the average cosine distance between the centers (means) of different categories. Within-category dispersion is the average cosine distance between every exemplar and the center of its category. Comparing the models in Table 2 revealed that using distributed word vectors as targets, especially Superordinate FastText vectors, produced the highest between-to-within ratio, suggesting the most tightly clustered representations. Interestingly, the Basic + Superordinate CNN model, which was trained with both basic and superordinate labels at the same time, learned more scattered and less distinguishable categorical representations compared to the other label-trained models. Lastly, the Conv. Autoencoder produced the lowest between-to-within ratio, suggesting that even if a model learns visual features that are good enough to generate input-like images, these visual representations may still be poorly discriminable, not only at the basic level but also at the superordinate level. The widely distributed features of the Conv.
Autoencoder in the t-SNE plots in Supplementary 7.5 further support the conclusion that the visual input alone is not sufficient to produce any clusterable structure or category representations. A similar trend was observed in the other clustering quality measures, as reported in Supplementary 7.4. Visualization of Categorical Representations. Figure 2 visualizes cosine similarity matrices for the category representations learned by the models, to explore whether the hierarchical semantic structure of the 30 categories is captured (e.g., every basic-level category belongs to one of ten superordinate categories). For a complete comparison, we also analyzed categorical representations extracted from SPoSE (Zheng et al., 2019), FastText (Bojanowski et al., 2017), and a VGG16 early layer (i.e., the output from the first max-pooling layer; Simonyan & Zisserman, 2014). The SPoSE model's category representations were trained on human similarity judgments; these serve as an approximation of human perceived similarity, which can be a combination of semantic and visual similarities. While FastText similarity represents the semantic similarity between categories in basic-level terms, VGG16 early-layer similarity represents lower-level visual similarity. Whereas little effect of category hierarchy can be seen in the VGG16 early-layer or Conv. Autoencoder features, various semantic structure can be observed in the other models (e.g., the emergent bright yellow squares in the figure). Upon closer analysis, these categorical divisions seemed to occur for 1) nature vs. non-nature, 2) edible vs. non-edible, and 3) the superordinate categories. Surprisingly, basic-level structure is still observed in Figure 2f (e.g., the fine-grained lines on the diagonal), where the model is trained only on the superordinate-level labels. This suggests that guidance from superordinate labels was often as good as or better than guidance from much finer-grained basic-level labels, which is consistent with the previous finding that training with coarser labels induces more hierarchical structure in visual representations (Peterson et al., 2018).

5.3 PREDICTING HUMAN VISUAL BEHAVIOR

Finally, we evaluated how well the visual representations learned by the models could predict human similarity judgments in the odd-one-out task (see Section 4). For each triplet, responses were generated from the models by comparing the cosine similarities between the three visual object representations and selecting the one most dissimilar from the other two. Three kinds of visual representations were computed and compared: 1) IMAGENET categorical representations, where features were averaged over ∼1000 images per category from the IMAGENET training dataset (Deng et al., 2009), 2) THINGS categorical representations, where features were averaged over ∼10 images per category from the THINGS dataset (Hebart et al., 2019), and 3) Single Exemplar representations, where only one feature vector per category was generated, from the 30 exemplar images used in the behavioral data collection. Together with accuracies from SPoSE (Zheng et al., 2019), FastText (Bojanowski et al., 2017), and the VGG16 early layer (Simonyan & Zisserman, 2014), three baseline accuracies are reported below, which constitute upper and lower bounds.
• Null Acc: Accuracy achieved by predicting that every sample is the most frequent class in the dataset (lower bound, 36%).
• Bayes Acc: Accuracy achieved by predicting that every sample is the most frequent class in each unique triplet set (upper bound, 84%).
As shown in Figure 3, triplet prediction accuracy was highest when models used IMAGENET category representations and lowest when single exemplar representations were used, even when the exemplar image was the one that the participant actually saw during the experiment. This shows that when humans make visual similarity ratings, they not only evaluate visual inputs but also use rich and abstract semantic information learned from viewing myriad exemplars. Comparing individual model performance, the highest accuracy (74%) was obtained by the model trained with superordinate labels. This performance is particularly impressive, considering 1) how coarsely grained superordinate labels are (dim=10) compared to Basic labels (dim=30), Basic + Superordinate labels (dim=40), or FastText vectors (dim=300), and 2) that this model is not trained on the actual human triplet data, as was the case for the SPoSE model, whose performance was about 80%. These results suggest that the representations used by humans in an Odd-one-out task are highly semantic, reflecting category structure, especially at the superordinate level.

However, this may be only because the setting of the odd-one-out task has caused people to use superordinate-label information. For example, when participants are given a triplet like ('orangutan', 'lion', 'lemon'), they are prone to choose 'lemon' because it is the odd one out at the superordinate level. In fact, when the number of superordinate categories in a triplet is two, as in the example above, 90% of human responses can be predicted simply by identifying which item belongs to the odd superordinate category. To investigate how much this task setting would affect the results, we broke down the triplet data by the number of superordinate categories that a triplet spans and report prediction performance for each split, as shown in Figure 3. Interestingly, the model trained with superordinate labels alone still performed the best (63%) when superordinate-level information was not very helpful, i.e., where all three images in a triplet come from three different superordinate categories, e.g., ('mammal', 'fruit', 'vehicle'). Moreover, the superordinate labels CNN (59%) outperformed the basic labels CNN (56%) even when the images were to be compared at the basic level, i.e., where all three images in a triplet come from the same superordinate category, e.g., ('lemon', 'orange', 'banana'). This implies that humans leverage the guidance from coarser superordinate labels in shaping categorical visual representations at both the basic and superordinate levels.
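A minimal sketch of the triplet breakdown used for this analysis, assuming triplets are tuples of basic-level category names and `super_of` maps each basic-level category to its superordinate (names are ours, not the authors'):

```python
from collections import defaultdict

def split_by_superordinate_count(triplets, super_of):
    """Group triplets by how many distinct superordinate categories they span."""
    splits = defaultdict(list)
    for t in triplets:
        n_super = len({super_of[c] for c in t})  # 1, 2, or 3
        splits[n_super].append(t)
    return splits

# e.g., ('orangutan', 'lion', 'lemon') spans 2 superordinates (mammal, fruit):
# the split on which ~90% of human responses match the odd-superordinate rule.
```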
6 CONCLUSION

To be able to generalize to unseen exemplars, any vision system has to learn statistical regularities that make members of the same category more similar to one another than members of other categories. But where do these regularities come from? Are they present in the bottom-up (visual) input to the network? Or does learning the regularities require top-down guidance from category labels? If so, what kinds of labels? To investigate this problem, we manipulated the visual representations learned by CNNs by supervising them using different types of labels, and then evaluated these models in their ability to predict human similarity judgments.

We found that the type of label used during training profoundly affected the visual representations that were learned, suggesting that there is categorical structure that is not present in the visual input and instead requires top-down guidance in the form of category labels. We also found that guidance from superordinate labels was often as good as or better than guidance from much finer-grained basic-level labels. Models trained only on superordinate class labels such as "musical instrument" and "container" were not only more sensitive to these broader classes than models trained on just basic-level labels, but exposure to just superordinate labels allowed the models to learn within-class structure, distinguishing a harmonica from a flute, and a screwdriver from a hammer. This finding is consistent with previous work showing that training with coarser labels induces more semantically structured visual representations (Peterson et al., 2018). More surprisingly, models supervised using superordinate labels (vehicle, tool, etc.) were best at predicting human performance on a triplet odd-one-out task. CNNs trained with superordinate labels not only outperformed other models when the odd one out came from a different superordinate category (which is not surprising), but also when all three objects in a triplet came from different superordinate categories (e.g., when choosing between a banana, a bee, and a screwdriver). Our ongoing work into how different types of labels shape visual representations is exploring the effect of labels specific to different languages (e.g., English vs. Mandarin), and how these may translate to differential human and CNN classification performance.

ACKNOWLEDGMENTS

Details regarding research support will be added post-review.

7 SUPPLEMENTARY MATERIAL

7.1 LIST OF 30 CATEGORIES

| Superordinate-level Category | Basic-level Category | WordNet ID |
| --- | --- | --- |
| Mammal | Orangutan | n02480495 |
| Mammal | Gazelle | n02423022 |
| Mammal | Lion | n02129165 |
| Insect | Ant | n02219486 |
| Insect | Bee | n02206856 |
| Insect | Grasshopper | n02226429 |
| Bird | Hummingbird | n01833805 |
| Bird | Goose | n01855672 |
| Bird | Vulture | n01616318 |
| Vegetable | Artichoke | n07718747 |
| Vegetable | Cucumber | n07718472 |
| Vegetable | Zucchini | n07716358 |
| Fruit | Orange | n07747607 |
| Fruit | Lemon | n07749582 |
| Fruit | Banana | n07753592 |
| Tool | Hammer | n03481172 |
| Tool | Screwdriver | n04154565 |
| Tool | Shovel | n04208210 |
| Vehicle | Minivan | n03770679 |
| Vehicle | Trolley | n04335435 |
| Vehicle | Taxi | n02930766 |
| Musical Instrument | Drum | n03249569 |
| Musical Instrument | Flute | n03372029 |
| Musical Instrument | Harmonica | n03494278 |
| Kitchen Appliance | Refrigerator | n04070727 |
| Kitchen Appliance | Toaster | n04442312 |
| Kitchen Appliance | Coffee pot | n03063689 |
| Container | Bucket | n02909870 |
| Container | Mailbox | n03710193 |
| Container | Crate | n03127925 |

7.2 CONV. AUTOENCODER PREDICTIONS

7.3 AVERAGE PRECISION AND AVERAGE RECALL SCORES FOR THE TRAINED MODELS

The scores were sample-wise averaged (i.e., averaged over samples) for the Basic + Superordinate CNN, and macro-averaged (i.e., averaged over categories) for the other models.

| Model Name | Learning Scheme | # Classes | Output Dimension | Average Precision | Average Recall |
| --- | --- | --- | --- | --- | --- |
| Basic labels | One-step | 30 | 30 | 0.90 | 0.90 |
| Superordinate labels | One-step | 10 | 10 | 0.94 | 0.94 |
| Basic + Superordinate | One-step | 40 | 40 | 0.91 | 0.91 |
| Basic then Superordinate | Two-step | 10 | 10 | 0.95 | 0.95 |
| Superordinate then Basic | Two-step | 30 | 30 | 0.88 | 0.88 |
| Basic FastText vectors | One-step | 30 | 300 | 0.47 | 0.50 |
| Superordinate FastText vectors | One-step | 10 | 300 | 0.72 | 0.75 |
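A minimal sketch of the two averaging schemes named above, using scikit-learn and toy stand-in arrays (the data here are placeholders, not the paper's results):

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Toy stand-ins; in practice these would be THINGS test-set predictions.
y_true = np.array([0, 1, 2, 1])   # single-label ground truth
y_pred = np.array([0, 2, 2, 1])   # single-label predictions

# Macro-average (averaged over categories), as used for the single-label models.
p_macro = precision_score(y_true, y_pred, average='macro')
r_macro = recall_score(y_true, y_pred, average='macro')

# The two-hot Basic + Superordinate model is scored sample-wise on binary
# indicator matrices; in the paper these would have shape (n_images, 40).
Y_true = np.array([[1, 0, 1, 0], [0, 1, 0, 1]])
Y_pred = np.array([[1, 0, 0, 0], [0, 1, 0, 1]])
p_samples = precision_score(Y_true, Y_pred, average='samples')
r_samples = recall_score(Y_true, Y_pred, average='samples')
```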
7.4 OTHER CLUSTERING QUALITY MEASURES

SC: Silhouette Coefficient; CH: Calinski-Harabasz Index; DB: Davies-Bouldin Index; BW: between-to-within class dispersion in cosine distance. The arrows indicate the direction in which a metric value represents denser and better-separated clusterings.

| Model | SC↑ (super) | CH↑ (super) | DB↓ (super) | BW↑ (super) | SC↑ (basic) | CH↑ (basic) | DB↓ (basic) | BW↑ (basic) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Conv. Autoencoder | -0.06 | 166.08 | 12.24 | 0.11 | -0.09 | 70.19 | 15.19 | 0.15 |
| Basic labels | -0.01 | 427.43 | 6.45 | 0.64 | -0.02 | 200.45 | 7.35 | 0.84 |
| Superordinate labels | 0.00 | 628.95 | 5.25 | 0.71 | -0.02 | 226.09 | 11.04 | 0.80 |
| Basic + Superordinate | -0.01 | 534.81 | 5.79 | 0.61 | -0.02 | 231.97 | 7.62 | 0.78 |
| Basic then Superordinate | 0.00 | 580.74 | 5.61 | 0.76 | -0.02 | 233.15 | 8.62 | 0.90 |
| Superordinate then Basic | -0.01 | 525.59 | 5.53 | 0.75 | -0.01 | 227.35 | 7.47 | 0.93 |
| Basic FastText vectors | -0.01 | 1021.60 | 5.20 | 0.95 | -0.04 | 423.39 | 8.75 | 1.14 |
| Superordinate FastText vectors | -0.01 | 1324.88 | 5.24 | 1.11 | -0.05 | 445.75 | 14.02 | 1.18 |

7.5 T-SNE PLOTS FROM OUR TRAINED MODELS

[Figure: t-SNE plots of bottleneck features, one panel per model: Conv. Autoencoder; Basic labels; Superordinate labels; Basic + Superordinate; Basic then Superordinate; Superordinate then Basic; Basic FastText vectors; Superordinate FastText vectors.]
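For reference, the three standard measures in the Supplementary 7.4 table are all available in scikit-learn; a minimal sketch, with our assumption that the silhouette is computed in cosine space to match the dispersion analysis (the paper does not state the metric space):

```python
from sklearn.metrics import (silhouette_score,
                             calinski_harabasz_score,
                             davies_bouldin_score)

def clustering_quality(features, labels):
    """SC / CH / DB for bottleneck features grouped by category labels.
    CH and DB are Euclidean in scikit-learn."""
    return {
        'SC': silhouette_score(features, labels, metric='cosine'),
        'CH': calinski_harabasz_score(features, labels),
        'DB': davies_bouldin_score(features, labels),
    }

# e.g., clustering_quality(bottleneck, superordinate_ids) for the "super"
# columns and clustering_quality(bottleneck, basic_ids) for the "basic" ones.
```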
1. What are the main contributions and findings of the paper regarding the comparison of different label types and their impact on the learned representations?
2. How do the autoencoder representation and the supervised representations differ in capturing semantic information?
3. What are some limitations of the study, such as not exploring all possible combinations of label types and taxonomies?
4. How do the imagenet categorical representations compare to other methods in predicting human judgments in the odd-one-out task?
5. Are there any insights gained from comparing the use of word vector representations at both basic and superordinate levels?
6. How might the results change if the odd-one-out comparison involved averaging image representations at the superordinate class level?
7. Can the authors elaborate on why they found it surprising that training with superordinate labels was effective in matching human performance on the triplet odd-one-out task?
8. How do the wordvec representations capture semantic information differently than 1-hot encodings, and what implications does this have for future research?
9. What potential applications or future directions could come from further investigating the relationship between human perceptions and convnet representations?
10. How might the study be improved by expanding the empirical experiments and providing a more expansive analysis of human perceptions?
Review
Review

This paper assesses the effects of training an image classifier with different label types: 1-hot coarse-grained labels (10 classes), 1-hot fine-grained labels (30 labels which are all subcategories of the 10 coarse-grained categories), and word vector representations of the 30 fine-grained labels. They also compare the representations learned from an unsupervised auto-encoder. They assess the different representations through cosine similarity within/between categories and through comparison with human judgments in an odd-one-out task. They find that (i) the auto-encoder representation does not capture the semantic information learned by the supervised representations, (ii) representations learned by the model depend on the label taxonomy, how the targets are represented (1-hot vs. wordvec), and how the model is trained (e.g. fine-grained then coarse-grained stages), and (iii) the different representations predict human judgements to differing degrees.

The first finding is obvious and I'm not even sure why it needs to be stated -- of course semantics of images are not inherently encoded in the pixels of an image! The second point, again, is not surprising. This paper starts to get at some interesting questions but does not follow through. It is also quite confusing to read despite the simple subject matter.

This paper is also missing a related work section! There has been so much work on adding structure to the label space of image classifiers (e.g. models that learn an image/text embedding space jointly, models that predict word vectors, graphical model approaches to building in semantic information, etc.) and none of this is discussed. There has also been work on comparing convnet representations to human percepts (e.g. https://cocosci.princeton.edu/papers/Peterson_et_al-2018-Cognitive_Science.pdf) and none of this work is discussed! This work needs to be better situated within the context of previous work in this field. Please write a related work section.

Detailed comments/questions:

- It would be good to add a super-basic model to Table 1 for comparison (i.e., first train on coarse-level categories and then fine-tune on the more fine-grained taxonomy).
- It would be good to compare the use of word vector representations at both the basic and superordinate levels; the 1-hot vs. word vector targets and the basic vs. superordinate taxonomy seem like orthogonal axes to explore, and I'm not sure why the authors didn't test all combinations.
- The authors found the ImageNet categorical representations were most predictive of human judgements in the odd-one-out task. This seems highly unsurprising since (i) the humans saw images from the ImageNet dataset (not THINGS) and (ii) humans leverage semantic information when making similarity judgements.
- Which categories had the least inter-rater agreement? Was there any relationship between these categories and the similarity of representations learned by the convnet?
- It seems the odd-one-out comparison always involves averaging image representations at the basic category level. In the case where the items come from three different superordinate classes, it would be interesting to see the results when averaging over superordinate classes as well.
- In Table 3, why does the FastText column just list "true"/"false" rather than accuracies? I would expect this column to show the accuracy when the FastText embeddings for the three words are used to compute similarity. I don't understand what the "true"/"false" is meant to indicate.
Also, it's not clear to me what the two rows in Table 3 are meant to correspond to.

- The authors claim "Surprisingly, the kind of supervised input that proved most effective in matching human performance on the triplet odd-one-out task was training with superordinate labels". This should be qualified: the superordinate labels are highly effective when there are two or more superordinate classes represented in the triplet, including when the three items come from three different superordinate classes. I'm also not clear why this would be surprising? Could the authors elaborate?
- I'm surprised more space isn't given to discussing the wordvec representations, since these should capture some of the semantic information that the 1-hot encodings might miss. In fact, the word vector targets seem to perform as well as or close to the other representations on the odd-one-out task.

In short, I really like the overall idea of comparing convnet representations with human perceptions of images. However, this work barely scratches the surface of what could be done here and mostly reveals incredibly obvious results. There are so many interesting questions to ask regarding the relationship between how humans perceive similarity and what is encoded in a convnet representation. For example, it would have been very interesting to test the effects of asking the human raters to cue in on different aspects of the image. Focusing on semantic similarity, visual similarity, etc. would all likely give different ratings.

----------------------------------------------------------

Update (in light of rebuttal)

I appreciate the authors' lengthy and considered response, in particular the updated related work and expansion of the empirical experiments. While I am more comfortable with this paper being accepted than previously (and have updated my score to "weak accept" to reflect this), I still think the paper has a lot of room for improvement. In particular, I suggest a more expansive analysis of human perceptions and a discussion of the implications of the findings.
ICLR
Title: Effects of Linguistic Labels on Learned Visual Representations in Convolutional Neural Networks: Labels matter!

Abstract

We investigated how the visual representations learned by CNNs are affected by training using different linguistic labels (e.g., basic-level labels only, superordinate-level labels only, or both at the same time), and how these differently-trained models compare in their ability to predict the behavior of humans tasked with selecting the object that is most different from two others in a triplet. The CNNs used identical architectures and inputs, differing only with respect to the labels used to supervise the training. In the absence of labels, we found that models learned very little categorical structure, suggesting that this structure cannot be extracted purely from the visual input. Surprisingly, models trained with superordinate labels (vehicle, tool, etc.) were most predictive of the behavioral similarity judgments. We conclude that the representations used in an odd-one-out task are highly modulated by semantic information, especially at the superordinate level.

1 INTRODUCTION

A critical distinction between human category learning and machine category learning is that only humans have a language. A language means that human learning is not limited to a one-to-one correspondence between a visual input and a category label. Indeed, the users of a language are known to actively seek out categorical relationships between objects and use these relationships in making perceptual similarity judgments and in controlling behavior (Hays, 2000; Lupyan & Lewis, 2017). A premise of our work is that a language provides a semantic structure to labels, and that this structure contributes to the superior efficiency and flexibility of human vision compared to any artificial system (Pinto et al., 2010). Of course, the computer vision literature on zero-shot and few-shot learning has also made good progress in leveraging semantic information (e.g., image captions, attribute labels, relational information) to increase the generalizability of a model's performance (Lampert et al., 2013; Sung et al., 2018; Lei Ba et al., 2015). Still, this performance pales in comparison to the human ability for classification, where zero-shot and few-shot learning is the norm, and efficiently-acquired category knowledge is easily generalized to new exemplars (Ashby & Maddox, 2005; Ashby & Ell, 2001). One reason why machine learning lags behind human performance may be a failure to fully consider the semantic structure of the ground-truth labels used for training, which can be heavily biased toward basic- or subordinate-level categories. This might result in models learning visual feature representations that may not be best for generalization to new, higher-level categories. For example, ImageNet (Deng et al., 2009) contains 120 different dog categories, making the models that are trained using these labels dog experts, creating an interesting but highly atypical semantic structure. Here we study how the linguistic structure of labels influences what is learned by models trained on the same visual inputs. Specifically, we manipulated the labels used to supervise the training of CNN models, each having the same architecture and given identical visual inputs. For example, some of these models were trained with basic-level labels only, some with only superordinate-level labels, and some with both.
We then compare the visual representations learned by these models, and predict human similarity judgements that we collected using an Odd-one-out task, where people had to select which of three object images was the most different. With this dataset, and using categorical representations extracted from our trained models, we could predict human similarity decisions with up to 74% accuracy, which gives us some understanding of the labels needed to produce human-like representations. Our study also broadly benefits both computer vision and behavioral science (e.g., psychology, neuroscience) by suggesting that the semantic structure of labels and datasets should be carefully constructed if the goal is to build vision models that learn visual feature representations having the potential for human-like generalization. For behavioral science, this research provides a useful computational framework for understanding the effect of training labels on the human learning of category relationships in the context of thousands of naturalistic images of objects.

2 RELATED WORK

2.1 SEMANTIC LABEL EMBEDDING

Although many computer vision models perform well in image classification, generalization tasks such as zero-shot and few-shot learning remain challenging. Several studies have attempted to address this problem by embedding semantic information into a model's representations using text descriptions (Lei Ba et al., 2015), attribute properties (Lampert et al., 2013; Akata et al., 2015; Chen et al., 2018), and relationships between objects (Sung et al., 2018; Annadani & Biswas, 2018). More related to our work, some studies have directly leveraged the linguistic structure of labels. For example, Lei et al. (2017) and Wang & Cottrell (2015) found that training CNNs with coarse-grained labels (e.g., basic-level categories) improves classification accuracy for finer-grained labels (e.g., subordinate-level labels). Also, Frome et al. (2013) re-trained a CNN to predict the word vectors learned by a word embedding model, instead of using one-hot labels, and found improved zero-shot predictions; the model was able to predict thousands of novel categories that were never seen, with 18% accuracy. These results suggest that different semantic structures of labels, such as word hierarchy, an order of learning, or semantic similarity between words, affect the learned visual representations in CNNs to differing degrees. The current study provides a more systematic investigation of this question.

2.2 UNDERSTANDING HUMAN VISUAL REPRESENTATION

The human visual system is unparalleled in its ability to learn feature representations for objects that are robust to large changes in appearance. This tolerance to variability not only enables accurate object recognition, but also facilitates generalization to new exemplars and categories (DiCarlo et al., 2012). Understanding how humans learn these visual representations is, therefore, an enormously important question, but one that is difficult to study because human learning in the real world is affected and confounded by many factors that are difficult to control experimentally. Recently, work has addressed this issue by computationally modeling and simulating human representation. For example, Hebart et al.
(2019) studied human visual representations by fitting probabilistic models to human similarity judgements, and found that human visual representations are composed of semantically interpretable units, each conveying categorical membership, functionality, and perceptual attributes. Peterson et al. (2018), the study most similar to ours, trained CNNs with labels that differed in hierarchy (e.g., subordinate-level vs. basic-level). They found that training on coarser-grained labels (either standalone or after finer-grained labels) induces a more semantically structured representation and produces more human-like generalization performance. The current study builds on this earlier work by 1) including CNNs trained with no labels (autoencoder) or very fine-grained labels (word vectors), 2) testing on a large-scale dataset of human similarity judgements, and 3) comparing superordinate vs. basic levels.

3 MODEL TRAINING

Our goal is to study how linguistic labels change the visual representations learned by CNNs. To do this, we trained equivalently designed CNNs for classification, each with different linguistic labels as ground truth. In addition, we trained a Convolutional autoencoder, which encodes the images using the same convolutional structure as the other models but, instead of being supervised to predict the class of the image, is trained to generate an output image that is the same as the input. This Conv. Autoencoder, therefore, represents a model that was not trained with any linguistic label, in contrast to the other models that were each trained with some type of linguistic label. The description of each model and the labels used for training are provided below.

• Conv. Autoencoder: Autoencoder with Convolutional encoder and decoder, trained to output the same image as the input
• Basic labels: CNN model trained with one-hot encoding of basic-level categories, n=30
• Superordinate labels: CNN model trained with one-hot encoding of superordinate-level categories, n=10
• Basic + Superordinate: CNN model trained with two-hot encoding of both basic- and superordinate-level categories, n=40 (10+30)
• Basic then Superordinate: CNN model trained with one-hot encoding of basic-level categories first (n=30), and then finetuned with one-hot encoding of superordinate categories (n=10)
• Superordinate then Basic: CNN model trained with one-hot encoding of superordinate-level categories first (n=10), and then finetuned with one-hot encoding of basic categories (n=30)
• Basic FastText vectors: CNN model trained with basic-level word vectors extracted from the FastText word embedding model (Bojanowski et al., 2017), dimension=300
• Superordinate FastText vectors: CNN model trained with superordinate-level word vectors extracted from the FastText word embedding model (Bojanowski et al., 2017), dimension=300

An identical CNN architecture was used for each model in our labeling manipulation, except for the output layer and its activation function. This general pipeline is described in Figure 1. Our CNN models consist of five blocks of two Convolutional layers, with each block followed by Max pooling and Batch normalization layers. For all Convolutional and Max pooling operations, zero padding was used to produce output feature maps having the same size as the input. Rectified linear units (ReLU) were used as the activation function after each convolution. The flattened output of the final Convolutional layer, the "bottleneck" feature that we later extract and use as a model's visual representation (dim=1568), was then fed into one fully connected dense layer. For the Conv. Autoencoder, the same Convolutional architecture was used for encoding and decoding, with the hidden layer in the model (dim=1568) serving as the bottleneck feature for analysis. The final predicted output, the "label vector", is either a one-hot vector or a word embedding, according to the model's target labels. Output activation functions differed depending on what label vector was used: a sigmoid function for the Basic + Superordinate CNN, a linear function for the Conv. Autoencoder and FastText vectors CNNs, and a softmax for the rest of the CNNs.
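A hedged Keras sketch of this architecture: the block structure, same-padding, and 1568-dimensional bottleneck follow the description above, but the input size and per-block filter counts are our assumptions (with 224×224 inputs, five 2×2 poolings give 7×7 maps, so 32 final filters yield 7·7·32 = 1568):

```python
from tensorflow.keras import layers, models

def build_cnn(n_outputs, out_activation, input_shape=(224, 224, 3)):
    """Five blocks of two same-padded Conv2D layers, each block followed by
    MaxPooling and BatchNormalization, flattened into the bottleneck feature."""
    x = inputs = layers.Input(shape=input_shape)
    for filters in (32, 32, 64, 64, 32):       # filter counts are our guess;
        for _ in range(2):                     # only the 1568-d bottleneck is given
            x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
        x = layers.MaxPooling2D(2, padding='same')(x)
        x = layers.BatchNormalization()(x)
    bottleneck = layers.Flatten(name='bottleneck')(x)   # 7 * 7 * 32 = 1568 dims
    outputs = layers.Dense(n_outputs, activation=out_activation)(bottleneck)
    return models.Model(inputs, outputs)

# e.g., build_cnn(30, 'softmax') for Basic labels,
#       build_cnn(300, 'linear') for FastText vector targets.
```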
All models were trained and validated on images of 30 categories from the IMAGENET 2012 dataset (Deng et al., 2009), and tested on images of the same 30 categories from the THINGS dataset (Hebart et al., 2019). These 30 basic-level categories were grouped into 10 higher-level, superordinate categories: 'mammal', 'bird', 'insect', 'fruit', 'vegetable', 'vehicle', 'container', 'kitchen appliance', 'musical instrument', and 'tool'. A list of all 30 categories, with their superordinates, is provided in the Supplementary 7.1. All input images were converted from RGB to BGR, and each channel was zero-centered with respect to the ImageNet images. Different loss functions were used for training different models: Binary Crossentropy loss for the Basic + Superordinate CNN, Mean Squared Error loss for the Conv. Autoencoder and FastText vectors CNNs, and Categorical Crossentropy loss for the rest of the CNNs. All models were trained using Adam optimization (Kingma & Ba, 2014) with a mini-batch size of 64. During training, early stopping was implemented, and the model with the lowest validation loss was used for the following analyses.

4 BEHAVIORAL DATA

To compare the visual representations learned by our trained models with those of humans, we collected human similarity judgments in an Odd-one-out task, as in Zheng et al. (2019). Participants were shown three images of objects per trial, a triplet, and were asked to choose which object was most different from the other two. Each triplet consisted of three exemplar objects from the 30 categories used for our model training. All exemplar objects came from Zheng et al. (2019), except for 'crate', 'hammer', 'harmonica', and 'screwdriver', which were replaced with new exemplars to increase image quality and category representativeness. There are 4060 possible triplets that can be generated from all 30 categories, but we collected behavioral data on only a subset of these to reduce the time and cost of data collection. This subset includes 1) the ten triplets whose objects all come from the same superordinate category, e.g., ('orangutan', 'lion', 'gazelle'); 2) 435 triplets where two objects came from the same superordinate category, e.g., ('orangutan', 'lion', 'minivan'); and 3) 1375 triplets where all objects came from different superordinate categories, e.g., ('orangutan', 'minivan', 'lemon'), yielding 1820 unique triplets in total. 51 Amazon Mechanical Turk (AMT) workers participated in this task, each making responses on ∼200 triplets. After removing responses with reaction times below 500 ms, we collected 9697 similarity judgments, where each triplet was viewed by 5.6 workers on average (min=4, max=51).
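A minimal pandas sketch of the response cleaning and aggregation described above; the schema and column names are ours, not the authors':

```python
import pandas as pd

# Toy stand-in; each row is one AMT response.
df = pd.DataFrame({
    'triplet_id': [0, 0, 0, 1, 1],
    'choice':     [2, 2, 0, 1, 1],    # index of the selected odd one out
    'rt_ms':      [812, 430, 1050, 730, 960],
})

df = df[df['rt_ms'] >= 500]                       # drop fast (<500 ms) responses
majority = (df.groupby('triplet_id')['choice']    # modal human choice per triplet,
              .agg(lambda s: s.mode().iloc[0]))   # as used for the Bayes upper bound
```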
5 EXPERIMENTS

5.1 EVALUATING MODEL PERFORMANCE

Although our goal was not to compete with state-of-the-art vision models in classification, we evaluated classification accuracy to see the effects of different labels on learning, thereby confirming that the visual features learned by our models represented category knowledge. To evaluate classification accuracy, we report top@k, the percentage of accurately classified test images where the true class was among the model's top k predictions, in Table 1. Average precision and average recall over all categories are also reported in the Supplementary 7.3. All metrics were computed on the THINGS test dataset (Hebart et al., 2019). Because the FastText vectors CNN predicts a word vector, not a class, we approximated its classification performance by calculating the cosine similarity between predicted and true word vectors and choosing the corresponding class from the top@k similarities. Classification results cannot be generated from the Conv. Autoencoder, but we include examples of images generated by this model in the Supplementary 7.2 to show that the model worked. As can be seen in Table 1, the top@5 classification accuracy for all trained models was good (all >.82), although there is room for improved classification for the FastText vectors CNN.

5.2 EXPLORING VISUAL REPRESENTATIONS

To explore how the different linguistic labeling schemes affected the learned visual representations, we extracted and analyzed the bottleneck features from each model (i.e., the 1568-dimensional output of the last Convolutional layer; see Figure 1). We first measured the representational similarity of all objects in the training dataset (IMAGENET 2012; Deng et al., 2009) both between and within each category. These representational distributions were visualized using t-SNE (Maaten & Hinton, 2008) and are attached in Supplementary 7.5. We also analyzed the similarity between categorical representations by plotting a similarity matrix in Figure 2. To create categorical representations, we simply averaged the obtained bottleneck features over all training images per category, creating, in a sense, a "prototypical" representation for each class.

Clustering Quality To investigate how dense and well separated each model's category representations are, we computed the ratio of between-category dispersion to within-category dispersion using cosine distance (1 minus the cosine of the angle between two feature vectors). Between-category dispersion is the average cosine distance between the centers (means) of different categories. Within-category dispersion is the average cosine distance between every exemplar and the center of its category. Comparing the models in Table 2 revealed that using distributed word vectors as targets, especially Superordinate FastText vectors, produced the highest between-to-within ratio, suggesting the most tightly clustered representations. Interestingly, the Basic + Superordinate CNN model, which was trained with both basic and superordinate labels at the same time, learned more scattered and less distinguishable categorical representations compared to the other label-trained models. Lastly, the Conv. Autoencoder produced the lowest between-to-within ratio, suggesting that even if a model learns visual features that are good enough to generate input-like images, these visual representations may still be poorly discriminable, not only at the basic level but also at the superordinate level. The widely distributed features of the Conv. Autoencoder in the t-SNE plots in Supplementary 7.5 further support that the visual input alone is not sufficient to produce any clusterable structure or category representations. A similar trend was observed in the other clustering quality measures, as reported in the Supplementary 7.4.
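A minimal sketch of the cosine-similarity approximation of top@k for the FastText vectors CNN described in Section 5.1 above; the function name and array layout are ours:

```python
import numpy as np

def topk_from_word_vectors(pred_vecs, class_vecs, true_ids, k=5):
    """Approximate top@k accuracy when the model outputs word vectors: rank the
    classes by cosine similarity between each prediction and the class vectors."""
    P = pred_vecs / np.linalg.norm(pred_vecs, axis=1, keepdims=True)
    C = class_vecs / np.linalg.norm(class_vecs, axis=1, keepdims=True)
    sims = P @ C.T                             # (n_images, n_classes)
    topk = np.argsort(-sims, axis=1)[:, :k]    # best k classes per image
    return np.mean([t in row for t, row in zip(true_ids, topk)])
```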
Visualization of Categorical Representations Figure 2 visualizes cosine similarity matrices for the category representations learned by the models (a construction sketch follows the baseline list below), to explore whether the hierarchical semantic structure of the 30 categories is captured (e.g., every basic-level category belongs to one of ten superordinate categories). For a complete comparison, we also analyzed categorical representations extracted from SPoSE (Zheng et al., 2019), FastText (Bojanowski et al., 2017), and a VGG16 early layer (i.e., the output from the first max-pooling layer; Simonyan & Zisserman, 2014). The SPoSE model's category representations were trained on human similarity judgments; they serve as an approximation of human perceived similarity, which can be a combination of semantic and visual similarities. While FastText similarity represents the semantic similarity between categories in basic-level terms, VGG16 early-layer similarity represents lower-level visual similarity. Whereas little effect of category hierarchy can be seen in the VGG16 early-layer or Conv. Autoencoder features, varied semantic structure can be observed in the other models (e.g., the emergent bright yellow squares in the figure). Upon closer analysis, these categorical divisions seemed to occur for 1) nature vs. non-nature, 2) edible vs. non-edible, and 3) the superordinate categories. Surprisingly, basic-level structure is still observed in Figure 2f (e.g., fine-grained lines on the diagonal), where the model is trained only on the superordinate-level labels. This suggests that guidance from superordinate labels was often as good as or better than guidance from much finer-grained basic-level labels, which is consistent with the previous finding that training with coarser labels induces more hierarchical structure in visual representations (Peterson et al., 2018).

5.3 PREDICTING HUMAN VISUAL BEHAVIOR

Finally, we evaluated how well the visual representations learned by the models could predict human similarity judgement in an Odd-one-out task (see Section 4). For each triplet, responses were generated from the models by comparing the cosine similarities between the three visual object representations and selecting the one most dissimilar from the other two. Three kinds of visual representations were computed and compared: 1) IMAGENET categorical representations, where features were averaged over ∼1000 images per category from the IMAGENET training dataset (Deng et al., 2009); 2) THINGS categorical representations, where features were averaged over ∼10 images per category from the THINGS dataset (Hebart et al., 2019); and 3) Single Exemplar representations, where only one feature per category was generated, for the 30 exemplar images used in the behavioral data collection. Together with accuracies from SPoSE (Zheng et al., 2019), FastText (Bojanowski et al., 2017), and the VGG16 early layer (Simonyan & Zisserman, 2014), three baseline accuracies are reported below, which constitute upper and lower bounds.

• Null Acc: Accuracy achieved by predicting that every sample is the most frequent class in the dataset (lower bound, 36%).

• Bayes Acc: Accuracy achieved by predicting that every sample is the most frequent class in each unique triplet set (upper bound, 84%).

• SPoSE Acc: Accuracy achieved using the SPoSE model (Zheng et al., 2019), a probabilistic model that is directly trained on human responses on all triplets from the 1854 THINGS objects (80%).
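The sketch promised above: a minimal construction of a Figure 2-style cosine similarity matrix from "prototypical" category representations (names and layout are ours):

```python
import numpy as np

def category_similarity_matrix(prototypes):
    """Cosine similarity between prototypical category representations,
    where `prototypes` is a (30, 1568) array of per-category mean features."""
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return P @ P.T   # (30, 30); bright blocks indicate similar categories

# Ordering the rows by superordinate group makes the 3x3 diagonal blocks visible.
```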
As shown in Figure 3, triplet prediction accuracy was highest when models used IMAGENET category representations and lowest when single exemplar representations were used, even when the exemplar image was the one that the participant actually saw during the experiment. This shows that when humans make visual similarity ratings, they not only evaluate visual inputs but also use rich and abstract semantic information learned from viewing myriad exemplars. Comparing individual model performance, the highest accuracy (74%) was obtained by the model trained with superordinate labels. This performance is particularly impressive, considering 1) how coarsely grained superordinate labels are (dim=10) compared to Basic labels (dim=30), Basic + Superordinate labels (dim=40), or FastText vectors (dim=300), and 2) that this model is not trained on the actual human triplet data, as was the case for the SPoSE model, whose performance was about 80%. These results suggest that the representations used by humans in an Odd-one-out task are highly semantic, reflecting category structure, especially at the superordinate level.

However, this may be only because the setting of the odd-one-out task has caused people to use superordinate-label information. For example, when participants are given a triplet like ('orangutan', 'lion', 'lemon'), they are prone to choose 'lemon' because it is the odd one out at the superordinate level. In fact, when the number of superordinate categories in a triplet is two, as in the example above, 90% of human responses can be predicted simply by identifying which item belongs to the odd superordinate category. To investigate how much this task setting would affect the results, we broke down the triplet data by the number of superordinate categories that a triplet spans and report prediction performance for each split, as shown in Figure 3. Interestingly, the model trained with superordinate labels alone still performed the best (63%) when superordinate-level information was not very helpful, i.e., where all three images in a triplet come from three different superordinate categories, e.g., ('mammal', 'fruit', 'vehicle'). Moreover, the superordinate labels CNN (59%) outperformed the basic labels CNN (56%) even when the images were to be compared at the basic level, i.e., where all three images in a triplet come from the same superordinate category, e.g., ('lemon', 'orange', 'banana'). This implies that humans leverage the guidance from coarser superordinate labels in shaping categorical visual representations at both the basic and superordinate levels.
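A minimal sketch of the two baseline accuracies defined in the list earlier in this section, assuming per-response arrays of triplet ids and choices (names are ours):

```python
from collections import Counter

def null_accuracy(choices):
    """Predict the globally most frequent response for every trial (lower bound)."""
    majority = Counter(choices).most_common(1)[0][0]
    return sum(c == majority for c in choices) / len(choices)

def bayes_accuracy(triplet_ids, choices):
    """Predict the most frequent response within each unique triplet: an upper
    bound for any model that maps a triplet to a single answer."""
    per_triplet = {}
    for t, c in zip(triplet_ids, choices):
        per_triplet.setdefault(t, []).append(c)
    hits = sum(Counter(cs).most_common(1)[0][1] for cs in per_triplet.values())
    return hits / len(choices)
```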
6 CONCLUSION

To be able to generalize to unseen exemplars, any vision system has to learn statistical regularities that make members of the same category more similar to one another than members of other categories. But where do these regularities come from? Are they present in the bottom-up (visual) input to the network? Or does learning the regularities require top-down guidance from category labels? If so, what kinds of labels? To investigate this problem, we manipulated the visual representations learned by CNNs by supervising them using different types of labels, and then evaluated these models in their ability to predict human similarity judgments.

We found that the type of label used during training profoundly affected the visual representations that were learned, suggesting that there is categorical structure that is not present in the visual input and instead requires top-down guidance in the form of category labels. We also found that guidance from superordinate labels was often as good as or better than guidance from much finer-grained basic-level labels. Models trained only on superordinate class labels such as "musical instrument" and "container" were not only more sensitive to these broader classes than models trained on just basic-level labels, but exposure to just superordinate labels allowed the models to learn within-class structure, distinguishing a harmonica from a flute, and a screwdriver from a hammer. This finding is consistent with previous work showing that training with coarser labels induces more semantically structured visual representations (Peterson et al., 2018). More surprisingly, models supervised using superordinate labels (vehicle, tool, etc.) were best at predicting human performance on a triplet odd-one-out task. CNNs trained with superordinate labels not only outperformed other models when the odd one out came from a different superordinate category (which is not surprising), but also when all three objects in a triplet came from different superordinate categories (e.g., when choosing between a banana, a bee, and a screwdriver). Our ongoing work into how different types of labels shape visual representations is exploring the effect of labels specific to different languages (e.g., English vs. Mandarin), and how these may translate to differential human and CNN classification performance.

ACKNOWLEDGMENTS

Details regarding research support will be added post-review.

7 SUPPLEMENTARY MATERIAL

7.1 LIST OF 30 CATEGORIES

| Superordinate-level Category | Basic-level Category | WordNet ID |
| --- | --- | --- |
| Mammal | Orangutan | n02480495 |
| Mammal | Gazelle | n02423022 |
| Mammal | Lion | n02129165 |
| Insect | Ant | n02219486 |
| Insect | Bee | n02206856 |
| Insect | Grasshopper | n02226429 |
| Bird | Hummingbird | n01833805 |
| Bird | Goose | n01855672 |
| Bird | Vulture | n01616318 |
| Vegetable | Artichoke | n07718747 |
| Vegetable | Cucumber | n07718472 |
| Vegetable | Zucchini | n07716358 |
| Fruit | Orange | n07747607 |
| Fruit | Lemon | n07749582 |
| Fruit | Banana | n07753592 |
| Tool | Hammer | n03481172 |
| Tool | Screwdriver | n04154565 |
| Tool | Shovel | n04208210 |
| Vehicle | Minivan | n03770679 |
| Vehicle | Trolley | n04335435 |
| Vehicle | Taxi | n02930766 |
| Musical Instrument | Drum | n03249569 |
| Musical Instrument | Flute | n03372029 |
| Musical Instrument | Harmonica | n03494278 |
| Kitchen Appliance | Refrigerator | n04070727 |
| Kitchen Appliance | Toaster | n04442312 |
| Kitchen Appliance | Coffee pot | n03063689 |
| Container | Bucket | n02909870 |
| Container | Mailbox | n03710193 |
| Container | Crate | n03127925 |

7.2 CONV. AUTOENCODER PREDICTIONS

7.3 AVERAGE PRECISION AND AVERAGE RECALL SCORES FOR THE TRAINED MODELS

The scores were sample-wise averaged (i.e., averaged over samples) for the Basic + Superordinate CNN, and macro-averaged (i.e., averaged over categories) for the other models.

| Model Name | Learning Scheme | # Classes | Output Dimension | Average Precision | Average Recall |
| --- | --- | --- | --- | --- | --- |
| Basic labels | One-step | 30 | 30 | 0.90 | 0.90 |
| Superordinate labels | One-step | 10 | 10 | 0.94 | 0.94 |
| Basic + Superordinate | One-step | 40 | 40 | 0.91 | 0.91 |
| Basic then Superordinate | Two-step | 10 | 10 | 0.95 | 0.95 |
| Superordinate then Basic | Two-step | 30 | 30 | 0.88 | 0.88 |
| Basic FastText vectors | One-step | 30 | 300 | 0.47 | 0.50 |
| Superordinate FastText vectors | One-step | 10 | 300 | 0.72 | 0.75 |
7.4 OTHER CLUSTERING QUALITY MEASURES

SC: Silhouette Coefficient; CH: Calinski-Harabasz Index; DB: Davies-Bouldin Index; BW: between-to-within class dispersion in cosine distance. The arrows indicate the direction in which a metric value represents denser and better-separated clusterings.

| Model | SC↑ (super) | CH↑ (super) | DB↓ (super) | BW↑ (super) | SC↑ (basic) | CH↑ (basic) | DB↓ (basic) | BW↑ (basic) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Conv. Autoencoder | -0.06 | 166.08 | 12.24 | 0.11 | -0.09 | 70.19 | 15.19 | 0.15 |
| Basic labels | -0.01 | 427.43 | 6.45 | 0.64 | -0.02 | 200.45 | 7.35 | 0.84 |
| Superordinate labels | 0.00 | 628.95 | 5.25 | 0.71 | -0.02 | 226.09 | 11.04 | 0.80 |
| Basic + Superordinate | -0.01 | 534.81 | 5.79 | 0.61 | -0.02 | 231.97 | 7.62 | 0.78 |
| Basic then Superordinate | 0.00 | 580.74 | 5.61 | 0.76 | -0.02 | 233.15 | 8.62 | 0.90 |
| Superordinate then Basic | -0.01 | 525.59 | 5.53 | 0.75 | -0.01 | 227.35 | 7.47 | 0.93 |
| Basic FastText vectors | -0.01 | 1021.60 | 5.20 | 0.95 | -0.04 | 423.39 | 8.75 | 1.14 |
| Superordinate FastText vectors | -0.01 | 1324.88 | 5.24 | 1.11 | -0.05 | 445.75 | 14.02 | 1.18 |

7.5 T-SNE PLOTS FROM OUR TRAINED MODELS

[Figure: t-SNE plots of bottleneck features, one panel per model: Conv. Autoencoder; Basic labels; Superordinate labels; Basic + Superordinate; Basic then Superordinate; Superordinate then Basic; Basic FastText vectors; Superordinate FastText vectors.]
1. What is the focus of the paper regarding CNNs and labeling schemes?
2. What are the strengths of the paper, particularly in its comparative study and use of human judgment dataset?
3. What are the limitations of the paper, especially regarding the interpretation of representations learned?
4. Do you have any concerns about the conclusion drawn from the study?
Review
Review

The authors conduct a comparative study of several variants of CNNs trained on ImageNet images from THINGS categories with different types of labeling schemes (direct, superordinate, word2vec embedding targets, etc.). They also use a human judgement dataset based on odd-one-out classification for triplets of inputs as a comparison, to evaluate whether the CNNs are able to capture the linguistic structure in the label categories as determined by the relation of the superordinate labels to the basic labels. The authors used the t-SNE embeddings to visualize the representations learned and evaluate whether these cluster related classes closely enough. Not surprisingly, training with the word2vec targets produced the best representations for similarity between/within categories. Interestingly, the autoencoder failed to learn representations that are easily interpretable by the analysis tools they were using.

This is an interesting study. The core claim being made is as follows: "The representations learned by the models are shaped enormously by the kinds of supervision the models get suggesting that much of the categorical structure is not present in the visual input, but requires top-down guidance in the form of category labels."

The fact that the representations being learned are shaped strongly by the supervision is probably not surprising or in contention. However, it is not clear that the representations being learned can be exhaustively interpreted by convenient visualization tools. In my opinion, absence of evidence here is not clearly evidence of absence. However, I still think these are interesting analyses, so I am giving a weak accept.
ICLR
Title: Effects of Linguistic Labels on Learned Visual Representations in Convolutional Neural Networks: Labels matter!

Abstract

We investigated how the visual representations learned by CNNs are affected by training using different linguistic labels (e.g., basic-level labels only, superordinate-level labels only, or both at the same time), and how these differently-trained models compare in their ability to predict the behavior of humans tasked with selecting the object that is most different from two others in a triplet. The CNNs used identical architectures and inputs, differing only with respect to the labels used to supervise the training. In the absence of labels, we found that models learned very little categorical structure, suggesting that this structure cannot be extracted purely from the visual input. Surprisingly, models trained with superordinate labels (vehicle, tool, etc.) were most predictive of the behavioral similarity judgments. We conclude that the representations used in an odd-one-out task are highly modulated by semantic information, especially at the superordinate level.

1 INTRODUCTION

A critical distinction between human category learning and machine category learning is that only humans have a language. A language means that human learning is not limited to a one-to-one correspondence between a visual input and a category label. Indeed, the users of a language are known to actively seek out categorical relationships between objects and use these relationships in making perceptual similarity judgments and in controlling behavior (Hays, 2000; Lupyan & Lewis, 2017). A premise of our work is that a language provides a semantic structure to labels, and that this structure contributes to the superior efficiency and flexibility of human vision compared to any artificial system (Pinto et al., 2010). Of course, the computer vision literature on zero-shot and few-shot learning has also made good progress in leveraging semantic information (e.g., image captions, attribute labels, relational information) to increase the generalizability of a model's performance (Lampert et al., 2013; Sung et al., 2018; Lei Ba et al., 2015). Still, this performance pales in comparison to the human ability for classification, where zero-shot and few-shot learning is the norm, and efficiently-acquired category knowledge is easily generalized to new exemplars (Ashby & Maddox, 2005; Ashby & Ell, 2001). One reason why machine learning lags behind human performance may be a failure to fully consider the semantic structure of the ground-truth labels used for training, which can be heavily biased toward basic- or subordinate-level categories. This might result in models learning visual feature representations that may not be best for generalization to new, higher-level categories. For example, ImageNet (Deng et al., 2009) contains 120 different dog categories, making the models that are trained using these labels dog experts, creating an interesting but highly atypical semantic structure. Here we study how the linguistic structure of labels influences what is learned by models trained on the same visual inputs. Specifically, we manipulated the labels used to supervise the training of CNN models, each having the same architecture and given identical visual inputs. For example, some of these models were trained with basic-level labels only, some with only superordinate-level labels, and some with both.
We then compare the visual representations learned by these models, and predict human similarity judgements that we collected using an Odd-one-out task, where people had to select which of three object images was the most different. With this dataset, and using categorical representations extracted from our trained models, we could predict human similarity decisions with up to 74% accuracy, which gives us some understanding of the labels needed to produce human-like representations. Our study also broadly benefits both computer vision and behavioral science (e.g., psychology, neuroscience) by suggesting that the semantic structure of labels and datasets should be carefully constructed if the goal is to build vision models that learn visual feature representations having the potential for human-like generalization. For behavioral science, this research provides a useful computational framework for understanding the effect of training labels on the human learning of category relationships in the context of thousands of naturalistic images of objects.

2 RELATED WORK

2.1 SEMANTIC LABEL EMBEDDING

Although many computer vision models perform well in image classification, generalization tasks such as zero-shot and few-shot learning remain challenging. Several studies have attempted to address this problem by embedding semantic information into a model's representations using text descriptions (Lei Ba et al., 2015), attribute properties (Lampert et al., 2013; Akata et al., 2015; Chen et al., 2018), and relationships between objects (Sung et al., 2018; Annadani & Biswas, 2018). More related to our work, some studies have directly leveraged the linguistic structure of labels. For example, Lei et al. (2017) and Wang & Cottrell (2015) found that training CNNs with coarse-grained labels (e.g., basic-level categories) improves classification accuracy for finer-grained labels (e.g., subordinate-level labels). Also, Frome et al. (2013) re-trained a CNN to predict the word vectors learned by a word embedding model, instead of using one-hot labels, and found improved zero-shot predictions; the model was able to predict thousands of novel categories that were never seen, with 18% accuracy. These results suggest that different semantic structures of labels, such as word hierarchy, an order of learning, or semantic similarity between words, affect the learned visual representations in CNNs to differing degrees. The current study provides a more systematic investigation of this question.

2.2 UNDERSTANDING HUMAN VISUAL REPRESENTATION

The human visual system is unparalleled in its ability to learn feature representations for objects that are robust to large changes in appearance. This tolerance to variability not only enables accurate object recognition, but also facilitates generalization to new exemplars and categories (DiCarlo et al., 2012). Understanding how humans learn these visual representations is, therefore, an enormously important question, but one that is difficult to study because human learning in the real world is affected and confounded by many factors that are difficult to control experimentally. Recently, work has addressed this issue by computationally modeling and simulating human representation. For example, Hebart et al.
(2019) studied human visual representations by fitting probabilistic models to human similarity judgements, and found that human visual representations are composed of semantically interpretable units, each conveying categorical membership, functionality, and perceptual attributes. Peterson et al. (2018), the study most similar to ours, trained CNNs with labels that differed in hierarchy (e.g., subordinate-level vs. basic-level). They found that training on coarser-grained labels (either standalone or after finer-grained labels) induces a more semantically structured representation and produces more human-like generalization performance. The current study builds on this earlier work by 1) including CNNs trained with no labels (autoencoder) or very fine-grained labels (word vectors), 2) testing on a large-scale dataset of human similarity judgements, and 3) comparing superordinate vs. basic levels.

3 MODEL TRAINING

Our goal is to study how linguistic labels change the visual representations learned by CNNs. To do this, we trained equivalently designed CNNs for classification, each with different linguistic labels as ground truth. In addition, we trained a Convolutional autoencoder, which encodes the images using the same convolutional structure as the other models but, instead of being supervised to predict the class of the image, is trained to generate an output image that is the same as the input. This Conv. Autoencoder, therefore, represents a model that was not trained with any linguistic label, in contrast to the other models that were each trained with some type of linguistic label. The description of each model and the labels used for training are provided below.

• Conv. Autoencoder: Autoencoder with Convolutional encoder and decoder, trained to output the same image as the input
• Basic labels: CNN model trained with one-hot encoding of basic-level categories, n=30
• Superordinate labels: CNN model trained with one-hot encoding of superordinate-level categories, n=10
• Basic + Superordinate: CNN model trained with two-hot encoding of both basic- and superordinate-level categories, n=40 (10+30)
• Basic then Superordinate: CNN model trained with one-hot encoding of basic-level categories first (n=30), and then finetuned with one-hot encoding of superordinate categories (n=10)
• Superordinate then Basic: CNN model trained with one-hot encoding of superordinate-level categories first (n=10), and then finetuned with one-hot encoding of basic categories (n=30)
• Basic FastText vectors: CNN model trained with basic-level word vectors extracted from the FastText word embedding model (Bojanowski et al., 2017), dimension=300
• Superordinate FastText vectors: CNN model trained with superordinate-level word vectors extracted from the FastText word embedding model (Bojanowski et al., 2017), dimension=300

An identical CNN architecture was used for each model in our labeling manipulation, except for the output layer and its activation function. This general pipeline is described in Figure 1. Our CNN models consist of five blocks of two Convolutional layers, with each block followed by Max pooling and Batch normalization layers. For all Convolutional and Max pooling operations, zero padding was used to produce output feature maps having the same size as the input. Rectified linear units (ReLU) were used as the activation function after each convolution. The flattened output of the final Convolutional layer, the "bottleneck" feature that we later extract and use as a model's visual representation (dim=1568), was then fed into one fully connected dense layer. For the Conv. Autoencoder, the same Convolutional architecture was used for encoding and decoding, with the hidden layer in the model (dim=1568) serving as the bottleneck feature for analysis. The final predicted output, the "label vector", is either a one-hot vector or a word embedding, according to the model's target labels. Output activation functions differed depending on what label vector was used: a sigmoid function for the Basic + Superordinate CNN, a linear function for the Conv. Autoencoder and FastText vectors CNNs, and a softmax for the rest of the CNNs.
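One hedged way to obtain the 300-dimensional regression targets for the FastText vectors models, using the official `fasttext` package and its pretrained English vectors; averaging the words of multi-word labels such as 'kitchen appliance' is our assumption, not a procedure the paper specifies:

```python
import numpy as np
import fasttext  # the `fasttext` pip package; usage here is a sketch

model = fasttext.load_model('cc.en.300.bin')  # pretrained 300-d English vectors

def label_vector(name):
    """300-d target vector for a category label; multi-word labels are
    averaged over their constituent words (our assumption)."""
    return np.mean([model.get_word_vector(w) for w in name.split()], axis=0)

targets = {c: label_vector(c) for c in ['orangutan', 'lion', 'kitchen appliance']}
```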
All models were trained and validated on images of 30 categories from the IMAGENET 2012 dataset (Deng et al., 2009), and tested on images of the same 30 categories from the THINGS dataset (Hebart et al., 2019). These 30 basic-level categories were grouped into 10 higher-level, superordinate categories: 'mammal', 'bird', 'insect', 'fruit', 'vegetable', 'vehicle', 'container', 'kitchen appliance', 'musical instrument', and 'tool'. A list of all 30 categories, with their superordinates, is provided in the Supplementary 7.1. All input images were converted from RGB to BGR, and each channel was zero-centered with respect to the ImageNet images. Different loss functions were used for training different models: Binary Crossentropy loss for the Basic + Superordinate CNN, Mean Squared Error loss for the Conv. Autoencoder and FastText vectors CNNs, and Categorical Crossentropy loss for the rest of the CNNs. All models were trained using Adam optimization (Kingma & Ba, 2014) with a mini-batch size of 64. During training, early stopping was implemented, and the model with the lowest validation loss was used for the following analyses.

4 BEHAVIORAL DATA

To compare the visual representations learned by our trained models with those of humans, we collected human similarity judgments in an Odd-one-out task, as in Zheng et al. (2019). Participants were shown three images of objects per trial, a triplet, and were asked to choose which object was most different from the other two. Each triplet consisted of three exemplar objects from the 30 categories used for our model training. All exemplar objects came from Zheng et al. (2019), except for 'crate', 'hammer', 'harmonica', and 'screwdriver', which were replaced with new exemplars to increase image quality and category representativeness. There are 4060 possible triplets that can be generated from all 30 categories, but we collected behavioral data on only a subset of these to reduce the time and cost of data collection. This subset includes 1) the ten triplets whose objects all come from the same superordinate category, e.g., ('orangutan', 'lion', 'gazelle'); 2) 435 triplets where two objects came from the same superordinate category, e.g., ('orangutan', 'lion', 'minivan'); and 3) 1375 triplets where all objects came from different superordinate categories, e.g., ('orangutan', 'minivan', 'lemon'), yielding 1820 unique triplets in total. 51 Amazon Mechanical Turk (AMT) workers participated in this task, each making responses on ∼200 triplets. After removing responses with reaction times below 500 ms, we collected 9697 similarity judgments, where each triplet was viewed by 5.6 workers on average (min=4, max=51).
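For quick reference, our reading of the per-model output/loss configuration implied by Section 3, expressed as a small sketch (the mapping is our summary, not the authors' code):

```python
# Per-model training configuration as we read Section 3.
CONFIGS = {
    'Basic labels':          dict(out_dim=30,  activation='softmax', loss='categorical_crossentropy'),
    'Superordinate labels':  dict(out_dim=10,  activation='softmax', loss='categorical_crossentropy'),
    'Basic + Superordinate': dict(out_dim=40,  activation='sigmoid', loss='binary_crossentropy'),
    'FastText vectors':      dict(out_dim=300, activation='linear',  loss='mse'),
}

# e.g., model.compile(optimizer='adam', loss=CONFIGS['Basic labels']['loss'])
```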
5 EXPERIMENTS

5.1 EVALUATING MODEL PERFORMANCE

Although our goal was not to compete with state-of-the-art vision models in classification, we evaluated classification accuracy to see the effects of the different labels on learning, thereby confirming that the visual features learned by our models represented category knowledge. To evaluate classification accuracy, we report top@k, the percentage of accurately classified test images for which the true class was among the model's top k predictions, in Table 1. Average precision and average recall over all categories are also reported in Supplementary 7.3. All metrics were computed on the THINGS test dataset (Hebart et al., 2019). Because the FastText vectors CNN predicts a word vector, not a class, we approximated its classification performance by calculating the cosine similarity between the predicted and true word vectors and choosing the corresponding class from the top@k similarities. Classification results cannot be generated from the Conv. Autoencoder, but we include examples of images generated by this model in Supplementary 7.2 to show that the model worked. As can be seen in Table 1, the top@5 classification accuracy for all trained models was good (all >.82), although there is room for improved classification for the FastText vectors CNN.

5.2 EXPLORING VISUAL REPRESENTATIONS

To explore how the different linguistic labeling schemes affected the learned visual representations, we extracted and analyzed the bottleneck features from each model (i.e., the 1568-dimensional output of the last Convolutional layer; see Figure 1). We first measured the representational similarity of all objects in the training dataset (IMAGENET 2012; Deng et al., 2009), both between and within each category. These representational distributions were visualized using t-SNE (Maaten & Hinton, 2008) and are attached in Supplementary 7.5. We also analyzed the similarity between categorical representations by plotting a similarity matrix in Figure 2. To create categorical representations, we simply averaged the bottleneck features obtained from all training images per category, creating, in a sense, a "prototypical" representation for each class.

Clustering Quality. To investigate how dense and well separated each model's category representations are, we computed the ratio of between-category dispersion to within-category dispersion using cosine distance (one minus the cosine similarity of two feature vectors). Between-category dispersion is the average cosine distance between the centers (means) of different categories. Within-category dispersion is the average cosine distance between every exemplar and the center of its category. Comparing the models in Table 2 revealed that using distributed word vectors as targets, especially Superordinate FastText vectors, produced the highest between-to-within ratio, suggesting the most tightly clustered representations. Interestingly, the Basic + Superordinate CNN model, which was trained with both basic and superordinate labels at the same time, learned more scattered and less distinguishable categorical representations than the other label-trained models. Lastly, the Conv. Autoencoder produced the lowest between-to-within ratio, suggesting that even if a model learns visual features good enough to generate input-like images, these visual representations may still be poorly discriminable not only at the basic level, but also at the superordinate level. A minimal sketch of the dispersion measure is given below.
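The between-to-within dispersion ratio can be computed directly from the bottleneck features. Below is a minimal NumPy sketch; `features` (an n_samples x 1568 array) and `labels` (an array of category indices) are our hypothetical variable names, not the paper's.

```python
import numpy as np

def cosine_distance(a, b):
    # Cosine distance: one minus the cosine similarity of two vectors.
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def between_within_ratio(features, labels):
    cats = np.unique(labels)
    centers = {c: features[labels == c].mean(axis=0) for c in cats}
    # Between-category dispersion: average cosine distance between the
    # centers (means) of different categories.
    between = np.mean([cosine_distance(centers[a], centers[b])
                       for i, a in enumerate(cats) for b in cats[i + 1:]])
    # Within-category dispersion: average cosine distance from each
    # exemplar to the center of its own category.
    within = np.mean([cosine_distance(x, centers[c])
                      for c in cats for x in features[labels == c]])
    return between / within
```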
Widely distributed features of the Conv. Autoencoder in the t-SNE plots in Supplementary 7.5 further supported that the visual input alone is not sufficient to produce any clusterable structure or category representations. A similar trend was observed for the other clustering quality measures, as reported in Supplementary 7.4.

Visualization of Categorical Representations. Figure 2 visualizes cosine similarity matrices for the category representations learned by the models, to explore whether the hierarchical semantic structure of the 30 categories is captured (e.g., every basic-level category belongs to one of ten superordinate categories). For a complete comparison, we also analyzed categorical representations extracted from SPoSE (Zheng et al., 2019), FastText (Bojanowski et al., 2017), and an early layer of VGG16 (i.e., the output from the first max-pooling layer; Simonyan & Zisserman, 2014). The SPoSE model's category representations were trained on human similarity judgments; this serves as an approximation of human-perceived similarity, which can be a combination of semantic and visual similarities. While FastText similarity represents the semantic similarity between categories in basic-level terms, the VGG16 early-layer similarity represents lower-level visual similarity. Whereas little effect of category hierarchy can be seen in the VGG16 early-layer or Conv. Autoencoder features, various semantic structures can be observed in the other models (e.g., the emergent bright yellow squares in the figure). Upon closer analysis, these categorical divisions seemed to occur for 1) nature vs. non-nature, 2) edible vs. non-edible, and 3) the superordinate categories. Surprisingly, basic-level structure is still observed in Figure 2f (e.g., fine-grained lines along the diagonal), where the model is trained only on superordinate-level labels. This suggests that guidance from superordinate labels was often as good as, or better than, guidance from much finer-grained basic-level labels, which is consistent with the previous finding that training with coarser labels induces more hierarchical structure in visual representations (Peterson et al., 2018).

5.3 PREDICTING HUMAN VISUAL BEHAVIOR

Finally, we evaluated how well the visual representations learned by the models could predict human similarity judgments in the Odd-one-out task (see Section 4). For each triplet, responses were generated from the models by comparing the cosine similarities between the three visual object representations and selecting the one most dissimilar from the other two. Three kinds of visual representations were computed and compared: 1) IMAGENET categorical representations, where features were averaged over ∼1000 images per category from the IMAGENET training dataset (Deng et al., 2009); 2) THINGS categorical representations, where features were averaged over ∼10 images per category from the THINGS dataset (Hebart et al., 2019); and 3) Single Exemplar representations, where only one feature per category was generated, from the 30 exemplar images used in the behavioral data collection. Together with accuracies from SPoSE (Zheng et al., 2019), FastText (Bojanowski et al., 2017), and the VGG16 early layer (Simonyan & Zisserman, 2014), three baseline accuracies are reported below, which constitute upper and lower bounds (a sketch of the triplet prediction procedure follows the list).

• Null Acc: Accuracy achieved by predicting that every sample is the most frequent class in the dataset (lower bound, 36%).
• Bayes Acc: Accuracy achieved by predicting that every sample is the most frequent class in each unique triplet set (upper bound, 84%).
• SPoSE Acc: Accuracy achieved using the SPoSE model (Zheng et al., 2019), a probabilistic model that is directly trained on human responses on all triplets from the 1854 THINGS objects (80%).
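A minimal sketch of how a model response is generated for one triplet, following the procedure described above (the function and variable names are ours): compute the cosine similarity of each pair and return the object excluded from the most similar pair.

```python
import numpy as np

def odd_one_out(reps):
    # reps: list of three 1-D feature vectors, one per object in the triplet.
    def cos(a, b):
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = [cos(reps[1], reps[2]),  # pair similarity excluding object 0
            cos(reps[0], reps[2]),  # pair similarity excluding object 1
            cos(reps[0], reps[1])]  # pair similarity excluding object 2
    # The predicted odd one out is the object whose exclusion leaves
    # the most similar remaining pair.
    return int(np.argmax(sims))
```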
As shown in Figure 3, triplet prediction accuracy was highest when models used IMAGENET category representations and lowest when single exemplar representations were used, even if the exemplar image is the one that the participant actually saw during the experiment. This shows that when humans make visual similarity ratings, they not only evaluate visual inputs but also use rich and abstract semantic information learned from viewing myriad exemplars. Comparing individual model performance, the highest accuracy (74%) was obtained by the model trained with superordinate labels. This performance is particularly impressive, considering 1) how coarse-grained superordinate labels are (dim=10) compared to Basic labels (dim=30), Basic + Superordinate labels (dim=40), or FastText vectors (dim=300), and 2) that this model was not trained on the actual human triplet data, as was the case for the SPoSE model, whose performance was about 80%. These results suggest that the representations used by humans in an Odd-one-out task are highly semantic, reflecting category structure, especially at the superordinate level.

However, this may be only because the setting of the odd-one-out task encourages people to use superordinate label information. For example, when participants are given a triplet like ('orangutan', 'lion', 'lemon'), they are prone to choose 'lemon' because it is the odd one out at the superordinate level. In fact, when the number of superordinate categories in a triplet is two, as in the example above, 90% of human responses can be predicted simply by identifying the odd superordinate category. To investigate how much this task setting affects the results, we broke down the triplet data based on the number of superordinate categories a triplet spans and reported prediction performance for each split, as shown in Figure 3. Interestingly, the model trained with superordinate labels alone still performed the best (63%) when superordinate-level information was not very helpful, i.e., when all three images in a triplet come from three different superordinate categories, e.g., ('mammal', 'fruit', 'vehicle'). Moreover, the superordinate labels CNN (59%) outperformed the basic labels CNN (56%) even when the images had to be compared at the basic level, i.e., when all three images in a triplet come from the same superordinate category, e.g., ('lemon', 'orange', 'banana'). This implies that humans leverage the guidance of coarser superordinate labels in shaping categorical visual representations at both the basic and superordinate levels.

6 CONCLUSION

To be able to generalize to unseen exemplars, any vision system has to learn statistical regularities that make members of the same category more similar to one another than to members of other categories. But where do these regularities come from? Are they present in the bottom-up (visual) input to the network? Or does learning the regularities require top-down guidance from category labels? If so, what kinds of labels? To investigate this problem, we manipulated the visual representations learned by CNNs by supervising them with different types of labels, and then evaluated these models on their ability to predict human similarity judgments.
We found that the type of label used during training profoundly affected the visual representations that were learned, suggesting that there is categorical structure that is not present in the visual input and instead requires top-down guidance in the form of category labels. We also found that guidance from superordinate labels was often as good as, or better than, guidance from much finer-grained basic-level labels. Models trained only on superordinate class labels such as "musical instrument" and "container" were not only more sensitive to these broader classes than models trained on just basic-level labels, but exposure to just superordinate labels also allowed the model to learn within-class structure, distinguishing a harmonica from a flute, and a screwdriver from a hammer. This finding is consistent with previous work showing that training with coarser labels induces more semantically structured visual representations (Peterson et al., 2018). More surprisingly, models supervised using superordinate labels (vehicle, tool, etc.) were best at predicting human performance on a triplet odd-one-out task. CNNs trained with superordinate labels not only outperformed other models when the odd one out came from a different superordinate category (which is not surprising), but also when all three objects in a triplet came from different superordinate categories (e.g., when choosing between a banana, a bee, and a screwdriver). Our ongoing work into how different types of labels shape visual representations is exploring the effect of labels specific to different languages (e.g., English vs. Mandarin), and how these may translate to differential human and CNN classification performance.

ACKNOWLEDGMENTS

Details regarding research support will be added post-review.

7 SUPPLEMENTARY MATERIAL

7.1 LIST OF 30 CATEGORIES

Superordinate-level Category   Basic-level Category   WordNet ID
Mammal                         Orangutan              n02480495
                               Gazelle                n02423022
                               Lion                   n02129165
Insect                         Ant                    n02219486
                               Bee                    n02206856
                               Grasshopper            n02226429
Bird                           Hummingbird            n01833805
                               Goose                  n01855672
                               Vulture                n01616318
Vegetable                      Artichoke              n07718747
                               Cucumber               n07718472
                               Zucchini               n07716358
Fruit                          Orange                 n07747607
                               Lemon                  n07749582
                               Banana                 n07753592
Tool                           Hammer                 n03481172
                               Screwdriver            n04154565
                               Shovel                 n04208210
Vehicle                        Minivan                n03770679
                               Trolley                n04335435
                               Taxi                   n02930766
Musical Instrument             Drum                   n03249569
                               Flute                  n03372029
                               Harmonica              n03494278
Kitchen Appliance              Refrigerator           n04070727
                               Toaster                n04442312
                               Coffee pot             n03063689
Container                      Bucket                 n02909870
                               Mailbox                n03710193
                               Crate                  n03127925

7.2 CONV. AUTOENCODER PREDICTIONS

[Figure: example images generated by the Conv. Autoencoder.]

7.3 AVERAGE PRECISION AND AVERAGE RECALL SCORES FOR THE TRAINED MODELS

The scores were sample-wise averaged (i.e., averaged over samples) for the Basic + Superordinate CNN, and macro-averaged (i.e., averaged over categories) for the other models.

Model Name                       Learning Scheme   # classes   Dimension of Output   Average Precision   Average Recall
Basic labels                     One-step          30          30                    0.90                0.90
Superordinate labels             One-step          10          10                    0.94                0.94
Basic + Superordinate            One-step          40          40                    0.91                0.91
Basic then Superordinate         Two-step          10          10                    0.95                0.95
Superordinate then Basic         Two-step          30          30                    0.88                0.88
Basic FastText vectors           One-step          30          300                   0.47                0.50
Superordinate FastText vectors   One-step          10          300                   0.72                0.75

7.4 OTHER CLUSTERING QUALITY MEASURES

SC: Silhouette Coefficient; CH: Calinski-Harabasz Index; DB: Davies-Bouldin Index; BW: between-to-within class dispersion in cosine distance. The arrows indicate the direction in which a metric value indicates denser and better-separated clusterings. A scikit-learn sketch of these measures is given after the table.

Model                            By superordinate category         By basic category
                                 SC↑    CH↑      DB↓    BW↑        SC↑    CH↑     DB↓    BW↑
Conv. Autoencoder                -0.06  166.08   12.24  0.11       -0.09  70.19   15.19  0.15
Basic labels                     -0.01  427.43   6.45   0.64       -0.02  200.45  7.35   0.84
Superordinate labels             0.00   628.95   5.25   0.71       -0.02  226.09  11.04  0.80
Basic + Superordinate            -0.01  534.81   5.79   0.61       -0.02  231.97  7.62   0.78
Basic then Superordinate         0.00   580.74   5.61   0.76       -0.02  233.15  8.62   0.90
Superordinate then Basic         -0.01  525.59   5.53   0.75       -0.01  227.35  7.47   0.93
Basic FastText vectors           -0.01  1021.60  5.20   0.95       -0.04  423.39  8.75   1.14
Superordinate FastText vectors   -0.01  1324.88  5.24   1.11       -0.05  445.75  14.02  1.18
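The SC, CH, and DB measures above are available in scikit-learn; a minimal sketch, reusing the hypothetical `features` and `labels` arrays from earlier (whether the paper computed SC under the cosine or Euclidean metric is not specified, so the cosine choice here is an assumption):

```python
from sklearn.metrics import (silhouette_score,
                             calinski_harabasz_score,
                             davies_bouldin_score)

def clustering_quality(features, labels):
    # SC: higher is better; CH: higher is better; DB: lower is better.
    return {
        "SC": silhouette_score(features, labels, metric="cosine"),
        "CH": calinski_harabasz_score(features, labels),
        "DB": davies_bouldin_score(features, labels),
    }
```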
7.5 T-SNE PLOTS FROM OUR TRAINED MODELS

[Figure: t-SNE plots of bottleneck features for (a) Conv. Autoencoder, (b) Basic labels, (c) Superordinate labels, (d) Basic + Superordinate, (e) Basic then Superordinate, (f) Superordinate then Basic, (g) Basic FastText vectors, and (h) Superordinate FastText vectors.]
1. What is the focus of the paper regarding semantic information in classification problems?
2. What are the strengths and weaknesses of the proposed approach, particularly in its novelty and impact?
3. Do you have any concerns or suggestions regarding the related work and its description?
4. How can the authors improve the experiments to provide more compelling information?
5. Are there any specific areas where the writing and grammar need improvement?
Review
Review Summary: This paper demonstrates the importance of labels at various levels (no label, basic level, and superordinate level), as well as in combination, for determining the importance of semantic information in classification problems. The authors train an identical CNN architecture either as an autoencoder (no labels), with basic labels, with superordinate labels, with both basic and superordinate labels, with basic labels fine-tuned on one-hot encodings of superordinate labels, as well as with word vectors. Classification accuracy, t-SNE, cosine similarity matrices, and predictions on a human behavior task are used to evaluate the differences across label types. The authors find that superordinate labels are helpful and important for classification problems.

Major comments:
- Authors need to include more related work, describe the main related paper they mention (Peterson et al. 2018), and describe how their work fits in with previous work.
- While the idea here is novel and impactful, the experiments used to explain the importance of superordinate labels do not carry much compelling information and are not well described.
- Section 4.2 plots for visualization are mentioned to be in the appendix, but are not there.

Minor comments:
- Fig 2: larger superordinate group text would help.
- Lots of typos and grammar mistakes throughout:
  o Typo 'use VGG16' and then 'Vgg16' in the same paragraph at the bottom of page 4
  o Typo at the top of page 2: "Convolutional neural network(CNN)"
  o Appendix list: 'banna' typo under Fruit
  o Page 1 intro: 'for both behavioral and computer vision' doesn't really make sense
  o Page 3 top section: 'new one' should be 'new ones'
  o Bottom of page 3: 'room from improvement'
  o Last line of conclusion: 'classificacation'

Consensus: This is a very interesting and potentially impactful idea, but the experiments used to defend and explain the importance of superordinate labels are relatively weak. Significant work on the writing and experimental side should be completed, but because this is novel and important work for classification, with some serious revisions, I would suggest accepting this paper.
ICLR
Title
HexaConv
Abstract
The effectiveness of convolutional neural networks stems in large part from their ability to exploit the translation invariance that is inherent in many learning problems. Recently, it was shown that CNNs can exploit other sources of invariance, such as rotation invariance, by using group convolutions instead of planar convolutions. However, for reasons of performance and ease of implementation, it has been necessary to limit the group convolution to transformations that can be applied to the filters without interpolation. Thus, for images with square pixels, only integer translations, rotations by multiples of 90 degrees, and reflections are admissible. Whereas the square tiling provides a 4-fold rotational symmetry, a hexagonal tiling of the plane has a 6-fold rotational symmetry. In this paper we show how one can efficiently implement planar convolution and group convolution over hexagonal lattices, by re-using existing highly optimized convolution routines. We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget. Furthermore, we find that the increased degree of symmetry of the hexagonal grid increases the effectiveness of group convolutions, by allowing for more parameter sharing. We show that our method significantly outperforms conventional CNNs on the AID aerial scene classification dataset, even outperforming ImageNet pretrained models.

1 INTRODUCTION

For sensory perception tasks, neural networks have mostly replaced handcrafted features. Instead of defining features by hand using domain knowledge, it is now possible to learn them, resulting in improved accuracy and saving a considerable amount of work. However, successful generalization is still critically dependent on the inductive bias encoded in the network architecture, whether this bias is understood by the network architect or not. The canonical example of a successful network architecture is the Convolutional Neural Network (CNN, ConvNet). Through convolutional weight sharing, these networks exploit the fact that a given visual pattern may appear in different locations in the image with approximately equal likelihood. Furthermore, this translation symmetry is preserved throughout the network, because a translation of the input image leads to a translation of the feature maps at each layer: convolution is translation equivariant. Very often, the true label function (the mapping from image to label that we wish to learn) is invariant to more transformations than just translations. Rotations are an obvious example, but standard translational convolutions cannot exploit this symmetry, because they are not rotation equivariant. As it turns out, a convolution operation can be defined for almost any group of transformations — not just translations. By simply replacing convolutions with group convolutions (wherein filters are not just shifted but transformed by a larger group; see Figure 1), convolutional networks can be made equivariant to, and exploit, richer groups of symmetries (Cohen & Welling, 2016). Furthermore, this technique was shown to be more effective than data augmentation.
Although the general theory of such group equivariant convolutional networks (G-CNNs) is applicable to any reasonably well-behaved group of symmetries (including at least all finite, infinite discrete, and continuous compact groups), the group convolution is easiest to implement when all the transformations in the group of interest are also symmetries of the grid of pixels. For this reason, G-CNNs were initially implemented only for the discrete groups p4 and p4m, which include integer translations, rotations by multiples of 90 degrees, and, in the case of p4m, mirror reflections — the symmetries of a square lattice. The main hurdle that stands in the way of a practical implementation of group convolution for a continuous group, such as the roto-translation group SE(2), is the fact that it requires interpolation in order to rotate the filters. Although it is possible to use bilinear interpolation in a neural network (Jaderberg et al., 2015), it is somewhat more difficult to implement, computationally expensive, and, most importantly, may lead to numerical approximation errors that can accumulate with network depth. This has led us to consider the hexagonal grid, wherein it is possible to rotate a filter by any multiple of 60 degrees without interpolation. This allows us to define group convolutions for the groups p6 and p6m, which contain integer translations, rotations by multiples of 60 degrees, and, for p6m, mirroring. To our surprise, we found that even for translational convolution, a hexagonal pixelation appears to have significant advantages over a square pixelation. Specifically, hexagonal pixelation is more efficient for signals that are band-limited to a circular area in the Fourier plane (Petersen & Middleton, 1962), and hexagonal pixelation exhibits improved isotropic properties such as twelve-fold symmetry and six-connectivity, compared to the eight-fold symmetry and four-connectivity of square pixels (Mersereau, 1979; Condat & Van De Ville, 2007). Furthermore, we found that using small, approximately round hexagonal filters with 7 parameters works better than square 3×3 filters when the number of parameters is kept the same. As hypothesized, group convolution is also more effective on a hexagonal lattice, due to the increase in weight sharing afforded by the higher degree of rotational symmetry. Indeed, the general pattern we find is that the larger the group of symmetries being exploited, the better the accuracy: p6-convolution outperforms p4-convolution, which in turn outperforms ordinary translational convolution. In order to use hexagonal pixelations in convolutional networks, a number of challenges must be addressed. Firstly, images sampled on a square lattice need to be resampled on a hexagonal lattice. This is easily achieved using bilinear interpolation. Secondly, the hexagonal images must be stored in a way that is both memory efficient and allows for a fast implementation of hexagonal convolution. To this end, we review various indexing schemes for the hexagonal lattice, and show that for some of them, we can leverage highly optimized square convolution routines to perform the hexagonal convolution. Finally, we show how to efficiently implement the filter transformation step of the group convolution on a hexagonal lattice. We evaluate our method on the CIFAR-10 benchmark and on the Aerial Image Dataset (AID) (Xia et al., 2017).
Aerial images are one of the many image types where the label function is invariant to rotations: one expects that rotating an aerial image does not change its label. In situations where the number of examples is limited, data-efficient learning is important. Our experiments demonstrate that group convolutions systematically improve performance. The method outperforms the baseline model pretrained on ImageNet, as well as comparable architectures with the same number of parameters. Source code of G-HexaConvs is available on Github: https://github.com/ehoogeboom/hexaconv.

The remainder of this paper is organized as follows: In Section 2 we summarize the theory of group equivariant networks. Section 3 provides an overview of different coordinate systems on the hexagonal grid, Section 4 discusses the implementation details of the hexagonal G-convolutions, in Section 5 we introduce the experiments and present results, and Section 6 gives an overview of other related work, after which we discuss our findings and conclude.

2 GROUP EQUIVARIANT CNNS

In this section we review the theory of G-CNNs, as presented by Cohen & Welling (2016). To begin, recall that normal convolutions are translation equivariant¹. More formally, let $L_t$ denote the operator that translates a feature map $f: \mathbb{Z}^2 \rightarrow \mathbb{R}^K$ by $t \in \mathbb{Z}^2$, and let $\psi$ denote a filter. Translation equivariance is then expressed as:

$$[[L_t f] \star \psi](x) = [L_t [f \star \psi]](x). \quad (1)$$

In words: translation followed by convolution equals convolution followed by translation. If instead we apply a rotation $r$, we obtain:

$$[[L_r f] \star \psi](x) = L_r [f \star [L_{r^{-1}} \psi]](x). \quad (2)$$

That is, the convolution of a rotated image $L_r f$ by a filter $\psi$ equals the rotation of a convolved image $f$ by an inversely rotated filter $L_{r^{-1}} \psi$. There is no way to express $[L_r f] \star \psi$ in terms of $f \star \psi$, so convolution is not rotation equivariant. The convolution is computed by shifting a filter over an image. By changing the translation to a transformation from a larger group $G$, a G-convolution is obtained. Mathematically, the G-convolution for a group $G$ and input space $X$ (e.g. the square or hexagonal lattice) is defined as:

$$[f \star_G \psi](g) = \sum_{h \in X} \sum_k f_k(h)\, \psi_k(g^{-1} h), \quad (3)$$

where $k$ denotes the input channel, $f_k$ and $\psi_k$ are signals defined on $X$, and $g$ is a transformation in $G$. The standard (translational) convolution operation is a special case of the G-convolution for $X = G = \mathbb{Z}^2$, the translation group. In a typical G-CNN, the input is an image, so we have $X = \mathbb{Z}^2$ for the first layer, while $G$ could be a larger group such as a group of rotations and translations. Because the feature map $f \star_G \psi$ is indexed by $g \in G$, in higher layers the feature maps and filters are functions on $G$, i.e. we have $X = G$. One can show that the G-convolution is equivariant to transformations $u \in G$:

$$[[L_u f] \star_G \psi](g) = [L_u [f \star_G \psi]](g). \quad (4)$$

Because all layers in a G-CNN are equivariant, the symmetry is propagated through the network and can be exploited by group convolutional weight sharing in each layer.

¹Technically, convolutions are exactly translation equivariant when feature maps are defined on infinite planes with zero values outside borders. In practice, CNNs are only locally translation equivariant.
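Equation 1 can be checked numerically: shifting a feature map and then convolving gives the same result as convolving and then shifting. A minimal sketch (a plain cross-correlation with periodic boundaries stands in for the convolution, which makes the equivariance exact for integer shifts; this is an illustration, not the paper's implementation):

```python
import numpy as np
from scipy.ndimage import correlate

rng = np.random.default_rng(0)
f = rng.normal(size=(16, 16))    # feature map
psi = rng.normal(size=(3, 3))    # filter

def conv(x):
    # mode="wrap" gives periodic boundaries, so equivariance holds exactly
    return correlate(x, psi, mode="wrap")

def translate(x, t):
    return np.roll(x, shift=t, axis=(0, 1))

t = (2, 5)
lhs = conv(translate(f, t))      # [L_t f] * psi
rhs = translate(conv(f), t)      # L_t [f * psi]
assert np.allclose(lhs, rhs)     # Eq. (1)
```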
2.1 IMPLEMENTATION OF GROUP CONVOLUTIONS

Equation 3 gives a mathematical definition of group convolution, but not an algorithm. To obtain a practical implementation, we use the fact that the groups of interest can be split² into a group of translations ($\mathbb{Z}^2$) and a group $H$ of transformations that leaves the origin fixed (e.g. rotations and/or reflections about the origin³). The G-Conv can then be implemented as a two-step computation: filter transformation ($H$) and planar convolution ($\mathbb{Z}^2$). G-CNNs generally use two kinds of group convolutions: one in which the input is a planar image and the output is a feature map on the group $G$ (for the first layer), and one in which the input and output are both feature maps on $G$. We can provide a unified explanation of the filter transformation step by introducing $H_{in}$ and $H_{out}$. In the first-layer G-Conv, $H_{in} = \{e\}$ is the trivial group containing only the identity transformation, while $H_{out} = H$ is typically a group of discrete rotations (4 or 6). For the second-layer G-Conv, we have $H_{in} = H_{out} = H$. The input for the filter transformation step is a learnable filterbank $\Psi$ of shape $C \times K \cdot |H_{in}| \times S \times S$, where $C, K, S$ denote the number of output channels, input channels, and spatial length, respectively. The output is a filterbank of shape $C \cdot |H_{out}| \times K \cdot |H_{in}| \times S \times S$, obtained by applying each $h \in H_{out}$ to each of the $C$ filters. In practice, this is implemented as an indexing operation $\Psi[I]$ using a precomputed static index array $I$. The second step of the group convolution is a planar convolution of the input $f$ with the transformed filterbank $\Psi[I]$. In what follows, we will show how to compute a planar convolution on the hexagonal lattice (Section 3), and how to compute the indexing array $I$ used in the filter transformation step of G-HexaConv (Section 4).

²To be precise, the group $G$ is a semidirect product: $G = \mathbb{Z}^2 \rtimes H$.
³The group $G$, generated by compositions of translations and rotations around the origin, contains rotations around any center.

3 HEXAGONAL COORDINATE SYSTEMS

The hexagonal grid can be indexed using several coordinate systems (see Figure 2). These systems vary with respect to three important characteristics: memory efficiency, the possibility of reusing square convolution kernels for hexagonal convolution, and the ease of applying rotations and flips. As shown in Figure 3, some coordinate systems cannot be used to represent a rectangular image in a rectangular memory region. In order to store a rectangular image using such a coordinate system, extra memory is required for padding. Moreover, in some coordinate systems, it is not possible to use standard planar convolution routines to perform hexagonal convolutions. Specifically, in the Offset coordinate system, the shape of a hexagonal filter as represented in a rectangular memory array changes depending on whether it is centered on an even or odd row (see Figure 4). Because no coordinate system is ideal in every way, we will define four useful ones and discuss their merits. Figures 2, 3 and 4 should suffice to convey the big picture, so the reader may skip to Section 4 on a first reading.

3.1 AXIAL

Perhaps the most natural coordinate system for the hexagonal lattice is based on the lattice structure itself. The lattice contains all points in the plane that can be obtained as an integer linear combination of two basis vectors $e_1$ and $e_2$, which are separated by an angle of 60 degrees. The Axial coordinate system simply represents the pixel centered at $u e_1 + v e_2$ by coordinates $(u, v)$ (see Figure 2a). Both the square and hexagonal lattices are isomorphic to $\mathbb{Z}^2$. The planar convolution only relies on the additive structure of $\mathbb{Z}^2$, so it is possible to simply apply a convolution kernel developed for rectangular images to a hexagonal image stored in a rectangular buffer using axial coordinates. As shown in Figure 3a, a rectangular area in the hexagonal lattice corresponds to a parallelogram in memory. Thus the axial system requires additional space for padding in order to store an image, which is its main disadvantage. When representing an axial filter in memory, the corners of the array need to be zeroed out by a mask (see Figure 4a); a sketch of this masked-convolution idea is given below.
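A minimal sketch of the masked-convolution idea: an ordinary square convolution is applied to the axial buffer, with two opposing corners of each 3x3 filter zeroed so that the effective footprint is the 7-pixel hexagonal neighborhood. Which pair of corners is masked depends on the axial convention; the mask below assumes one of the two possibilities.

```python
import numpy as np
from scipy.ndimage import correlate

# Two opposing corners of the 3x3 window are not hexagonal neighbors
# of the center pixel in axial coordinates.
HEX_MASK = np.array([[0, 1, 1],
                     [1, 1, 1],
                     [1, 1, 0]], dtype=float)

def hex_conv(image_axial, filt):
    # image_axial: hexagonal image stored in a rectangular (axial) buffer
    # filt: 3x3 filter; masking keeps only the 7 hexagonal taps
    return correlate(image_axial, filt * HEX_MASK, mode="constant")
```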
3.2 CUBE

The cube coordinate system represents the 2D hexagonal grid inside a 3D cube (see Figure 5). Although representing grids in three-dimensional structures is very memory-inefficient, the cube system is useful because rotations and reflections can be expressed in a very simple way. Furthermore, the conversion between the axial and cube systems is straightforward: $x = v$, $y = -(u + v)$, $z = u$. Hence, we only use the Cube system to apply transformations to coordinates, and use other systems for storing images. A counter-clockwise rotation by 60 degrees can be performed by the following formula:

$$r \cdot (x, y, z) = (-z, -x, -y). \quad (5)$$

Similarly, a mirroring operation over the vertical axis through the origin is computed with:

$$m \cdot (x, y, z) = (x, z, y). \quad (6)$$

3.3 DOUBLE WIDTH

The double width system is based on two orthogonal axes. Stepping to the right by 1 unit in the hexagonal lattice, the u-coordinate is incremented by 2 (see Figure 2c). Furthermore, odd rows are offset by one unit in the u direction. Together, this leads to a checkerboard pattern (Figure 3b) that increases the image and filter size by a factor of two. The good thing about this scheme is that a hexagonal convolution can be implemented as a rectangular convolution applied to checkerboard arrays, with checkerboard filter masking. This works because the checkerboard sparsity pattern is preserved by the square convolution kernel: if the input and filter have this pattern, the output will too. As such, HexaConv is very easy to implement using the double width coordinate system. It is, however, very inefficient, so we recommend it only for use in preliminary experiments.

3.4 OFFSET

Like the double width system, the offset coordinate system uses two orthogonal axes. However, in the offset system, a one-unit horizontal step in the hexagonal lattice corresponds to a one-unit increment in the u-coordinate. Hence, rectangular images can be stored efficiently and without padding in the offset coordinate system (see Figure 3c). The downside of offset coordinates is that hexagonal convolutions cannot be expressed as a single 2D convolution (see Figures 4c and 4d), because the shape of the filters is different for even and odd rows. Given access to a convolution kernel that supports strides, it should be possible to implement hexagonal convolution in the offset system using two convolution calls, one for the even and one for the odd rows. Ideally, these two calls would write to the same output buffer (using a strided write), but unfortunately most convolution implementations do not support this feature. Hence, the result of the two convolution calls has to be copied to a single buffer using strided indexing. We note that a custom HexaConv kernel for the offset system would remove these difficulties. Were such a kernel developed, the offset system would be ideal because of its memory efficiency.
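Equations 5 and 6, combined with the axial-cube conversion above, are all that is needed to precompute the rotation index used in the filter transformation step of Section 4. A minimal sketch with our own function names (the paper's actual index-array construction is detailed in its Appendix A):

```python
def axial_to_cube(u, v):
    return (v, -(u + v), u)

def cube_to_axial(x, y, z):
    return (z, x)

def rotate60(x, y, z):
    # Eq. (5): counter-clockwise rotation by 60 degrees in cube coordinates
    return (-z, -x, -y)

def mirror(x, y, z):
    # Eq. (6): reflection over the vertical axis through the origin
    return (x, z, y)

def rotated_axial(u, v, k):
    # Axial coordinates of (u, v) after k rotations of 60 degrees.
    c = axial_to_cube(u, v)
    for _ in range(k % 6):
        c = rotate60(*c)
    return cube_to_axial(*c)
```

Mapping every filter tap through rotated_axial for each of the six rotations yields, per rotation, the source position of each weight, which is precisely the kind of static index array $I$ described in Section 2.1.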
4 IMPLEMENTATION

The group convolution can be factored into a filter transformation step and a hexagonal convolution step, as was mentioned in Section 2 and visualized in Figure 1. In our implementation, we chose to use the Axial coordinate system for feature maps and filters, so that the hexagonal convolution can be performed by a rectangular convolution kernel. In this section, we explain the filter transformation and masking in general; for more details see Appendix A. The general procedure described in Section 2.1 also applies to hexagonal group convolution (p6 and p6m). In summary, a filter transformation is applied to a learnable filter bank $\Psi$, resulting in a filter bank $\Psi'$ that can be used to compute the group convolution using (multiple) planar convolutions (see the top of Figure 1 for a visual portrayal of this transformation). In practice this transformation is implemented as an indexing operation $\Psi[I]$, where $I$ is a constant indexing array based on the structure of the desired group. Hence, after computing $\Psi[I]$, the convolution routines can be applied as usual. Although convolution with filters and feature maps laid out according to the Axial coordinate system is equivalent to convolution on the hexagonal lattice, both the filters and the feature maps contain padding (see Figures 3 and 4), since the planar convolution routines operate on rectangular arrays. As a consequence, non-zero output may be written to the padding area of both the feature maps and the filters. To address this, we explicitly perform a masking operation on feature maps and filters after every convolution operation and parameter update, to ensure that values in the padding area stay strictly equal to zero.

5 EXPERIMENTS

We perform experiments on the CIFAR-10 and AID datasets. Specifically, we compare the accuracy of our G-HexaConvs (p6- and p6m-networks) to that of existing G-networks (p4- and p4m-networks) (Cohen & Welling, 2016) and standard networks (Z2). Moreover, the effect of utilizing a hexagonal lattice is evaluated in experiments using the HexaConv network (hexagonal lattice without group convolutions, or Z2 Axial). Our experiments show that the use of a hexagonal lattice improves upon the conventional square lattice, both when using normal and p6-convolutions.

5.1 CIFAR-10

CIFAR-10 is a standard benchmark that consists of 60000 images of 32 by 32 pixels and 10 different target classes. We compare the performance of several ResNet-based (He et al., 2016) G-networks. Specifically, every G-ResNet consists of 3 stages, with 4 blocks per stage. Each block has two 3 by 3 convolutions and a skip connection from the input to the output. Spatial pooling is applied to the penultimate layer, which leaves the orientation channels intact and allows the network to maintain orientation equivariance. Moreover, the number of feature maps is scaled such that all G-networks are made up of a similar number of parameters. For hexagonal networks, the input images are resampled to the hexagonal lattice using bilinear interpolation (see Figure 6). Since the classification performance of a HexaConv network does not degrade, the quality of these interpolated images suffices. The CIFAR-10 results are presented in Table 1, obtained by taking the average of 10 experiments with different random weight initializations.

Table 1: CIFAR-10 performance comparison
Model       Error         Params
Z2          11.50 ±0.30   338000
Z2 Axial    11.25 ±0.24   337000
p4          10.08 ±0.23   337000
p6 Axial     9.98 ±0.32   336000
p4m          8.96 ±0.46   337000
p6m Axial    8.64 ±0.34   337000

We note that the HexaConv CNN (Z2 Axial) outperforms the standard CNN (Z2). Moreover, we find that p4- and p4m-ResNets are outperformed by our p6- and p6m-ResNets, respectively.
We also find a general pattern: using groups with an increasing number of symmetries consistently improves performance.

5.2 AID

The Aerial Image Dataset (AID) (Xia et al., 2017) is a dataset consisting of 10000 satellite images of 400 by 400 pixels and 30 target classes. The labels of aerial images are typically invariant to rotations, i.e., one does not expect labels to change when an aerial image is rotated. For each experiment, we split the dataset into random 80% train / 20% test sets. This contrasts with the 50%/50% train/test split used by Xia et al. (2017). Since the test sets are smaller, experiments are performed multiple times with randomly selected splits to obtain better estimates. All models are evaluated on identical randomly selected splits, to ensure that the comparison is fair. As a baseline, we take the best performing model from Xia et al. (2017), which uses VGG16 as a feature extractor and an SVM for classification. Because the baseline was trained on 50%/50% splits, we re-evaluate the model trained on the same 80%/20% splits. We again test several G-networks with ResNet architectures. The first convolution layer has stride two, and the ResNets take resized 64 by 64 images as input. The networks are widened to account for the increase in the number of classes. Similar to the CIFAR-10 experiments, the networks still consist of 3 stages, but have two blocks per stage. In contrast with the CIFAR-10 networks, pooling is applied to the spatial dimensions and the orientation channels in the penultimate layer. This allows the network to become orientation invariant. The results for the AID experiment are presented in Table 2. The error decreases from 19.3% for a Z2-ResNet to an impressive 8.6% for a p6-ResNet. We found that adding mirror symmetry did not meaningfully affect performance (p4m 10.2% and p6m 9.3% error). This suggests that mirror symmetry is not an effective inductive bias for AID. It is worth noting that the baseline model was pretrained on ImageNet, while all our models were trained from random initialization. These results demonstrate that group convolutions can improve performance drastically, especially when the symmetries in the dataset match the selected group.

6 RELATED WORK

The effect of changing the sampling lattice for image processing from square to hexagonal has been studied over many decades. The isoperimetry of a hexagon and the uniform connectivity of the lattice make the hexagonal lattice a more natural way to tile the plane (Middleton & Sivaswamy, 2006). In certain applications, hexagonal lattices have been shown to be superior to square lattices (Petersen & Middleton, 1962; Hartman & Tanimoto, 1984). Transformation-equivariant representations have received significant research interest over the years. Methods for invariant representations in hand-crafted features include pose normalization (Lowe, 1999; Dalal & Triggs, 2005) and projections from the plane to the sphere (Kondor, 2007). Although approximate transformational invariance can be achieved through data augmentation (Van Dyk & Meng, 2001), in general much more complex neural networks are required to learn the invariances that are known to the designer a priori. As such, in recent years, various approaches for obtaining CNNs that are equivariant or invariant with respect to specific transformations of the data were introduced.
A number of recent works propose to rotate either the filters or the feature maps, followed by channel permutations, to obtain equivariant (or invariant) CNNs (Cohen & Welling, 2016; Dieleman et al., 2016; Zhou et al., 2017; Li et al., 2017). Cohen & Welling (2017) describe a general framework of equivariant networks with respect to discrete and continuous groups, based on steerable filters, that includes group convolutions as a special case. Harmonic Networks (Worrall et al., 2016) apply the theory of Steerable CNNs to obtain a CNN that is approximately equivariant with respect to continuous rotations. Deep Symmetry Networks (Gens & Domingos, 2014) are approximately equivariant CNNs that leverage sparse high-dimensional feature maps to handle high-dimensional symmetry groups. Marcos et al. (2016) obtain rotational equivariance by rotating filters, followed by a pooling operation that maintains both the angle of the maximum magnitude and the magnitude itself, resulting in a vector field feature map. Ravanbakhsh et al. (2017) study equivariance in neural networks through symmetries in the network architecture, and propose two parameter-sharing schemes to achieve equivariance with respect to discrete group actions. Instead of enforcing invariance at the architecture level, Spatial Transformer Networks (Jaderberg et al., 2015) explicitly spatially transform feature maps dependent on the feature map itself, resulting in invariant models. Similarly, Polar Transformer Networks (Esteves et al., 2017) transform the feature maps to a log-polar representation conditionally on the feature maps, such that subsequent convolutions correspond to group (SIM(2)) convolutions. Henriques & Vedaldi (2016) obtain an invariant CNN with respect to spatial transformations by warping the input and filters by a predefined warp. Due to the dependence on global transformations of the input, these methods are limited to global symmetries of the data.

7 CONCLUSION

We have introduced G-HexaConv, an extension of group convolutions to hexagonal pixelations. Hexagonal grids allow 6-fold rotations without the need for interpolation. We review different coordinate systems for the hexagonal grid, and provide a description of how to implement hexagonal (group) convolutions. To demonstrate the effectiveness of our method, we test on an aerial scene dataset where the true label function is expected to be invariant to rotation transformations. The results reveal that the reduced anisotropy of hexagonal filters, compared to square filters, improves performance. Furthermore, hexagonal group convolutions can utilize symmetry equivariance and invariance, which allows them to outperform other methods considerably.
1. What are the strengths and weaknesses of the paper's approach to efficient implementation of G-convolutions for 6-fold rotations?
2. How does the method fare in terms of rotation equivariance, especially when applied to datasets like CIFAR-10 and AID?
3. Are there any limitations to the method's ability to extend to other groups beyond 2D rotations?
4. Does the paper adequately address potential concerns regarding the limitation of the proposed method to rotations?
5. Is there room for improvement in evaluating the effectiveness and efficiency of the proposed approach?
Review
Review
The authors took my comments nicely into account in their revision, and their answers are convincing. I increase my rating from 5 to 7. The authors could also integrate their discussion about their results on CIFAR in the paper; I think it would help readers understand better the advantage of the contribution.

----

This paper is based on the theory of group equivariant CNNs (G-CNNs), proposed by Cohen and Welling ICML'16. Regular convolutions are translation-equivariant, meaning that if an image is translated, its convolution by any filter is also translated. They are, however, not rotation-equivariant, for example. G-CNNs introduce G-convolutions, which are equivariant to a given transformation group G. This paper proposes an efficient implementation of G-convolutions for 6-fold rotations (rotations by multiples of 60 degrees), using a hexagonal lattice. The approach is evaluated on CIFAR-10 and AID, a dataset for aerial scene classification. On AID, the approach outperforms G-convolutions implemented on a square lattice (which allow only 4-fold rotations) by a short margin. On CIFAR-10, the difference does not seem significant (according to Tables 1 and 2). I guess this can be explained by the fact that rotation equivariance makes sense for aerial images, where the scene is mostly fronto-parallel, but less for CIFAR (especially in the upper layers), which exhibits 3D objects.

I like the general approach of explicitly putting desired equivariance into convolutional networks. Using a hexagonal lattice is elegant, even if it is not new in computer vision (as written in the paper). However, as the transformation group is limited to rotations, this is interesting in practice mostly for fronto-parallel scenes, as the experiments seem to show. It is not clear how the method can be extended to other groups than 2D rotations. Moreover, I feel like the paper sometimes tries to mask the fact that the proposed method is limited to rotations. It is admittedly clearly stated in the abstract and introduction, but much less in the rest of the paper.

The second paragraph of Section 5.1 is difficult to keep in the paper. It says that "From a qualitative inspection of these hexagonal interpolations we conclude that no information is lost during the sampling procedure." "No information is lost" is a strong statement from a qualitative inspection, especially of a hexagonal image. This statement should probably be removed. One way to evaluate the information lost could be to iterate interpolation between hexagonal and square lattices to see if the image starts degrading at some point.
Title HexaConv Abstract The effectiveness of convolutional neural networks stems in large part from their ability to exploit the translation invariance that is inherent in many learning problems. Recently, it was shown that CNNs can exploit other sources of invariance, such as rotation invariance, by using group convolutions instead of planar convolutions. However, for reasons of performance and ease of implementation, it has been necessary to limit the group convolution to transformations that can be applied to the filters without interpolation. Thus, for images with square pixels, only integer translations, rotations by multiples of 90 degrees, and reflections are admissible. Whereas the square tiling provides a 4-fold rotational symmetry, a hexagonal tiling of the plane has a 6-fold rotational symmetry. In this paper we show how one can efficiently implement planar convolution and group convolution over hexagonal lattices, by re-using existing highly optimized convolution routines. We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget. Furthermore, we find that the increased degree of symmetry of the hexagonal grid increases the effectiveness of group convolutions, by allowing for more parameter sharing. We show that our method significantly outperforms conventional CNNs on the AID aerial scene classification dataset, even outperforming ImageNet pretrained models. 1 INTRODUCTION For sensory perception tasks, neural networks have mostly replaced handcrafted features. Instead of defining features by hand using domain knowledge, it is now possible to learn them, resulting in improved accuracy and saving a considerable amount of work. However, successful generalization is still critically dependent on the inductive bias encoded in the network architecture, whether this bias is understood by the network architect or not. The canonical example of a successful network architecture is the Convolutional Neural Network (CNN, ConvNet). Through convolutional weight sharing, these networks exploit the fact that a given visual pattern may appear in different locations in the image with approximately equal likelihood. Furthermore, this translation symmetry is preserved throughout the network, because a translation of the input image leads to a translation of the feature maps at each layer: convolution is translation equivariant. Very often, the true label function (the mapping from image to label that we wish to learn) is invariant to more transformations than just translations. Rotations are an obvious example, but standard translational convolutions cannot exploit this symmetry, because they are not rotation equivariant. As it turns out, a convolution operation can be defined for almost any group of transformation — not just translations. By simply replacing convolutions with group convolutions (wherein filters are not ∗Equal contribution just shifted but transformed by a larger group; see Figure 1), convolutional networks can be made equivariant to and exploit richer groups of symmetries (Cohen & Welling, 2016). Furthermore, this technique was shown to be more effective than data augmentation. 
Although the general theory of such group equivariant convolutional networks (G-CNNs) is applicable to any reasonably well-behaved group of symmetries (including at least all finite, infinite discrete, and continuous compact groups), the group convolution is easiest to implement when all the transformations in the group of interest are also symmetries of the grid of pixels. For this reason, G-CNNs were initially implemented only for the discrete groups p4 and p4m which include integer translations, rotations by multiples of 90 degrees, and, in the case of p4m, mirror reflections — the symmetries of a square lattice. The main hurdle that stands in the way of a practical implementation of group convolution for a continuous group, such as the roto-translation group SE(2), is the fact that it requires interpolation in order to rotate the filters. Although it is possible to use bilinear interpolation in a neural network (Jaderberg et al., 2015), it is somewhat more difficult to implement, computationally expensive, and most importantly, may lead to numerical approximation errors that can accumulate with network depth. This has led us to consider the hexagonal grid, wherein it is possible to rotate a filter by any multiple of 60 degrees, without interpolation. This allows use to define group convolutions for the groups p6 and p6m, which contain integer translations, rotations with multiples of 60 degrees, and mirroring for p6m. To our surprise, we found that even for translational convolution, a hexagonal pixelation appears to have significant advantages over a square pixelation. Specifically, hexagonal pixelation is more efficient for signals that are band limited to a circular area in the Fourier plane (Petersen & Middleton, 1962), and hexagonal pixelation exhibits improved isotropic properties such as twelve-fold symmetry and six-connectivity, compared to eight-fold symmetry and four-connectivity of square pixels (Mersereau, 1979; Condat & Van De Ville, 2007). Furthermore, we found that using small, approximately round hexagonal filters with 7 parameters works better than square 3× 3 filters when the number of parameters is kept the same. As hypothesized, group convolution is also more effective on a hexagonal lattice, due to the increase in weight sharing afforded by the higher degree of rotational symmetry. Indeed, the general pattern we find is that the larger the group of symmetries being exploited, the better the accuracy: p6-convolution outperforms p4-convolution, which in turn outperforms ordinary translational convolution. In order to use hexagonal pixelations in convolutional networks, a number of challenges must be addressed. Firstly, images sampled on a square lattice need to be resampled on a hexagonal lattice. This is easily achieved using bilinear interpolation. Secondly, the hexagonal images must be stored in a way that is both memory efficient and allows for a fast implementation of hexagonal convolution. To this end, we review various indexing schemes for the hexagonal lattice, and show that for some of them, we can leverage highly optimized square convolution routines to perform the hexagonal convolution. Finally, we show how to efficiently implement the filter transformation step of the group convolution on a hexagonal lattice. We evaluate our method on the CIFAR-10 benchmark and on the Aerial Image Dataset (AID) (Xia et al., 2017). 
Aerial images are one of the many image types where the label function is invariant to rotations: One expects that rotating an aerial image does not change the label. In situations where the number of examples is limited, data efficient learning is important. Our experiments demonstrate that group convolutions systematically improve performance. The method outperforms the baseline model pretrained on ImageNet, as well as comparable architectures with the same number of parameters. Source code of G-HexaConvs is available on Github: https://github.com/ehoogeboom/hexaconv. The remainder of this paper is organized as follows: In Section 2 we summarize the theory of group equivariant networks. Section 3 provides an overview of different coordinate systems on the hexagonal grid, Section 4 discusses the implementation details of the hexagonal G-convolutions, in Section 5 we introduce the experiments and present results, Section 6 gives an overview of other related work after which we discuss our findings and conclude. 2 GROUP EQUIVARIANT CNNS In this section we review the theory of G-CNNs, as presented by Cohen & Welling (2016). To begin, recall that normal convolutions are translation equivariant1. More formally, let Lt denote the operator that translates a feature map f : Z2 → RK by t ∈ Z2, and let ψ denote a filter. Translation equivariance is then expressed as: [[Ltf ] ? ψ](x) = [Lt[f ? ψ]](x). (1) In words: translation followed by convolution equals convolution followed by a translation. If instead we apply a rotation r, we obtain: [[Lrf ] ? ψ](x) = Lr[f ? [Lr−1ψ]](x). (2) That is, the convolution of a rotated image Lrf by a filter ψ equals the rotation of a convolved image f by a inversely rotated filter Lr−1ψ. There is no way to express [Lrf ] ? ψ in terms of f ? ψ, so convolution is not rotation equivariant. The convolution is computed by shifting a filter over an image. By changing the translation to a transformation from a larger group G, a G-convolution is obtained. Mathematically, the GConvolution for a group G and input space X (e.g. the square or hexagonal lattice) is defined as: [f ?g ψ](g) = ∑ h∈X ∑ k fk(h)ψk(g −1h), (3) where k denotes the input channel, fk and ψk are signals defined on X , and g is a transformation in G. The standard (translational) convolution operation is a special case of the G-convolution for X = G = Z2, the translation group. In a typical G-CNN, the input is an image, so we have X = Z2 for the first layer, while G could be a larger group such as a group of rotations and translations. Because the feature map f ?g ψ is indexed by g ∈ G, in higher layers the feature maps and filters are functions on G, i.e. we have X = G. One can show that the G-convolution is equivariant to transformations u ∈ G: [[Luf ] ?g ψ](g) = [Lu[f ?g ψ]](g). (4) Because all layers in a G-CNN are equivariant, the symmetry is propagated through the network and can be exploited by group convolutional weight sharing in each layer. 1Technically, convolutions are exactly translation equivariant when feature maps are defined on infinite planes with zero values outside borders. In practice, CNNs are only locally translation equivariant. 2.1 IMPLEMENTATION OF GROUP CONVOLUTIONS Equation 3 gives a mathematical definition of group convolution, but not an algorithm. To obtain a practical implementation, we use the fact that the groups of interest can be split2 into a group of translations (Z2), and a group H of transformations that leaves the origin fixed (e.g. 
rotations and/or reflections about the origin³). The G-convolution can then be implemented as a two-step computation: a filter transformation ($H$) and a planar convolution ($\mathbb{Z}^2$).

G-CNNs generally use two kinds of group convolutions: one in which the input is a planar image and the output is a feature map on the group $G$ (for the first layer), and one in which the input and output are both feature maps on $G$. We can provide a unified explanation of the filter transformation step by introducing $H_{in}$ and $H_{out}$. In the first-layer G-conv, $H_{in} = \{e\}$ is the trivial group containing only the identity transformation, while $H_{out} = H$ is typically a group of discrete rotations (4 or 6). For the second-layer G-conv, we have $H_{in} = H_{out} = H$.

The input for the filter transformation step is a learnable filterbank $\Psi$ of shape $C \times K \cdot |H_{in}| \times S \times S$, where $C$, $K$, $S$ denote the number of output channels, input channels, and spatial length, respectively. The output is a filterbank of shape $C \cdot |H_{out}| \times K \cdot |H_{in}| \times S \times S$, obtained by applying each $h \in H_{out}$ to each of the $C$ filters. In practice, this is implemented as an indexing operation $\Psi[I]$ using a precomputed static index array $I$. The second step of the group convolution is a planar convolution of the input $f$ with the transformed filterbank $\Psi[I]$. In what follows, we will show how to compute a planar convolution on the hexagonal lattice (Section 3), and how to compute the indexing array $I$ used in the filter transformation step of G-HexaConv (Section 4).

3 HEXAGONAL COORDINATE SYSTEMS

The hexagonal grid can be indexed using several coordinate systems (see Figure 2). These systems vary with respect to three important characteristics: memory efficiency, the possibility of reusing square convolution kernels for hexagonal convolution, and the ease of applying rotations and flips. As shown in Figure 3, some coordinate systems cannot be used to represent a rectangular image in a rectangular memory region; in order to store a rectangular image using such a coordinate system, extra memory is required for padding. Moreover, in some coordinate systems, it is not possible to use standard planar convolution routines to perform hexagonal convolutions. Specifically, in the Offset coordinate system, the shape of a hexagonal filter as represented in a rectangular memory array changes depending on whether it is centered on an even or an odd row (see Figure 4). Because no coordinate system is ideal in every way, we will define four useful ones and discuss their merits. Figures 2, 3 and 4 should suffice to convey the big picture, so the reader may skip to Section 4 on a first reading.

²To be precise, the group $G$ is a semidirect product: $G = \mathbb{Z}^2 \rtimes H$.
³The group $G$, generated by compositions of translations and rotations around the origin, contains rotations around any center.

3.1 AXIAL

Perhaps the most natural coordinate system for the hexagonal lattice is based on the lattice structure itself. The lattice contains all points in the plane that can be obtained as an integer linear combination of two basis vectors $e_1$ and $e_2$, which are separated by an angle of 60 degrees. The Axial coordinate system simply represents the pixel centered at $u e_1 + v e_2$ by the coordinates $(u, v)$ (see Figure 2a). Both the square and the hexagonal lattice are isomorphic to $\mathbb{Z}^2$. The planar convolution relies only on the additive structure of $\mathbb{Z}^2$, and so it is possible to simply apply a convolution kernel developed for rectangular images to a hexagonal image stored in a rectangular buffer using axial coordinates.
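As a minimal sketch of this idea (assuming one input and one output channel, and one particular convention for which two corner taps fall outside the hexagonal neighborhood), a hexagonal convolution in axial coordinates can be run through an ordinary planar convolution routine by masking a 3×3 kernel down to its 7 hexagonal taps:

```python
import numpy as np
from scipy.signal import convolve2d

# 3x3 mask whose zeros remove the two square-lattice taps that are not
# hexagonal neighbors in axial coordinates. Which two corners get masked
# depends on the chosen basis; here we assume offsets (1, 1) and (-1, -1)
# are excluded.
HEX_MASK = np.array([[0, 1, 1],
                     [1, 1, 1],
                     [1, 1, 0]], dtype=np.float64)

def hex_conv2d(feature_map, weights):
    """Hexagonal convolution on an axially-stored feature map.

    feature_map: 2D array holding a hexagonal image in a rectangular buffer.
    weights:     3x3 array of learnable parameters; only the 7 unmasked
                 entries ever contribute.
    """
    kernel = weights * HEX_MASK           # enforce the hexagonal footprint
    return convolve2d(feature_map, kernel, mode='same')

f = np.random.rand(16, 16)                # hexagonal image, axial layout
w = np.random.randn(3, 3)
out = hex_conv2d(f, w)                    # same shape, hexagonal receptive field
```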
As shown in Figure 3a, a rectangular area in the hexagonal lattice corresponds to a parallelogram in memory. Thus the axial system requires additional space for padding in order to store an image, which is its main disadvantage. When representing an axial filter in memory, the corners of the array need to be zeroed out by a mask (see Figure 4a).

3.2 CUBE

The cube coordinate system represents the 2D hexagonal grid inside a 3D cube (see Figure 5). Although representing grids in three-dimensional structures is very memory-inefficient, the cube system is useful because rotations and reflections can be expressed in a very simple way. Furthermore, the conversion between the axial and cube systems is straightforward: $x = v$, $y = -(u + v)$, $z = u$. Hence, we only use the Cube system to apply transformations to coordinates, and use other systems for storing images. A counter-clockwise rotation by 60 degrees can be performed by the following formula:

$$r \cdot (x, y, z) = (-z, -x, -y). \quad (5)$$

Similarly, a mirroring operation over the vertical axis through the origin is computed with:

$$m \cdot (x, y, z) = (x, z, y). \quad (6)$$

3.3 DOUBLE WIDTH

The double width system is based on two orthogonal axes. Stepping to the right by one unit in the hexagonal lattice increments the u-coordinate by 2 (see Figure 2c). Furthermore, odd rows are offset by one unit in the u direction. Together, this leads to a checkerboard pattern (Figure 3b) that doubles the image and filter size. The advantage of this scheme is that a hexagonal convolution can be implemented as a rectangular convolution applied to checkerboard arrays, with checkerboard filter masking. This works because the checkerboard sparsity pattern is preserved by the square convolution kernel: if the input and the filter have this pattern, the output will too. As such, HexaConv is very easy to implement using the double width coordinate system. It is, however, very inefficient, so we recommend it only for use in preliminary experiments.

3.4 OFFSET

Like the double width system, the offset coordinate system uses two orthogonal axes. However, in the offset system, a one-unit horizontal step in the hexagonal lattice corresponds to a one-unit increment in the u-coordinate. Hence, rectangular images can be stored efficiently and without padding in the offset coordinate system (see Figure 3c). The downside of offset coordinates is that hexagonal convolutions cannot be expressed as a single 2D convolution (see Figures 4c and 4d), because the shape of the filters differs between even and odd rows. Given access to a convolution kernel that supports strides, it should be possible to implement hexagonal convolution in the offset system using two convolution calls, one for the even and one for the odd rows. Ideally, these two calls would write to the same output buffer (using a strided write), but unfortunately most convolution implementations do not support this feature. Hence, the results of the two convolution calls have to be copied to a single buffer using strided indexing. We note that a custom HexaConv kernel for the offset system would remove these difficulties. Were such a kernel developed, the offset system would be ideal because of its memory efficiency.

4 IMPLEMENTATION

The group convolution can be factored into a filter transformation step and a hexagonal convolution step, as was mentioned in Section 2 and visualized in Figure 1.
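The following sketch (an illustration under the axial and cube conventions above, not the released code) applies Eqs. 5 and 6 through the cube system to rotate and mirror axial coordinates, and then precomputes the index array $I$ for a first-layer p6 filter transformation; a full G-convolution in higher layers additionally permutes the input orientation channels, which we omit here.

```python
import numpy as np

def rotate_axial(u, v, k):
    """Rotate axial coordinate (u, v) counter-clockwise by k * 60 degrees."""
    x, y, z = v, -(u + v), u              # axial -> cube (Section 3.2)
    for _ in range(k % 6):
        x, y, z = -z, -x, -y              # Eq. (5)
    return z, x                           # cube -> axial: u = z, v = x

def mirror_axial(u, v):
    """Mirror (u, v) over the vertical axis through the origin."""
    x, y, z = v, -(u + v), u
    x, y, z = x, z, y                     # Eq. (6)
    return z, x

def precompute_index(n_rot=6, S=3):
    """Index array I for a first-layer p6 filter transformation.

    I[k, i, j] holds the source cell whose weight lands in target cell
    (i, j) after rotating the filter by k * 60 degrees, so the whole
    transformation is a single indexing operation on the filterbank.
    """
    c = S // 2
    I = np.zeros((n_rot, S, S, 2), dtype=np.int64)
    for k in range(n_rot):
        for i in range(S):
            for j in range(S):
                su, sv = rotate_axial(j - c, i - c, -k)  # inverse rotation
                if abs(su) > c or abs(sv) > c:           # masked corner cell;
                    su, sv = j - c, i - c                # its weight is zero
                I[k, i, j] = (sv + c, su + c)
    return I

def transform_filters(psi, I):
    """Expand a (C, K, S, S) filterbank into (C * 6, K, S, S) rotated copies."""
    rot = np.stack([psi[:, :, I[k, :, :, 0], I[k, :, :, 1]]
                    for k in range(I.shape[0])], axis=1)
    return rot.reshape(-1, *psi.shape[1:])

# Rotating six times is the identity on every hexagonal coordinate:
assert rotate_axial(2, -1, 6) == (2, -1)

psi = np.random.randn(4, 3, 3, 3)                       # C=4, K=3, 3x3 filters
psi_rot = transform_filters(psi, precompute_index())    # shape (24, 3, 3, 3)
```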
In our implementation, we chose to use the Axial coordinate system for feature maps and filters, so that the hexagonal convolution can be performed by a rectangular convolution kernel. In this section, we explain the filter transformation and masking in general; for more details see Appendix A.

The general procedure described in Section 2.1 also applies to hexagonal group convolution (p6 and p6m). In summary, a filter transformation is applied to a learnable filterbank $\Psi$, resulting in a filterbank $\Psi'$ that can be used to compute the group convolution using (multiple) planar convolutions (see the top of Figure 1 for a visual portrayal of this transformation). In practice, this transformation is implemented as an indexing operation $\Psi[I]$, where $I$ is a constant indexing array based on the structure of the desired group. Hence, after computing $\Psi[I]$, the convolution routines can be applied as usual.

Table 1: CIFAR-10 performance comparison

Model       Error         Params
Z2          11.50 ±0.30   338000
Z2 Axial    11.25 ±0.24   337000
p4          10.08 ±0.23   337000
p6 Axial     9.98 ±0.32   336000
p4m          8.96 ±0.46   337000
p6m Axial    8.64 ±0.34   337000

Although convolution with filters and feature maps laid out according to the Axial coordinate system is equivalent to convolution on the hexagonal lattice, both the filters and the feature maps contain padding (see Figures 3 and 4), since the planar convolution routines operate on rectangular arrays. As a consequence, non-zero output may be written to the padding area of both the feature maps and the filters. To address this, we explicitly perform a masking operation on feature maps and filters after every convolution operation and parameter update, to ensure that values in the padding area stay strictly equal to zero.

5 EXPERIMENTS

We perform experiments on the CIFAR-10 and AID datasets. Specifically, we compare the accuracy of our G-HexaConvs (p6- and p6m-networks) to that of existing G-networks (p4- and p4m-networks) (Cohen & Welling, 2016) and standard networks (Z2). Moreover, the effect of utilizing a hexagonal lattice is evaluated in experiments using the HexaConv network (hexagonal lattice without group convolutions, or Z2 Axial). Our experiments show that the use of a hexagonal lattice improves upon the conventional square lattice, both for normal and for p6-convolutions.

5.1 CIFAR-10

CIFAR-10 is a standard benchmark that consists of 60000 images of 32 by 32 pixels and 10 target classes. We compare the performance of several ResNet-based (He et al., 2016) G-networks. Specifically, every G-ResNet consists of 3 stages, with 4 blocks per stage. Each block has two 3 by 3 convolutions and a skip connection from the input to the output. Spatial pooling is applied to the penultimate layer, which leaves the orientation channels intact and allows the network to maintain orientation equivariance. Moreover, the number of feature maps is scaled such that all G-networks are made up of a similar number of parameters. For hexagonal networks, the input images are resampled to the hexagonal lattice using bilinear interpolation (see Figure 6). Since the classification performance of a HexaConv network does not degrade, the quality of these interpolated images suffices.

The CIFAR-10 results are presented in Table 1, obtained by taking the average of 10 experiments with different random weight initializations. We note that the HexaConv CNN (Z2 Axial) outperforms the standard CNN (Z2). Moreover, we find that p4- and p4m-ResNets are outperformed by our p6- and p6m-ResNets, respectively.
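The pooling choice described above can be made concrete with a small sketch. We assume an (N, C, |H|, height, width) feature-map layout, which is our convention for illustration rather than one stated in the paper: averaging only the spatial axes keeps the orientation channels and hence equivariance (as in the CIFAR-10 networks), whereas additionally averaging the orientation axis (as done for AID in the next subsection) yields orientation invariance.

```python
import numpy as np

def pool_equivariant(feats):
    """Average over spatial axes only: (N, C, |H|, h, w) -> (N, C, |H|).

    The |H| orientation channels survive, so a rotation of the input
    still acts predictably on the output (by permuting the orientation axis).
    """
    return feats.mean(axis=(3, 4))

def pool_invariant(feats):
    """Also average the orientation axis: (N, C, |H|, h, w) -> (N, C).

    A rotation of the input only permutes the orientation axis, so the
    mean over that axis is unchanged: the output is rotation invariant.
    """
    return feats.mean(axis=(2, 3, 4))

feats = np.random.rand(2, 8, 6, 16, 16)    # batch of p6 feature maps
print(pool_equivariant(feats).shape)        # (2, 8, 6)
print(pool_invariant(feats).shape)          # (2, 8)
```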
We also find a general pattern: using groups with an increasing number of symmetries consistently improves performance.

5.2 AID

The Aerial Image Dataset (AID) (Xia et al., 2017) consists of 10000 satellite images of 400 by 400 pixels and 30 target classes. The labels of aerial images are typically invariant to rotations, i.e., one does not expect the label to change when an aerial image is rotated. For each experiment, we split the dataset into random 80% train / 20% test sets. This contrasts with the 50%/50% train/test split used by Xia et al. (2017). Since the test sets are smaller, experiments are performed multiple times with randomly selected splits to obtain better estimates. All models are evaluated on identical randomly selected splits, to ensure that the comparison is fair. As a baseline, we take the best performing model from Xia et al. (2017), which uses VGG16 as a feature extractor and an SVM for classification. Because the baseline was trained on 50%/50% splits, we re-evaluate the model on the same 80%/20% splits.

We again test several G-networks with ResNet architectures. The first convolution layer has stride two, and the ResNets take resized 64 by 64 images as input. The networks are widened to account for the increase in the number of classes. As in the CIFAR-10 experiments, the networks consist of 3 stages, but here with two blocks per stage. In contrast with the CIFAR-10 networks, pooling is applied to both the spatial dimensions and the orientation channels in the penultimate layer. This allows the network to become orientation invariant.

The results of the AID experiment are presented in Table 2. The error decreases from 19.3% for a Z2-ResNet to an impressive 8.6% for a p6-ResNet. We found that adding mirror symmetry did not meaningfully affect performance (10.2% error for p4m and 9.3% for p6m), which suggests that mirror symmetry is not an effective inductive bias for AID. It is worth noting that the baseline model was pretrained on ImageNet, whereas all our models were trained from random initialization. These results demonstrate that group convolutions can improve performance drastically, especially when the symmetries in the dataset match the selected group.

6 RELATED WORK

The effect of changing the sampling lattice for image processing from square to hexagonal has been studied over many decades. The isoperimetry of the hexagon and the uniform connectivity of the lattice make the hexagonal lattice a more natural way to tile the plane (Middleton & Sivaswamy, 2006). In certain applications, hexagonal lattices have been shown to be superior to square lattices (Petersen & Middleton, 1962; Hartman & Tanimoto, 1984).

Transformational equivariant representations have received significant research interest over the years. Methods for invariant representations in hand-crafted features include pose normalization (Lowe, 1999; Dalal & Triggs, 2005) and projections from the plane to the sphere (Kondor, 2007). Although approximate transformational invariance can be achieved through data augmentation (Van Dyk & Meng, 2001), in general much more complex neural networks are required to learn invariances that are known to the designer a priori. As such, in recent years, various approaches for obtaining CNNs that are equivariant or invariant with respect to specific transformations of the data were introduced.
A number of recent works propose to rotate either the filters or the feature maps, followed by channel permutations, to obtain equivariant (or invariant) CNNs (Cohen & Welling, 2016; Dieleman et al., 2016; Zhou et al., 2017; Li et al., 2017). Cohen & Welling (2017) describe a general framework for equivariant networks with respect to discrete and continuous groups, based on steerable filters, that includes group convolutions as a special case. Harmonic Networks (Worrall et al., 2016) apply the theory of Steerable CNNs to obtain a CNN that is approximately equivariant with respect to continuous rotations. Deep Symmetry Networks (Gens & Domingos, 2014) are approximately equivariant CNNs that leverage sparse high-dimensional feature maps to handle high-dimensional symmetry groups. Marcos et al. (2016) obtain rotational equivariance by rotating filters, followed by a pooling operation that maintains both the angle of the maximum magnitude and the magnitude itself, resulting in a vector field feature map. Ravanbakhsh et al. (2017) study equivariance in neural networks through symmetries in the network architecture, and propose two parameter-sharing schemes to achieve equivariance with respect to discrete group actions.

Instead of enforcing invariance at the architecture level, Spatial Transformer Networks (Jaderberg et al., 2015) explicitly spatially transform the feature maps, conditioned on the feature maps themselves, resulting in invariant models. Similarly, Polar Transformer Networks (Esteves et al., 2017) transform the feature maps to a log-polar representation, conditioned on the feature maps, such that subsequent convolutions correspond to group (SIM(2)) convolutions. Henriques & Vedaldi (2016) obtain CNNs that are invariant to spatial transformations by warping the input and the filters by a predefined warp. Due to their dependence on global transformations of the input, these methods are limited to global symmetries of the data.

7 CONCLUSION

We have introduced G-HexaConv, an extension of group convolutions to hexagonal pixelations. Hexagonal grids allow 6-fold rotations without the need for interpolation. We review different coordinate systems for the hexagonal grid, and provide a description of how to implement hexagonal (group) convolutions. To demonstrate the effectiveness of our method, we test on an aerial scene dataset where the true label function is expected to be invariant to rotations. The results reveal that the reduced anisotropy of hexagonal filters, compared to square filters, improves performance. Furthermore, hexagonal group convolutions can exploit symmetry equivariance and invariance, which allows them to outperform other methods considerably.
1. What is the main contribution of the paper, and how does it extend previous work on planar and group convolutions?
2. How does the proposed framework, G-HexaConv, improve upon traditional CNNs in terms of invariance and performance?
3. What are some potential limitations or trade-offs associated with using hexagonal lattices instead of square lattices in CNNs?
4. Can you provide more details about the efficiency of the proposed method's implementation, particularly regarding memory complexity and computational time?
5. How do you envision the potential impact of your work on future research in transformational equivariant representations?
Review
The paper proposes G-HexaConv, a framework extending planar and group convolutions to hexagonal lattices. The original Group-CNNs (G-CNNs), implemented on square lattices, were shown to be invariant to translations and to rotations by multiples of 90 degrees. With the hexagonal lattices defined in this paper, this invariance is extended to rotations by multiples of 60 degrees. This yields small improvements in CIFAR-10 performance, but larger margins on an Aerial Image Dataset. Defining hexagonal pixel configurations in convolutional networks requires both resampling the input images (defined on square lattices) and reformulating image indexing. All these steps are very well explained in the paper, combining mathematical rigor with clarifications. All this makes me believe the paper is worth being accepted at the ICLR conference.

Some issues that would require further discussion/clarification:
- G-HexaConv's critical points are memory and computational complexity. The authors claim to have an efficient implementation, but the paper lacks a proper quantitative evaluation. A comparison of memory complexity and computational time between classic CNNs and G-HexaConv should be provided.
- I encourage the authors to open-source the code for reproducibility and for comparison with future transformational equivariant representations.
- Also, in Fig. 1, I would recommend clarifying that image 'f' corresponds to a 2D view of a hexagonal image pixelation. My first impression was of a rectangular pixelation seen from a perspective view.
ICLR
Title HexaConv

Abstract The effectiveness of convolutional neural networks stems in large part from their ability to exploit the translation invariance that is inherent in many learning problems. Recently, it was shown that CNNs can exploit other sources of invariance, such as rotation invariance, by using group convolutions instead of planar convolutions. However, for reasons of performance and ease of implementation, it has been necessary to limit the group convolution to transformations that can be applied to the filters without interpolation. Thus, for images with square pixels, only integer translations, rotations by multiples of 90 degrees, and reflections are admissible. Whereas the square tiling provides a 4-fold rotational symmetry, a hexagonal tiling of the plane has a 6-fold rotational symmetry. In this paper we show how one can efficiently implement planar convolution and group convolution over hexagonal lattices, by re-using existing highly optimized convolution routines. We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget. Furthermore, we find that the increased degree of symmetry of the hexagonal grid increases the effectiveness of group convolutions, by allowing for more parameter sharing. We show that our method significantly outperforms conventional CNNs on the AID aerial scene classification dataset, even outperforming ImageNet-pretrained models.

1 INTRODUCTION

For sensory perception tasks, neural networks have mostly replaced handcrafted features. Instead of defining features by hand using domain knowledge, it is now possible to learn them, resulting in improved accuracy and saving a considerable amount of work. However, successful generalization is still critically dependent on the inductive bias encoded in the network architecture, whether this bias is understood by the network architect or not.

The canonical example of a successful network architecture is the Convolutional Neural Network (CNN, ConvNet). Through convolutional weight sharing, these networks exploit the fact that a given visual pattern may appear in different locations in the image with approximately equal likelihood. Furthermore, this translation symmetry is preserved throughout the network, because a translation of the input image leads to a translation of the feature maps at each layer: convolution is translation equivariant.

Very often, the true label function (the mapping from image to label that we wish to learn) is invariant to more transformations than just translations. Rotations are an obvious example, but standard translational convolutions cannot exploit this symmetry, because they are not rotation equivariant. As it turns out, a convolution operation can be defined for almost any group of transformations — not just translations. By simply replacing convolutions with group convolutions (wherein filters are not just shifted but transformed by a larger group; see Figure 1), convolutional networks can be made equivariant to, and exploit, richer groups of symmetries (Cohen & Welling, 2016). Furthermore, this technique was shown to be more effective than data augmentation.
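As a quick numerical sanity check of the translation-equivariance claim above (a minimal sketch using cyclic boundary conditions, under which the identity holds exactly on a finite feature map):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(2)

f = rng.standard_normal((8, 8))          # feature map
psi = rng.standard_normal((3, 3))        # filter

def shift(a, t):
    return np.roll(a, t, axis=(0, 1))    # cyclic translation by t

# Translate-then-convolve equals convolve-then-translate:
lhs = convolve2d(shift(f, (2, 3)), psi, mode='same', boundary='wrap')
rhs = shift(convolve2d(f, psi, mode='same', boundary='wrap'), (2, 3))
assert np.allclose(lhs, rhs)
```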
1. What is the main contribution of the paper regarding convolutional neural networks?
2. How does the approach proposed in the paper improve upon previous work in terms of efficiency and accuracy?
3. What are some potential limitations or areas for improvement in the paper's methodology or results?
4. How might the ideas presented in the paper be applied to more complex problems or domains?
5. Are there any clarity or notation issues in the paper that could be improved?
Review
The paper presents an approach to efficiently implement planar and group convolutions over hexagonal lattices, to leverage the better accuracy of these operations due to their reduced anisotropy. The authors show that convolutional neural networks thus built lead to better performance (reduced inductive bias) for the same parameter budget.

G-CNNs were introduced by Cohen and Welling at ICML 2016. They proposed DNN layers that implement equivariance to symmetry groups, and showed that group equivariant networks can lead to more effective weight sharing and hence more efficient networks, as evinced by better performance on CIFAR10 & CIFAR10+ for the same parameter budget. This paper shows that G-equivariance implemented on hexagonal lattices can lead to even more efficient networks. The benefits of using hexagonal lattices over rectangular lattices are well known in signal processing as well as in computer vision. For example, see:
Golay M. Hexagonal parallel pattern transformation. IEEE Transactions on Computers 1969. 18(8): p. 733-740.
Staunton R. The design of hexagonal sampling structures for image digitization and their use with local operators. Image and Vision Computing 1989. 7(3): p. 162-166.
L. Middleton and J. Sivaswamy, Hexagonal Image Processing, Springer Verlag, London, 2005.

The originality of the paper lies in the practical and efficient implementation of G-Conv layers. Group-equivariant DNNs could lead to more robust, efficient and (arguably) better performing neural networks.

Pros
- A good paper that systematically pushes the state of the art towards the design of invariant, efficient and better performing DNNs with G-equivariant representations.
- It leverages existing theory from a variety of areas (signal & image processing, machine learning) to design better DNNs.
- The experimental evaluation suffices for a proof-of-concept validation of the presented ideas.

Cons
- The authors should relate the paper better to existing works in the signal processing and vision literature.
- The results are on simple benchmarks like CIFAR-10. It is likely, but not immediately apparent, that the benefits scale to more complex problems.
- Clarity could be improved in a few places:
  - Since * is used for the standard convolution operator, it might be useful to use *_g as the G-convolution operator.
  - Strictly speaking, for translation equivariance, the shift should be cyclic, etc.
  - Spelling mistakes: the authors should run a spellchecker.
ICLR
Title ON INJECTING NOISE DURING INFERENCE

Abstract We study activation noise in a generative energy-based modeling setting during training for the purpose of regularization. We prove that activation noise is a general form of dropout. Then, we analyze the role of activation noise at inference time and demonstrate that it amounts to sampling. Thanks to activation noise, we observe about 200% improvement in performance (classification accuracy). Later, we not only discover, but also prove, that the best performance is achieved when the activation noise follows the same distribution during both training and inference. To explicate this phenomenon, we provide theoretical results that illuminate the roles of activation noise during training and inference and their mutual influence on the performance. To further confirm our theoretical results, we conduct experiments for five datasets and seven distributions of activation noise.

1 INTRODUCTION

Whether it is for performing regularization (Moradi et al., 2020) to mitigate overfitting (Kukacka et al., 2017) or for ameliorating the saturation behavior of the activation functions, thereby aiding the optimization procedure (Gulcehre et al., 2016), injecting noise into the activation functions of neural networks has been shown to be effective (Xu et al., 2012). Such activation noise, denoted by z, is added to the output of each neuron of the network (Tian & Zhang, 2022) as follows:

$$t = s + z = f\Big(\sum_i w_i x_i + b\Big) + \alpha \bar{z} \quad (1)$$

where $w_i$, $x_i$, $b$, $s$, $f(\cdot)$, $\alpha$, $\bar{z}$, and $t$ stand for the $i$th element of the weights, the $i$th element of the input signal, the bias, the raw (un-noisy) activation signal, the activation function, the noise scalar, the normalized noise (divorced from its scalar $\alpha$) originating from any distribution, and the noisy output, respectively. Studying this setting with noisy units is of significance because it resembles how neurons of the brain learn and perform inference in the presence of noise (Wu et al., 2001).

In the literature, training with input/activation noise has been shown to be equivalent to loss regularization: a well-studied regularization scheme in which an extra penalty term is appended to the loss function. Also, injecting noise has been shown to keep the weights of the neural network small, which is reminiscent of other regularization practices that directly limit the range of the weights (Bishop, 1995). Furthermore, injecting noise into the input samples (or activation functions) is an instance of data augmentation (Goodfellow et al., 2016). Injecting noise practically expands the size of the training dataset, because random noise is added to the input/latent variables each time the training samples are exposed to the model, rendering them different every time they are fed to the model. Noisy samples can therefore be deemed new samples drawn from the domain in the vicinity of the known samples: they make the structure of the input space smooth, thereby mitigating the curse of dimensionality and the consequent patchiness/sparsity of datasets. This smoothing makes it easier for the neural network to learn the mapping function (Vincent et al., 2010).

In the existing works, however, the impact of activation noise has been neither fully understood during training nor broached at inference time, not to mention the lack of study on the relationship between activation noise at training and at inference, especially for generative energy-based modeling (EBM).
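A minimal sketch of Eq. 1 follows; the tanh nonlinearity and the Gaussian choice for $\bar{z}$ are assumptions for concreteness, since the paper considers several noise distributions and applies the same mechanism at both training and inference time.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_dense(x, w, b, alpha=0.1, noise=rng.standard_normal):
    """Eq. (1): t = f(w . x + b) + alpha * z_bar, with f = tanh here."""
    s = np.tanh(x @ w + b)          # raw activation signal s
    z = alpha * noise(s.shape)      # scaled activation noise alpha * z_bar
    return s + z                    # noisy output t

x = rng.standard_normal((4, 8))     # batch of 4 inputs
w = rng.standard_normal((8, 16))
b = np.zeros(16)
t = noisy_dense(x, w, b)            # noisy activations, shape (4, 16)
```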
In this paper, we study those issues: for the EBM setting, for the first time, we study the empirical and theoretical aspects of activation noise not only during training but also at inference, and we discuss how these two roles relate to each other. We prove that, during training, activation noise (Gulcehre et al., 2016) is a general form of dropout (Srivastava et al., 2014). This is interesting because dropout has been widely adopted as a regularization scheme. We then formulate and discuss the relationship between activation noise and two other key regularization schemes: loss regularization and data augmentation. We also prove that, during inference, adopting activation noise can be interpreted as sampling the neural network. Accordingly, with activation noise during inference, we estimate the energy of the EBM. Surprisingly, we discover that there is a very strong interrelation between the distribution of activation noise during training and inference: the performance is optimized when the two follow the same distribution. We also prove how to find the distribution of the noise during inference that minimizes the inference error, thereby improving the performance by as much as 200%. Overall, our main contributions in this paper are as follows:

• We prove that, during training, activation noise is a general form of dropout. Afterward, we establish the connections between activation noise and loss regularization/data augmentation. With activation noise during inference as well as training, we observe about 200% improvement in performance (classification accuracy), which is unprecedented. Also, we discover and prove that the performance is maximized when the noise in the activation functions follows the same distribution during both training and inference.

• To explain this phenomenon, we provide theoretical results that illuminate the two strikingly distinct roles of activation noise during training and inference. We then discuss their mutual influence on the performance. To examine our theoretical results, we provide extensive experiments for five datasets, many noise distributions, various values of the noise scalar α, and different numbers of samples.

2 RELATED WORKS

Our study touches upon multiple domains: (i) neuroscience, (ii) regularization in machine learning, (iii) generative energy-based modeling, and (iv) anomaly detection and one-class classification.

(i) Studying the impact of noise in artificial neural networks (ANNs) can aid neuroscience in understanding the brain's operation (Lindsay, 2020; Richards et al., 2019). From neuroscience, we know that neurons of the brain (as formulated by Eq. 1) never produce the same output twice even when the same stimuli are presented, because of their internal noisy biological processes (Ruda et al., 2020; Wu et al., 2001; Romo et al., 2003). Having a noisy population of neurons, if anything, seems like a disadvantage (Averbeck et al., 2006; Abbott & Dayan, 1999); how, then, does the brain thwart the inevitable and omnipresent noise (Dan et al., 1998)? We provide new results on top of current evidence that noise can indeed enhance both training (via regularization) and inference (via error minimization) (Zylberberg et al., 2016; Zohary et al., 1994).

(ii) Injecting noise into neural networks is known to be a regularization scheme. Regularization is broadly defined as any modification made to a learning algorithm that is intended to mitigate overfitting: reducing the generalization error but not the training error (Kukacka et al., 2017).
Regularization schemes often seek to reduce overfitting (i.e., the generalization error) by keeping the weights of the neural network small (Xu et al., 2012). Hence, the simplest and most common regularization is to append a penalty to the loss function that grows in proportion to the size of the weights of the model. However, regularization schemes are diverse (Moradi et al., 2020); in the following, we review the popular ones. Weight regularization (weight decay) (Gitman & Ginsburg, 2017) penalizes the model during training based on the magnitude of the weights (Van Laarhoven, 2017). This encourages the model to map the inputs to the outputs of the training dataset such that the weights of the model are kept small (Salimans & Kingma, 2016). Batch normalization regularizes the network by reducing the internal covariate shift: it scales the output of the layer by standardizing the activations of each input variable per mini-batch (Ioffe & Szegedy, 2015). Ensemble learning (Zhou, 2021) trains multiple models (with heterogeneous architectures) and averages their predictions (Breiman, 1996). Activity regularization (Kilinc & Uysal, 2018b) penalizes the model during training based on the magnitude of the activations (Deng et al., 2019; Kilinc & Uysal, 2018a). Weight constraints limit the magnitude of the weights to a given range (Srebro & Shraibman, 2005). Dropout (Srivastava et al., 2014) probabilistically removes inputs during training; dropout relies on the rationale of ensemble learning, which trains multiple models. However, training and maintaining multiple models in parallel inflicts heavy computational/memory expenses. Alternatively, dropout proposes that a single model can be leveraged to simulate training an exponential number of different network architectures concurrently, by randomly dropping out nodes during training (Goodfellow et al., 2016). Early stopping (Yao et al., 2007) monitors the model's performance on a validation set and stops training when the performance starts to degrade (Goodfellow et al., 2016). Data augmentation, arguably the best regularization scheme, creates fake data and augments the training set (Hernandez-Garcia & Konig, 2018). Label smoothing (Lukasik et al., 2020) is commonly used in training deep learning (DL) models, where one-hot training labels are mixed with uniform label vectors (Meister et al., 2020); smoothing (Xu et al., 2020) has been shown to improve both predictive performance and model calibration (Li et al., 2020b; Yuan et al., 2020). Noise schemes inject (usually Gaussian) noise into various components of machine learning (ML) systems: activations, weights, gradients, and outputs (targets/labels) (Poole et al., 2014). In that, noise schemes provide a more generic and therefore more widely applicable approach to regularization, one that is invariant to the architectures, losses, and activations of ML systems, and even to the type of problem at hand (Holmstrom & Koistinen, 1992). As such, noise has been shown to be effective for the generalization as well as the robustness of a variety of ML systems (Neelakantan et al., 2015).

(iii) Our simulation setting in this paper follows that of generative EBM (we briefly say EBM henceforth) (LeCun et al., 2006). EBM (Nijkamp et al., 2020) is a class of maximum likelihood models that map each input to an un-normalized scalar value named the energy.
EBMs are powerful models that have been applied to many different domains, such as structured prediction (Belanger & McCallum, 2016), machine translation (Tu et al., 2020), text generation (Deng et al., 2020), reinforcement learning (Haarnoja et al., 2017), image generation (Xie et al., 2016), memory modeling (Bartunov et al., 2019), classification (Grathwohl et al., 2019), continual learning (Li et al., 2020a), and biologically plausible training (Scellier & Bengio, 2017).

(iv) Our EBM setting leverages separate autoencoders (Chen et al., 2018) for classification; in that, it resembles anomaly detection scenarios (Zhou & Paffenroth, 2017; An & Cho, 2015) and also one-class classification (Ruff et al., 2018; Liznerski et al., 2020; Perera & Patel, 2019; Sohn et al., 2020).

3 EBM AND ACTIVATION NOISE DURING TRAINING AND INFERENCE

An EBM determines the likelihood of a data point x ∈ X ⊆ R^D using the Boltzmann distribution:

p_\theta(x) = \frac{\exp(-E_\theta(x))}{\Omega(\theta)}, \qquad \Omega(\theta) = \int_{x \in X} \exp(-E_\theta(x))\, dx \quad (2)

where E_\theta(x) : \mathbb{R}^D \to \mathbb{R}, known as the energy function, is a neural network parameterized by θ that maps each data point x to a scalar energy value, and Ω(θ) is the partition function. To solve the classification task in the EBM utilizing activation noise, we adjust the general formulation of Eq. 2 as follows: given a class label y in a discrete set Y and the activation noise z during training, for each input x we use the Boltzmann distribution to define the conditional likelihood

p_\theta(x \mid y, z) = \frac{\exp(-E_\theta(x \mid y, z))}{\Omega(\theta \mid y, z)}, \qquad \Omega(\theta \mid y, z) = \int_{x \in X} \exp(-E_\theta(x \mid y, z))\, dx \quad (3)

where E_\theta(x \mid y, z) : (\mathbb{R}^D, \mathbb{N}, \mathbb{R}^F) \to \mathbb{R} is the energy function that maps an input, given a label and noise, to a scalar energy value, and Ω(θ | y, z) is the normalization function.

3.1 TRAINING WITH ACTIVATION NOISE

During training, we want the distribution defined by E_\theta to model the data distribution p_D(x, y), which we achieve by minimizing the negative log-likelihood L_ML(θ, q(z)) of the data:

(\theta^*, q^*(z)) = \arg\min_{\theta,\, q(z)} \mathcal{L}_{\mathrm{ML}}(\theta, q(z)), \qquad \mathcal{L}_{\mathrm{ML}}(\theta, q(z)) = \mathbb{E}_{(x,y) \sim p_D;\; z \sim q(z)}\left[-\log p_\theta(x \mid y, z)\right] \quad (4)

where q(z) is the distribution of the activation noise z during training. We explicate the relationships between activation noise during training and (i) dropout, (ii) loss regularization, and (iii) data augmentation. We start by presenting theoretical results illuminating that activation noise is a general form of dropout. For that, we define the negate of a primary random variable, based on which we derive the Activation Noise Generality Proposition, stating that dropout is a special case of activation noise.

Definition 1 (Negate Random Variable): We define the random variable W : Σ → E as the negate of the random variable X : ∆ → F, denoted by X ∦ W (where Σ, ∆, E, F ⊂ R), if the outcome of W negates the outcome of X; mathematically speaking, x + w = 0.

Proposition 1 (Activation Noise Generality Proposition): When the noise at a given neuron comes from the negate random variable Z with respect to the signal S, i.e., Z ∦ S, the activation noise drops out (the signal of) that neuron. Specifically, the summation of signal and noise becomes zero, s + z = 0, for all outcomes.

Proof: The proof follows from the definition of the negate random variable. □

This theoretical result implies that activation noise can be considered a general form of dropout: Fig. 1 visualizes how activation noise reduces to dropout. In Fig. 2 we compare the performance of activation noise with that of dropout, under the same simulation setting as for Fig. 3.
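To make Eq. 1 and Proposition 1 concrete, the following is a minimal sketch, assuming a PyTorch implementation; the module name NoisyActivation and its defaults are our own illustrative choices, not from the paper.

```python
import torch
import torch.nn as nn

class NoisyActivation(nn.Module):
    """Adds t = s + alpha * z_bar (Eq. 1) to a raw activation signal s."""
    def __init__(self, alpha: float = 1.0):
        super().__init__()
        self.alpha = alpha

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        z_bar = torch.randn_like(s)      # normalized noise z_bar ~ N(0, 1)
        return s + self.alpha * z_bar    # noisy output t

# Proposition 1: choosing the noise as the negate of the signal, z = -s,
# zeroes the unit exactly as dropout would.
s = torch.randn(4)
drop = torch.rand_like(s) < 0.5                  # Bernoulli drop decisions
z = torch.where(drop, -s, torch.zeros_like(s))   # negate noise on dropped units
t = s + z                                        # dropped units are exactly zero
print(t)
```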
Now we explain how activation noise relates to loss regularization and data augmentation. To that end, we consider the EBM setting leveraging multiple distinct autoencoders for classification: one autoencoder is used for each class. We first write the empirical error without noise in the form of the mean squared error (MSE) loss:

I^s_\theta = \sum_{y \in Y} \int_x \|f_\theta(x \mid y) - x\|^2\, p_D(x, y)\, dx = \sum_{y \in Y} \sum_k \int_{x_k} \left[f^k_\theta(x \mid y) - x_k\right]^2 p_D(x, y)\, dx_k \quad (5)

where f_\theta(x \mid y) denotes the model parameterized by θ given label y, and the energy is determined by E_\theta(x \mid y) = \|f_\theta(x \mid y) - x\|^2. Meanwhile, f^k_\theta(x \mid y) and x_k refer to the k-th element of the output of the model and of the desired target (i.e., the original input), respectively. With activation noise, however, we have

I_\theta = \sum_{y \in Y} \sum_k \int_z \int_{x_k} \left[f^k_\theta(x \mid y, z) - x_k\right]^2 p_D(x, y)\, q(z)\, dx_k\, dz \quad (6)

where f^k_\theta(x \mid y, z) denotes the k-th element of the output of the noisy model, and the noise z comes from the distribution q(z) during training. Expanding the network function f^k_\theta(x \mid y, z) into the signal response f^k_\theta(x \mid y) and the noisy response h^k_\theta(x \mid y, z),

f^k_\theta(x \mid y, z) = f^k_\theta(x \mid y) + h^k_\theta(x \mid y, z), \quad (7)

we can write Eq. 6 as I_\theta = I^s_\theta + I^z_\theta, where I^z_\theta encapsulates the noisy portion of the loss:

I^z_\theta = \sum_{y \in Y} \sum_k \int_z \int_{x_k} \left[h^k_\theta(x \mid y, z)^2 + 2\, h^k_\theta(x \mid y, z)\left(f^k_\theta(x \mid y) - x_k\right)\right] p_D(x, y)\, q(z)\, dx_k\, dz. \quad (8)

The term I^z_\theta can be viewed as a loss regularizer, specifically a Tikhonov regularizer, as shown in (Bishop, 1995). In general, not just for the MSE but for any arbitrary loss function v(f_\theta(x \mid y, z), x), we can rewrite the loss as a combination of a signal component and a noise component, and the same result holds. As suggested before, the second term of the error, besides being a regularizer, can also be seen as the loss over an auxiliary dataset interleaved with the primary dataset during training; in this sense, the noise augments the dataset (Goodfellow et al., 2016).
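As a concrete illustration of how Eq. 6 is optimized in practice, here is a minimal training sketch, assuming PyTorch; the toy fully-connected autoencoder, its sizes, and its class name are illustrative stand-ins, while the α-scaled Gaussian noise after every layer and the MSE objective follow the text. Each minibatch evaluates one Monte Carlo draw of the noisy loss.

```python
import torch
import torch.nn as nn

class NoisyAutoencoder(nn.Module):
    """Toy per-class autoencoder; every layer output gets t = s + alpha * z_bar."""
    def __init__(self, dim: int = 3 * 32 * 32, hidden: int = 256, alpha: float = 0.9):
        super().__init__()
        self.alpha = alpha
        self.enc, self.dec = nn.Linear(dim, hidden), nn.Linear(hidden, dim)

    def noisy(self, s: torch.Tensor) -> torch.Tensor:
        return s + self.alpha * torch.randn_like(s)   # Eq. 1

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.noisy(torch.relu(self.enc(x)))
        return self.noisy(self.dec(h))

model = NoisyAutoencoder()                 # one such model per class y
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 3 * 32 * 32)            # stand-in minibatch from class y
loss = ((model(x) - x) ** 2).mean()        # one Monte Carlo draw of Eq. 6
opt.zero_grad(); loss.backward(); opt.step()
```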
This section concerned how activation noise relates to dropout, loss regularization, and data augmentation. As alluded to, activation noise during training is beneficial for performance; but what if we used activation noise also during inference? We now answer this question: for the EBM, activation noise can also be beneficial at inference. Specifically, we first present the role of activation noise during inference in the EBM framework and then present the experiments, whose surprising results demonstrate the effectiveness of activation noise during inference.

3.2 INFERENCE WITH ACTIVATION NOISE

Consider the inference phase, assuming training has been done (i.e., θ* has been determined). Given a test data point x, we estimate the energy of our trained model E_{θ*}(x | y, z) under many different noise realizations z drawn from the inference noise distribution r(z), which are averaged to produce the energy. The noise distribution r(z) at inference can therefore be considered a sampler. Probabilistically speaking, we measure the expectation of the energy E_{θ*}(x | y, z) with respect to the distribution r(z):

\bar{E}_{\theta^*}(x \mid y) = \mathbb{E}_z\left[E_{\theta^*}(x \mid y, z)\right] = \int_z E_{\theta^*}(x \mid y, z)\, r(z)\, dz. \quad (9)

The variance of E_{θ*}(x | y, z) is determined by \sigma^2 = \int_z E_{\theta^*}(x \mid y, z)^2\, r(z)\, dz - \bar{E}_{\theta^*}(x \mid y)^2. In practice, because the integral in Eq. 9 is intractable, we perform inference via

\hat{E}_{\theta^*}(x \mid y) = \frac{1}{n} \sum_{i=1}^{n} E_{\theta^*}(x \mid y, z^{(i)}),

where the integral in Eq. 9 is numerically approximated by sampling from the distribution r(z), which generates the noise samples {z^{(i)}}. Finally, given an input x, the class label predicted by our EBM is the class with the smallest energy at x:

\hat{y} = \arg\min_{y' \in Y} \hat{E}_{\theta^*}(x \mid y').

This classification rule is a common formulation for inference derived from Bayes' rule, with one difference: in EBM classification we assign the input to the class whose energy is minimal. Finally, note that, as discussed in this section, activation noise not only generalizes dropout during training, as a regularization scheme, but also offers the opportunity of sampling the model during inference (to minimize the inference error), possibly using a wide range of noise distributions as the sampler; this is an advantage over dropout, which is applicable only during training. In the next section, we first present the simulation settings detailing the architectures used to conduct the experiments, and then the consequent results.

4 SIMULATION SETTING AND RESULTS

4.1 SIMPLE REPRESENTATIVE SIMULATION RESULTS

For the purpose of illustration, we first report only the part of our simulation setting pertaining to one dataset, a part that is representative of the general theme of our results. This suffices to motivate further theoretical and empirical study; in the next subsections, we provide a comprehensive account of our experiments. In this simulation, we trained 10 autoencoders, one for each class of the CIFAR-10 dataset. Our autoencoders incorporate the noise z̄, a random variable following the standard Gaussian distribution (zero mean and unit variance) that is multiplied by the scalar α for each neuron of each layer of the model, as presented in Eq. 1. We use convolutional neural networks (CNNs) with channel numbers [30, 73, 100] for the encoder, and its mirror for the decoder. The stride is 2, the padding is 1, and the window size is 4×4. We used MSE as the loss function, although binary cross-entropy (BCE) was also tried and produced similar results. The number of samples n is set to 1,000. This simulation setting is of interest in incremental learning, where only one or a few classes are present at a time (van de Ven et al., 2021). Moreover, classification via generative modeling fits the framework of predictive coding, a model of visual processing which proposes that the brain possesses a generative model of the input signal, with the prediction loss contributing as a means for both learning and attention (Keller & Welling, 2021). Fig. 3 shows the performances (classification accuracies) as a function of the scalar α of the Gaussian noise during both training and test. The omnipresent activation noise, present in both training and inference, exhibits two surprising behaviors in our simulations: (i) it significantly improves the performance (by about 200%), which is unprecedented among similar studies in the literature; (ii) being on or close to the diagonal always leads to the best results. These observations are discussed next.
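Combining the inference rule above with the Section 4.1 architecture, a hedged sketch in PyTorch follows; the NoisyConvAE class mirrors the stated [30, 73, 100] encoder (kernel 4, stride 2, padding 1), while noisy_energy and classify are hypothetical helper names for the Monte Carlo estimate Ê and the argmin rule.

```python
import torch
import torch.nn as nn

class NoisyConvAE(nn.Module):
    """Encoder channels [30, 73, 100], mirrored decoder; additive activation
    noise alpha * z_bar after every layer (Eq. 1)."""
    def __init__(self, alpha: float = 0.9):
        super().__init__()
        self.alpha = alpha
        pairs = [(3, 30), (30, 73), (73, 100)]
        self.blocks = nn.ModuleList(
            [nn.Conv2d(i, o, 4, 2, 1) for i, o in pairs]
            + [nn.ConvTranspose2d(o, i, 4, 2, 1) for i, o in reversed(pairs)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            s = torch.relu(block(x))
            x = s + self.alpha * torch.randn_like(s)
        return x

@torch.no_grad()
def noisy_energy(model: nn.Module, x: torch.Tensor, n: int = 1000) -> torch.Tensor:
    """E_hat(x|y) = (1/n) * sum_i E(x|y, z_i); fresh noise z_i per forward pass."""
    e = torch.stack([((model(x) - x) ** 2).flatten(1).sum(1) for _ in range(n)])
    return e.mean(0)

@torch.no_grad()
def classify(models, x: torch.Tensor, n: int = 1000) -> torch.Tensor:
    """y_hat = argmin_y E_hat(x|y) over the per-class autoencoders."""
    return torch.stack([noisy_energy(m, x, n) for m in models]).argmin(0)
```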
4.2 THEORIES FOR THE SIMULATION RESULTS

We first ask: why is noise so effective in improving the performance? To answer, we reiterate that noise plays two different roles, at (i) training and (ii) inference. For the first role, as alluded to before, what activation noise actually does during training is regularization. We showed that activation noise is a general case of dropout; we then demonstrated that it is a form of loss regularization and also an instance of data augmentation. All of these fall under the umbrella of regularization. Meanwhile, during the test phase, activation noise performs sampling, which reduces the inference error. In aggregate, therefore, regularization hand in hand with sampling accounts for this performance boost (about 200%).

The other interesting observation in our simulations is that, to attain the highest performance, it is best for the noise scalar α (and, in general, for the distributions) of the training and test noise to be equal. This is clearly observed in Fig. 3 (and in all other numerical results presented later). Why is there such a strong correlation? Shouldn't the two roles of noise during training and test be mutually independent? Where and how do these two roles link to each other? Upon empirical and theoretical investigation, we concluded that the interrelation between noise during training and test can be characterized and proved in the following theorem.

Theorem 1 (Noise Distribution Must be the Same During Training and Test): When the inference noise (i.e., the sampler) r(z) follows the same distribution as the noise during training q*(z), i.e., r*(z) = q*(z), the loss of the inference is minimized.

Proof: During inference, to optimize the performance of the EBM model, the objective is to find the distribution r*(z) for the activation noise that minimizes the loss:

r^*(z) = \arg\min_{r(z)} \mathcal{L}_{\mathrm{ML}}(\theta^*, r(z)), \qquad \mathcal{L}_{\mathrm{ML}}(\theta^*, r(z)) = \mathbb{E}_{(x,y) \sim p_D;\; z \sim r(z)}\left[-\log p_{\theta^*}(x \mid y, z)\right]. \quad (10)

Meanwhile, from Eq. 4 we know that training minimizes the loss L_ML(θ*, q*(z)) with the distribution q*(z) as the activation noise. Therefore, during inference, if we set the activation noise r*(z) equal to q*(z) in Eq. 10, the loss is again minimal. □

This implies that, during training, the distribution of our trained model is shaped by the noise distribution q(z) through the noise-dependent term I^z_\theta of the loss (Eqs. 6-8), so the network learns the distribution of the signal in the presence of a particular noise distribution. During inference, when the noise distribution matches that of training, the performance is best; as soon as there is a discrepancy between the two distributions, the performance degrades.
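The proof can be written out in one more symbolic step, a sketch relying on the paper's joint minimization in Eq. 4: because training already selects q*(z) as the minimizer of the very functional that Eq. 10 asks inference to minimize (with θ fixed at θ*), the two optima coincide.

```latex
q^*(z) \;=\; \arg\min_{q(z)} \, \mathcal{L}_{\mathrm{ML}}(\theta^*, q(z))
\qquad\Longrightarrow\qquad
r^*(z) \;=\; \arg\min_{r(z)} \, \mathcal{L}_{\mathrm{ML}}(\theta^*, r(z)) \;=\; q^*(z)
```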
4.3 COMPREHENSIVE SIMULATION RESULTS FOR GAUSSIAN NOISE

We present extensive simulation results for various datasets (using Google Colab Pro+) considering Gaussian activation noise. We used the same architecture as in Section 4.1 for all five datasets: CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), CalTech-101 (Fei-Fei et al., 2004), Flower-102 (Nilsback & Zisserman, 2008), and CUB-200 (Wah et al., 2011). Given the large volume of our experiments, for computational tractability we slimmed down the larger datasets from about 100 (or 200) classes to 20 classes. Our comprehensive simulation results explore the performance of our proposed EBM framework across five dimensions: (i) for five datasets we examined the joint impacts of various noise distributions. The noise scalar α varies during both (ii) training and (iii) test time within the range [0, 2), over 20 values with a step size of 0.1. We do not report all of them due to the page limit; instead, we selectively present values that are representative of the two key observations we intend to highlight: (0.5, 0.5), (0.5, 0.9), (0.5, 1.5), (0.9, 0.5), (0.9, 0.9), (0.9, 1.5), and (1.5, 1.5), where the pair (·, ·) denotes the scalar α of the noise at training and test. (iv) The number of samples n is set to 10^4, except for the cases of no/zero test noise, in which n = 1, because when the scalar α of the noise during test is 0 (no noise), as in rows (0.0, 0.0) and (0.5, 0.0), the corresponding accuracy would not vary. Finally, (v) we assess the performance of different noise distributions by exploring all of their permutations.

In our simulations we resize all images to 32×32. The number of epochs is set to 100. For the optimizer, Adam is chosen with the default learning rate of 0.001 and default momentum (Kingma & Ba, 2014). The minibatch size is 256. We ran 10 experiments for each number reported in Table 1 and present the means as well as the standard errors of the means (SEMs) over these runs.

The first observation we emphasize is that this very activation noise can contribute a considerable improvement in performance, as high as 200%. This can be discerned in Table 1 by comparing the row (0.0, 0.0), pertaining to the case with no noise, with the row (0.9, 0.9): the accuracy jumps from 20.48% to 65.88% for the CIFAR-10 dataset with n = 10,000 samples. The same phenomenon is observed for all other datasets. Compared with previous studies in the literature on the effectiveness of noise, this level of performance enhancement acquired by injecting a simple noise is unprecedented, if not unimaginable. This performance enhancement, as we theorized and proved in Theorem 1, is a result of the combination of both regularization and sampling, not independently but in a complicated interdependent fashion (see Fig. 3 to discern the interdependency).

The second observation signifies the importance of balancing the noise injected at training and test: the noise distribution at test r(z) ought to follow the distribution of the noise during training q(z) so that the inference loss is minimized, as discussed in Theorem 1. This can be noticed when contrasting the balanced rows (0.5, 0.5), (0.9, 0.9), and (1.5, 1.5) with the rest, which are unbalanced. Interestingly, even the row (0.0, 0.0), which has no noise, outperforms the unbalanced rows (0.0, 0.5), (0.5, 0.9), (0.5, 1.5), and (0.9, 1.5).

The third observation is the asymmetry between unbalanced noise at training versus test. Too large a noise scalar α during test has a far more negative impact on the performance than the reverse. For example, consider rows (0.5, 0.0) and (0.0, 0.5): the former (larger noise scalar α at training) delivers about 100% higher accuracy than the latter, which bears the higher noise scalar α at test. This pattern repeats without exception in all rows where the noise scalar during test is larger than that of the training noise, such as (0.0, 0.5), (0.5, 0.9), (0.5, 1.5), and (0.9, 1.5); all these cases yield the poorest performances, even worse than (0.0, 0.0). This observation (as well as the two preceding ones) is better discerned in Fig. 3, which portrays the heatmap of the performance results.
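For concreteness about the Table 1 reporting protocol, a small self-contained helper for the mean ± SEM computation over the 10 runs; the example accuracies are illustrative placeholders, not the paper's measurements.

```python
import numpy as np

def mean_sem(values):
    """Mean and standard error of the mean (SEM) over repeated runs."""
    v = np.asarray(values, dtype=float)
    return v.mean(), v.std(ddof=1) / np.sqrt(len(v))

runs = [65.1, 66.0, 65.7, 66.3, 65.5, 66.1, 65.9, 65.4, 66.2, 65.8]  # placeholders
m, sem = mean_sem(runs)
print(f"accuracy = {m:.2f} +/- {sem:.2f}")
```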
4.4 INVESTIGATING OTHER NOISE DISTRIBUTIONS

We now evaluate the performance of alternative noise distributions. We consider both symmetric and asymmetric distributions: (i) Cauchy, (ii) uniform, (iii) Chi, (iv) lognormal, (v) t-distribution, and (vi) exponential. For distributions that can be either symmetric or not (e.g., uniform and Cauchy), we explore parameters that keep them symmetric around zero, because our experiments (as we will see) convince us that symmetric distributions (i.e., symmetric about zero) consistently yield higher performance.

For the different noise distributions, the underlying rule that explains the results can be summarized as follows: when the activation noise z is added to the signal s, it is conducive to performance as long as it does not entirely distort the signal. In other words, the signal-to-noise ratio (SNR) can be beneficially decreased down to a threshold that delivers the peak performance; beyond that, the performance begins to degrade. Given that t = s + z = s + α z̄, and s is fixed, two factors determine the SNR (given by var[s]/var[z]): (i) the shape of the distribution of z̄, particularly how narrowly (or broadly) its density is spread (i.e., var[z̄]); and (ii) how large the scalar α is. Fig. 4 presents the profiles of the different probability distributions; note their breadths, as they play a vital role in how well they perform. In the following we discuss our results with respect to the value of the SNR.

As we can see in Fig. 4, (i) adopting symmetric probability distributions as activation noise substantially outperforms asymmetric ones: Cauchy, Gaussian, uniform, and t-distribution consistently yield higher accuracies than Chi, exponential, and lognormal. This is because symmetric sampling can more effectively explore the learned manifold of the neural network. (ii) The performance of the different noise distributions can be explained by the value of the SNR: as we reduce the SNR by increasing the noise scalar α, the performance rises to a peak and then falls; this holds for all the probability distributions. The differences in slope across distributions (in Fig. 4, right) originate from differences in the profiles of the noise distributions: the wider the noise profile (the larger var[z̄]), the sooner the SNR drops, and the earlier the rise-and-fall pattern occurs. For example, in Fig. 4, because the Gaussian distribution is narrower than the Cauchy, its rise-and-fall pattern happens with a small delay, since its SNR drops with a latency.
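A minimal NumPy sketch of samplers for the distributions compared here, using the parameterizations detailed later in Appendix A (symmetric families centered at zero); the function name sample_noise and its default parameter values are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_noise(kind: str, size, **p):
    """Draw normalized noise z_bar for Eq. 1; see Appendix A for parameters."""
    if kind == "gaussian":                            # N(0, sigma^2)
        return rng.normal(0.0, p.get("sigma", 1.0), size)
    if kind == "cauchy":                              # z0 = 0, scale gamma
        return p.get("gamma", 1.0) * rng.standard_cauchy(size)
    if kind == "uniform":                             # symmetric, width omega
        w = p.get("omega", 1.0)
        return rng.uniform(-w / 2.0, w / 2.0, size)
    if kind == "t":                                   # Student t with nu dof
        return rng.standard_t(p.get("nu", 2.0), size)
    if kind == "chi":                                 # asymmetric: sqrt of chi-square
        return np.sqrt(rng.chisquare(p.get("k", 1.0), size))
    if kind == "lognormal":                           # mu = 0, shape sigma
        return rng.lognormal(0.0, p.get("sigma", 1.0), size)
    if kind == "exponential":                         # asymmetric, rate lambda
        return rng.exponential(1.0 / p.get("lam", 1.0), size)
    raise ValueError(f"unknown distribution: {kind}")
```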
5 CONCLUSION

In this paper, specifically for an EBM setting, we studied the impact of activation noise during training, broached its role at inference time, and scrutinized the relationship between activation noise at training and inference. We proved that, during training, activation noise is a general form of dropout, and we discussed the relationship between activation noise and loss regularization/data augmentation. We studied the empirical and theoretical aspects of activation noise at inference time and proved that adopting noisy activation functions during inference can be interpreted as sampling the model. We proved and demonstrated that when the sampler follows the same distribution as the noise during training, the inference loss is minimized and therefore the performance is optimized. To assess our theoretical results, we performed extensive experiments exploring different datasets, noise scalars during training and inference, and different noise distributions.

A MORE DETAILS ON THE PERFORMANCE RESULTS OF VARIOUS NOISE DISTRIBUTIONS

In the following we provide the specifications of our probability distributions. As shown in Fig. 5, for the Cauchy distribution g(z̄; z0, γ) we set z0, which characterizes the mean of the Cauchy distribution, to zero, and explore different values of γ. Likewise for the uniform distribution, the mean is set to zero: while the uniform distribution originally has two parameters characterizing the start and end of the random variable, in our simulations, because the uniform noise is desired to be symmetric, we define a single parameter, denoted by ω, referring to its width. For the Chi distribution g(z̄; k) we assess performance for different values of k, whereas for the lognormal distribution g(z̄; µ, σ) the value of µ is set to zero and σ is varied. For the t-distribution g(z̄; ν), different values of ν are explored; it is worth mentioning that for ν = 1 the t-distribution reduces to the Cauchy distribution. Finally, for the exponential distribution g(z̄; λ), we evaluate performance for different values of λ. Our experiments are performed 10 times with multiple random seeds, and we report the means (± SEMs) over these runs.

Overall, based on the performance across all values of the noise scalar α (as shown in Fig. 6), we conclude that the best noise distribution is also the most popular one, the Gaussian, perhaps because it has the largest area under the curve across different values of the noise scalar α. Furthermore, it is clear that distributions similar to the Gaussian deliver the best performance, whereas dissimilar ones perform worse. Meanwhile, we found that varying the standard deviation σ of the Gaussian distribution does not offer convincing improvements and only hastens or delays the occurrence of the rise-and-fall pattern of the classification accuracy.

B IMPACT OF THE NUMBER OF SAMPLES

In Fig. 7 we demonstrate the impact of the number of samples for the standard Gaussian noise z̄ with different noise scalar values α during training and test. The accuracy rises almost logarithmically as the number of samples increases. Note that these results pertain to the CIFAR-10 dataset. In Table 2 we provide the comprehensive form of Table 1, including the classification accuracies for different numbers of samples n.
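The mechanism behind the roughly logarithmic gains in Fig. 7 is plain Monte Carlo convergence: the variance of the estimate Ê shrinks as 1/n. A self-contained illustration follows, with a synthetic energy distribution standing in for E_{θ*}(x | y, z); the numbers are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
true_energy, noise_std = 3.0, 5.0                  # synthetic stand-in values
draws = true_energy + noise_std * rng.normal(size=100_000)

for n in [10, 100, 1_000, 10_000, 100_000]:
    e_hat = draws[:n].mean()                       # E_hat with n noise samples
    print(f"n={n:>6}  E_hat={e_hat:.3f}  |error|={abs(e_hat - true_energy):.3f}")
```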
C DIFFERENT NOISE DISTRIBUTIONS DURING TRAINING AND INFERENCE

We examine the performance of combinations of nonidentical noise distributions: adopting two different distributions, one for training and the other for inference. The parameters of the different noise distributions are outlined in Table 3. The noise scalar α is set to one for both training and test. The results are displayed in Fig. 8, which shows the performance for every pair of noise distributions in a 7×7 heatmap grid. The empirical results confirm the theoretical conclusion proposed in Theorem 1: the optimal performance during inference is acquired when the same noise distribution is used during training and inference. In Fig. 8 it can be seen that the best results pertain to our four symmetric noise distributions, whereas the worst occur when an asymmetric noise is adopted for training and a symmetric noise for inference (to sample). This result is in accordance with our third observation discussed in Section 4.3 and further corroborates Theorem 1. In Fig. 8 we can also see that using symmetric noise during training and asymmetric noise during inference does not degrade the performance as severely as the reverse. This was also observed and explained in Section 4.4.

D LAYER-SELECTIVE NOISE FOR THE EBM SETTING

So far, we have injected noise into all the layers of our neural networks. In this section, we study the case where we inject noise into only a subset of the layers of the generative EBM model. We design and compare five noise-injection schemes: (i) full noise (the default), where noise is injected into all layers (denoted n-full); (ii) and (iii) odd/even noise, where noise is injected into all odd/even layers (denoted n-odd and n-even); and (iv) and (v) encoder/decoder noise, where noise is injected into the encoder/decoder layers (denoted n-enc and n-dec). We used standard Gaussian noise with scalar α = 1, pertaining to the peak performance, for both training and test. Fig. 9 presents the accuracies of our neural network under all the noise schemes above. As can be seen, injecting noise into all layers yields higher performance than the alternatives; n-even and n-odd come next, while n-enc and n-dec are the worst. Fig. 9 also jointly investigates the impact of injecting noise into different layers of the neural network at both training and inference. We observe results akin to those in Fig. 3, but now per layer: the noise injected into a specific layer during training and inference should follow the same distribution. For example, when noise is injected into the even layers (n-even) of the neural network during training, none of the other noise-injection schemes during inference exceeds the performance of n-even injection. Also, not injecting noise into a specific layer during training while injecting noise into that layer during inference significantly reduces the performance. Accordingly, inspired by these results, we propose Corollary 1, which generalizes Theorem 1.

Corollary 1 (General Noise Distribution for the Sampler During Test): When the sampler r(z, v) follows both the same placement and the same distribution of the noise during training q*(z, v), i.e., r*(z, v) = q*(z, v), where v encodes the placement of the noise among the layers, the loss of the inference is minimized.

Clearly, when the placement parameter v is selected such that all layers are included for noise injection, Corollary 1 reduces to Theorem 1.
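A hedged sketch of the five placement schemes, assuming PyTorch; the mask v of Corollary 1 becomes a per-layer boolean list, and the toy four-layer autoencoder and all names are illustrative.

```python
import torch
import torch.nn as nn

def placement(scheme: str, num_layers: int):
    """Boolean mask v over layers: where activation noise is injected."""
    idx = range(num_layers)
    return {"n-full": [True] * num_layers,
            "n-odd":  [i % 2 == 1 for i in idx],
            "n-even": [i % 2 == 0 for i in idx],
            "n-enc":  [i < num_layers // 2 for i in idx],
            "n-dec":  [i >= num_layers // 2 for i in idx]}[scheme]

class LayerSelectiveNoisyAE(nn.Module):
    def __init__(self, scheme: str = "n-full", alpha: float = 1.0):
        super().__init__()
        self.alpha = alpha
        dims = [(784, 256), (256, 64), (64, 256), (256, 784)]
        self.blocks = nn.ModuleList(nn.Linear(a, b) for a, b in dims)
        self.mask = placement(scheme, len(self.blocks))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for noisy, block in zip(self.mask, self.blocks):
            x = torch.relu(block(x))
            if noisy:                                 # inject only where v allows
                x = x + self.alpha * torch.randn_like(x)
        return x

x = torch.rand(8, 784)
out = LayerSelectiveNoisyAE("n-even")(x)              # noise on even-indexed layers
```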
E ACTIVATION NOISE IN DISCRIMINATIVE MODELING

So far, we have considered only generative EBM modeling. In this section, we examine the impact of activation noise on the performance of classifiers relying on discriminative modeling. We conduct our study on two popular datasets, CIFAR-10 and CIFAR-100, with seventeen architectures as listed in Table 4. In this experiment, we train for 30 epochs; the optimizer is Adam with learning rate 0.001 and default momentum, and the learning-rate scheduler is cosine annealing. Fig. 10 demonstrates the results of our experiments: as we can see in Fig. 10, which exhibits the results on the CIFAR-10 dataset for different architectures, the performance reliably deteriorates as we increase the scalar of the omnipresent (standard Gaussian) activation noise, even with balanced noise and n = 1000 samples. Meanwhile, in Table 5, we report the performance of all architectures on both CIFAR-10 and CIFAR-100 with three noise levels: (i) no noise, α = 0.0; (ii) mild noise, α = 0.5; and (iii) strong noise, α = 1.0. Based on our observations, we are convinced that adding even a mild level of balanced noise is detrimental to the performance of the discriminative modeling scenario across architectures and datasets.

To also investigate unbalanced noise injection, we provide Fig. 11, which shows the performance heatmap of the discriminative classifier using the ResNet50 architecture on CIFAR-10 as we increase the noise scalar α during both training and inference. We can discern that in the unbalanced case, the performance drop is more severe than in the balanced case. Moreover, we make three noteworthy observations in Fig. 11, reminiscent of the observations in Section 4. (i) In Fig. 11, the diagonal represents balanced noise injection. The performance consistently deteriorates when moving from the top-left entry (no noise) to the bottom-right entry (intense noise); this is exactly the opposite of the observation made in Fig. 3, where injecting balanced noise reliably and significantly increased the performance. (ii) We observe in Fig. 11 that unbalanced noise worsens the performance, which is in accordance with the observation in Fig. 3. Finally, (iii) just as in Fig. 3, there is an asymmetry in performance under unbalanced noise: strong noise during inference with weak noise during training delivers noticeably worse performance than the reverse. Our results for the CIFAR-10 dataset in Figs. 10 and 11 carry over to the CIFAR-100 dataset; hence, we refrain from reporting the CIFAR-100 results in separate figures. From the results reported in Figs. 10 and 11, and in Table 5, it becomes clear that, in contrast to generative modeling, in discriminative modeling the activation noise (injected into all layers during both training and inference) indeed worsens the performance.
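For contrast with the generative setting, a minimal discriminative stand-in (not one of the seventeen Table 4 architectures) showing where the omnipresent noise enters a classifier, assuming PyTorch; per Table 5, even mild noise here tends to hurt accuracy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyClassifier(nn.Module):
    """Discriminative counterpart: activation noise after every layer, but the
    output is class logits rather than a reconstruction energy."""
    def __init__(self, alpha: float = 0.5, num_classes: int = 10):
        super().__init__()
        self.alpha = alpha
        self.convs = nn.ModuleList([nn.Conv2d(3, 32, 4, 2, 1),
                                    nn.Conv2d(32, 64, 4, 2, 1)])
        self.head = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for conv in self.convs:
            s = torch.relu(conv(x))
            x = s + self.alpha * torch.randn_like(s)   # Eq. 1 at every layer
        return self.head(x.flatten(1))

model = NoisyClassifier(alpha=0.5)
x = torch.rand(16, 3, 32, 32)
y = torch.randint(0, 10, (16,))
loss = F.cross_entropy(model(x), y)                    # standard discriminative loss
```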
F ON WHY ACTIVATION NOISE IS EFFECTIVE FOR GENERATIVE MODELING BUT NOT DISCRIMINATIVE MODELING

We do not have a rigorous mathematical proof of why activation noise is effective for generative modeling but not for discriminative modeling; however, we can conjecture the reason. To this end, we take a step back and consider what neural networks do. In the broad mathematical sense, a neural network approximates an often nonlinear target function j(·) that maps an input variable X to an output variable Y, where X and Y come from the training data D. Specifically, the aim is to learn the target function j(·) from the training data D by having j(·) perform curve-fitting on D. In both the discriminative and the generative modeling scenario, the dimensionality of the input space X is the same; the difference emerges in the output space: in discriminative modeling, the output space has far fewer dimensions than in generative modeling. For example, on the CIFAR-10 dataset, the output space Y of the generative model has 32×32 = 1024 dimensions, whereas the discriminative model has only 10. Meanwhile, as the number of dimensions grows, the curse of dimensionality comes into play: the training data D becomes exponentially insufficient, the data become sparsely distributed in the space, and instead of the desired smooth, continuous data manifold, we obtain patches of data scattered across the space. This makes it hard for j(·) to map the input to the output, because neural networks, as hinted at by the Universal Approximation Theorem (Hornik et al., 1989), strictly speaking cannot, and practically speaking severely struggle to, approximate functions whose samples in the input space are not smooth. The activation noise, as proved in the main text where we linked activation noise to data augmentation, augments the primary dataset D with an auxiliary dataset D′ that smooths the data manifold, thereby mitigating the curse of dimensionality; when the output has fewer dimensions, as in discriminative modeling, this problem is less pronounced.

That said, one question may arise: the explanations so far justify adopting activation noise during training as a regularization scheme; how does activation noise during inference come into play and interact with the activation noise during training? When we make the manifold of our dataset D more continuous and smooth during training by augmenting it with D′, we instruct the neural network to learn an enhanced manifold instead of the original one: a manifold that is smoother, more continuous, and stretched. During inference we compute the likelihood of a given sample x under our model; if the noise scalar is considerably larger than that of training time, it stretches the cloud of sample points. The manifold learned by the neural network has not seen, and has not incorporated, the far points of this cloud during training; hence, the network produces unreliable, noisy likelihoods for these outlier points. This is why the performance severely deteriorates when the noise at inference has a larger scalar than that of training (even worse than no sampling/noise). In other words, a larger noise scalar α at inference causes the cloud of noise samples to fall outside the convex hull of the dataset on which the neural network was trained. Eventually, as we presented in Theorem 1, the sampling cloud must have the same shape/distribution as the training noise cloud used to smooth the dataset D.
1. What is the focus of the paper regarding EBM-based classification?
2. What are the strengths and weaknesses of the paper, particularly in theory and experimentation?
3. Do you have any concerns or questions about the formulation and objective function used in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions or recommendations for improving the paper or its contributions?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

The authors investigate the effect of activation noise (additive noise applied after the activation function) on an EBM-based classification approach. They mathematically describe activation noise during training and inference and come to the conclusions that (i) activation noise can be understood as a generalisation of dropout and (ii) the loss at inference is minimised when the same distribution of noise is used during training and inference. In their experiments, the authors demonstrate the latter point (ii) empirically and compare the effect of different activation noise distributions on classification accuracy. They find that symmetric noise distributions are to be preferred.

Strengths And Weaknesses

Strengths: I believe the topic is important and the community can benefit from papers taking a theoretical look at regularisation methods like this. The paper is well written and includes some interesting insights. I appreciate the experiments done by the authors, especially that the benefits of noise during inference are investigated.

Weaknesses: Unfortunately, some of the theory is not completely clear to me and I am not sure I agree; see below. The authors say that dropout is a special case, but they do not include it in their experiments. This is unfortunate, because dropout is so widely used. I think the authors miss a point about the connection between dropout and activation noise.

Detailed comments:

Most importantly, I think the formulation in Eq. 4 might be problematic. The authors give their objective as jointly optimising the network parameters θ and the noise distribution q(z) to maximise the conditional log-likelihood of their training data drawn from the distribution p_D(x, y). However, their model p_θ(x|y,z) is conditioned on the noise z instead of integrating over it. This objective essentially tries to make p_θ(x|y,z) approximate p_D(x|y) (which could be derived from p_D(x, y)) for each z sampled from q(z). Considering now that p_D(x|y) does not depend on z, the optimal solution for this objective would be to make q(z) infinitely concentrated on 0, as z cannot contribute to better approximating p_D(x|y), right? Later in Eq. 9, when the authors discuss inference, they use a different formulation, integrating the energy over the noise and then choosing the class with the lowest integrated energy. Here, the noise becomes part of the model. I think this approach is more sensible, but it is in conflict with Eq. 4, where we are integrating the log probabilities. This is different because of the partition function, right?

Regarding viewing dropout as a special case of activation noise, I agree with this perspective. However, in contrast to the perspective taken by the authors, the activation noise distribution that would result in dropout behaviour would have to depend on the input x, as it has to (with a certain probability) exactly cancel out the activation of the neuron. The types of activation noise considered in the paper generally don't depend on x. Does the finding that we should use noise during inference hold for dropout?

Clarity, Quality, Novelty And Reproducibility

The paper is clearly written. The paper is novel as far as I can tell. I believe the paper is reproducible.
In this paper, we study those issues: for the EBM setting, for the first time, we study the empirical and theoretical aspects of activation noise not only during training time but also at inference and discuss how these two roles relate to each other. We prove that, during training, activation noise (Gulcehre et al., 2016) is a general form of dropout (Srivastava et al., 2014). This is interesting because dropout has been widely adopted as a regularization scheme. We then formulate and discuss the relationship between activation noise and two other key regularization schemes: loss regularization and data augmentation. We also prove that, during inference, adopting activation noise can be interpreted as sampling the neural network. Accordingly, with activation noise during inference, we estimate the energy of the EBM. Surprisingly, we discover that there is a very strong interrelation between the distribution of activation noise during training and inference: the performance is optimized when those two follow the same distributions. We also prove how to find the distribution of the noise during inference to minimize the inference error, thereby maximizing the performance as high as 200%. Overall, our main contributions in this paper are as follows: • We prove that, during training, activation noise is a general form of dropout. Afterward, we establish the connections between activation noise and loss regularization/data augmentation. With activation noise during inference as well as training, we observe about 200% improvement in performance (classification accuracy), which is unprecedented. Also, we discover/prove that the performance is maximized when the noise in activation functions follow the same distribution during both training and inference. • To explain this phenomenon, we provide theoretical results that illuminate the two strikingly distinct roles of activation noise during training and inference. We later discuss their mutual influence on the performance. To examine our theoretical results, we provide extensive experiments for five datasets, many noise distributions, various noise values for the noise scalar α, and different number of samples. 2 RELATED WORKS Our study touches upon multiple domains: (i) neuroscience, (ii) regularization in machine learning, (iii) generative energy-based modeling, and (iv) anomaly detection and one-class classification. (i) Studying the impact of noise in artificial neural networks (ANNs) can aid neuroscience to understand the brain’s operation (Lindsay, 2020; Richards et al., 2019). From neuroscience, we know that neurons of the brain (as formulated by Eq. 1) never produce the same output twice even when the same stimuli are presented because of their internal noisy biological processes (Ruda et al., 2020; Wu et al., 2001; Romo et al., 2003). Having a noisy population of neurons if anything seems like a disadvantage (Averbeck et al., 2006; Abbott & Dayan, 1999); then, how does the brain thwart the inevitable and omnipresent noise (Dan et al., 1998)? We provide new results on top of current evidence that noise can indeed enhance both the training (via regularization) and inference (by error minimization) (Zylberberg et al., 2016; Zohary et al., 1994). (ii) Injecting noise to neural networks is known to be a regularization scheme: regularization is broadly defined as any modification made to a learning algorithm that is intended to mitigate overfitting: reducing the generalization error but not its training error (Kukacka et al., 2017). 
Regularization schemes often seek to reduce overfitting (reduce generalization error) by keeping weights of neural networks small (Xu et al., 2012). Hence, the simplest and most common regularization is to append a penalty to the loss function which increases in proportion to the size of the weights of the model. However, regularization schemes are diverse (Moradi et al., 2020); in the following, we review the popular regularization schemes: weight regularization (weight decay) (Gitman & Ginsburg, 2017) penalizes the model during training based on the magnitude of the weights (Van Laarhoven, 2017). This encourages the model to map the inputs to the outputs of the training dataset such that the weights of the model are kept small (Salimans & Kingma, 2016). Batch-normalization regularizes the network by reducing the internal covariate shift: it scales the output of the layer, by standardizing the activations of each input variable per mini-batch (Ioffe & Szegedy, 2015). Ensemble learning (Zhou, 2021) trains multiple models (with heterogeneous architectures) and averages the predictions of all of them (Breiman, 1996). Activity regularization (Kilinc & Uysal, 2018b) penalizes the model during training based on the magnitude of the activations (Deng et al., 2019; Kilinc & Uysal, 2018a). Weight constraint limits the magnitude of weights to be within a range (Srebro & Shraibman, 2005). Dropout (Srivastava et al., 2014) probabilistically removes inputs during training: dropout relies on the rationale of ensemble learning that trains multiple models. However, training and maintaining multiple models in parallel inflicts heavy computational/memory expenses. Alternatively, dropout proposes that a single model can be leveraged to simulate training an expo- nential number of different network architectures concurrently by randomly dropping out nodes during training (Goodfellow et al., 2016). Early stopping (Yao et al., 2007) monitors the model’s performance on a validation set and stops training when performance starts to degrade (Goodfellow et al., 2016). Data augmentation, arguably the best regularization scheme, creates fake data and augments the training set (Hernandez-Garcia & Konig, 2018). Label smoothing (Lukasik et al., 2020) is commonly used in training deep learning (DL) models, where one-hot training labels are mixed with uniform label vectors (Meister et al., 2020). Smoothing (Xu et al., 2020) has been shown to improve both predictive performance and model calibration (Li et al., 2020b; Yuan et al., 2020). Noise schemes inject (usually Gaussian) noise to various components of the machine learning (ML) systems: activations, weights, gradients, and outputs (targets/labels) (Poole et al., 2014). In that, noise schemes provide a more generic and therefore more applicable approach to regularization that is invariant to the architectures, losses, activations of the ML systems, and even the type of problem at hand to be addressed (Holmstrom & Koistinen, 1992). As such, noise has been shown effective for generalization as well as robustness of a variety of ML systems (Neelakantan et al., 2015). (iii) Our simulation setting in this paper follows that of Generative EBM (we briefly say EBM henceforth) (LeCun et al., 2006). EBM (Nijkamp et al., 2020) is a class of maximum likelihood model that maps each input to an un-normalized scalar value named energy. 
EBM is a powerful model that has been applied to many different domains, such as structured prediction (Belanger & McCallum, 2016), machine translation (Tu et al., 2020), text generation (Deng et al., 2020), reinforcement learning (Haarnoja et al., 2017), image generation (Xie et al., 2016), memory modeling (Bartunov et al., 2019), classification (Grathwohl et al., 2019), continual learning (Li et al., 2020a), and biologicallyplausible training (Scellier & Bengio, 2017). (iv) Our EBM setting leverages separate autoencoders (Chen et al., 2018) for classification, in that it resembles anomaly detection scenarios (Zhou & Paffenroth, 2017; An & Cho, 2015) and also one-class classification (Ruff et al., 2018; Liznerski et al., 2020; Perera & Patel, 2019; Sohn et al., 2020). 3 EBM AND ACTIVATION NOISE DURING TRAINING AND INFERENCE EBM is a class of maximum likelihood model that determines the likelihood of a data point x ∈ X ⊆ RD using the Boltzmann distribution: pθ(x) = exp(−Eθ(x)) Ω(θ) , Ω(θ) = ∫ x∈X exp(−Eθ(x))dx (2) where Eθ(x) : RD → R, known as the energy function, is a neural network parameterized by θ, that maps each data point x to a scalar energy value, and Ω(θ) is the partition function. To solve the classification task in EBM utilizing activation noise, we adjust the general formulation of EBM in Eq. 2 as follows: given a class label y in a discrete set Y and the activation noise z during training, for each input x, we use the Boltzmann distribution to define the conditional likelihood as follows: pθ(x | y,z) = exp (−Eθ(x | y,z)) Ω(θ | y,z) , Ω(θ | y,z) = ∫ x∈X exp (−Eθ (x | y,z)) dx (3) where Eθ(x | y,z) : (RD,N,RF ) → R is the energy function that maps an input given a label and noise to a scalar energy value Eθ(x | y,z), and Ω(θ | y,z) is the normalization function. 3.1 TRAINING WITH ACTIVATION NOISE During training, we want the distribution defined by Eθ to model the data distribution pD(x, y), which we achieve by minimizing the negative log likelihood LML(θ, q(z)) of the data as follows: (θ∗, q∗(z)) = argmin θ,q(z) LML(θ, q(z)), LML(θ, q(z)) = E(x,y)∼pD; z∼q(z) [− log pθ(x | y,z)] (4) where q(z) is the distribution of the activation noise z during training. We explicate the relationships between activation noise during training and (i) dropout, (ii) loss regularization, and (iii) data augmentation. We start by presenting theoretical results illuminating that activation noise is a general form of dropout. For that, we define the negate distribution of a primary distribution, based on which we derive Activation Noise Generality Proposition, stating that dropout is a special case of activation noise. Definition 1 (Negate Random Variable): We define random variable W : Σ → E as the negate of random variable X : ∆ → F , denoted by X ∦ W (where Σ,∆,E,F ⊂ R) if the outcome of W negates the outcome of X; mathematically speaking, x+ w = 0. Proposition 1 (Activation Noise Generality Proposition): When the noise at a given neuron comes from the negate random variable Z with respect to the signal S, i.e., Z ∦ S, the activity noise drops out (the signal of) that neuron. Specifically, the summation of signal and noise becomes zero, s+ z = 0 for all outcomes. Proof: The proof follows from the definition of the negate random variable. This theoretical result implies that activation noise can be considered as a general form of dropout: Fig. 1 visualizes how the activation noise reduces to dropout. In Fig. 
2 we compare the performances of activation noise with dropout with the simulation setting as for Fig. 3. Now we explain how activation noise relates to loss regularization and data augmentation. To that end, we consider the EBM setting leveraging multiple distinct autoencoders for classification: one autoencoder is used for each class. We first write the empirical error without noise in the form of the mean squared error (MSE) loss function as follows: Isθ = ∑ y∈Y ∫ x ∥fθ(x | y)− x∥2pD(x, y)dx = ∑ y∈Y ∑ k ∫ xk [ fkθ (x | y)− xk ]2 pD(x, y)dxk (5) where fθ(x | y) denotes the model parameterized by θ given label y, and the energy is determined by Eθ(x | y) = ∥fθ(x | y) − x∥2. Meanwhile, fkθ (x | y) and xk refer to the kth element of the output of the model and desired target (i.e., the original input), respectively. With activation noise, however, we have Iθ = ∑ y∈Y ∑ k ∫ z ∫ xk [ fkθ (x | y,z)− xk ]2 pD(x, y)q(z)dxkdz (6) where fkθ (x | y,z) denotes the kth element of the output of the noisy model. The noise z comes from distribution q(z) during training. Expanding the network function fkθ (x | y,z) into the signal response fkθ (x | y) and the noisy response hkθ(x | y,z) to give fkθ (x | y,z) = fkθ (x | y) + hkθ(x | y,z), (7) we can write Eq. 6 as Iθ = Isθ + I z θ , where I z θ encapsulates the noisy portion of the loss given by Izθ = ∑ y∈Y ∑ k ∫ z ∫ xk [ hkθ(x | y,z)2 + 2hkθ(x | y,z)(fkθ (x | y)− xk) ] pD(x, y)q(z)dxkdz. (8) The term Izθ can be viewed as the loss regularizer, specifically Tikhonov regularizer, as has been shown in (Bishop, 1995). In general, not just for MSE as we have done, but for any arbitrary loss function v(fθ(x | y,z),x), we can re-write it as a combination of the losses pertaining to the signal component plus the noise component and then the same result would hold. As suggested before, the second term of error, besides being a regularizer, can also be seen as the loss for an auxiliary dataset that is interleaved with the primary dataset during training. Hence, this way it can be said that the noise augments the dataset (Goodfellow et al., 2016). This section concerned how activation noise relates to dropout, loss regularization, and data augmentation. As alluded to, activation noise during training is beneficial for the performance; but, what if we used activation noise also during inference? We will now answer this question: for EBM, activation noise can be also beneficial in inference. Specifically, we will first present the role of activation noise during inference in the EBM framework and then present the experiments yielding surprising results demonstrating the effectiveness of activation noise during inference. 3.2 INFERENCE WITH ACTIVATION NOISE Consider the inference phase assuming training has been done (i.e., when θ∗ has been determined). Given a test data point x, we estimate the energy of our trained model Eθ∗(x | y,z) with many different noise realizations z following inference noise distribution r(z) which are averaged over to produce the energy. Therefore, the noise distribution r(z) at the inference can be considered as a sampler. Probabilistically speaking, we measure the expectation of the energy Eθ∗(x | y,z) with respect to distribution r(z) as follows: Ēθ∗(x | y) = Ez [Eθ∗(x | y,z)] = ∫ z Eθ∗(x | y,z)r(z)dz. (9) The variance of Eθ∗(x | y,z), is determined by σ2 = ∫ z Eθ∗(x | y,z)2r(z)dz − Ēθ∗(x | y)2. In practice, because the calculation of the integral in Eq. 
9 is intractable, we perform the inference via Êθ∗(x | y) = 1n ∑n i=1 Eθ∗(x | y,z(i)) where the integral in Eq. 9 is numerically approximated via sampling from the distribution r(z) which generates the noise samples {z(i)}. Finally, given an input x, the class label predicted by our EBM is the class with the smallest energy at x, we find the target class via ŷ = argmin y′∈Y Êθ∗ (x | y′) . This approach of classification is a common formulation for making inference which is derived from Bayes’ rule. There is one difference, however, and that is in EBM classification we seek the class whose energy is the minimum as the class that the input data belongs to. In the end note that, as discussed in this section, activation noise not only generalizes dropout during training, as a regularization scheme, but also offers the opportunity of sampling the model during inference (to minimize the inference error) possibly using a wide range of noise distributions as the sampler; this is advantageous to dropout that is only applicable during the training. In the next section, we will first present the simulation settings detailing the architectures used to conduct the experiments and then the consequent results. 4 SIMULATION SETTING AND RESULTS 4.1 SIMPLE REPRESENTATIVE SIMULATION RESULTS For the purpose of illustration, we first report only a part of our simulation setting pertaining to only one dataset, a part that is representative of the general theme of our results in this paper. This suffices for initiating the motivation required for further theoretical and empirical studies. In the next subsections, we will provide a comprehensive account of our experiments. In this simulation, we trained 10 autoencoders, one for each class of CIFAR-10 dataset. Our autoencoders incorporate the noise z̄ which is a random variable following the standard Gaussian distribution (zero mean and unit variance) that is multiplied by the scalar α for each neuron of each layer of the model as presented in Eq. 1. We use convolutional neural networks (CNNs) with channel numbers of [30, 73, 100] for the encoder, and the mirror of it for the decoder. The stride is 2, padding is 1, and the window size is 4×4. We used MSE as the loss function although binary cross-entropy (BCE) has also been tried and produced similar results. The number of samples n is set to 1,000. This simulation setting is of interest in incremental learning where only one or a few classes are present at a time (van de Ven et al., 2021). Moreover, doing classification via generative modeling fits in the framework of predictive coding which is a model of visual processing which proposes that the brain possesses a generative model of input signal with prediction loss contributing as a mean for both learning and attention (Keller & Welling, 2021). Fig. 3 shows the performances (classification accuracies) in terms of the scalar α of Gaussian noise during both training and test. The omnipresent activation noise which is present in both training and inference exhibits two surprising behaviors in our simulations: (i) it significantly improves the performance (about 200%), which is unprecedented among similar studies in the literature. (ii) The other interesting observation is that always being on or close to the diagonal leads to the best results. These observations will be discussed. 4.2 THEORIES FOR THE SIMULATION RESULTS We first ask the question that why is noise so effective in improving the performance? 
4.2 THEORIES FOR THE SIMULATION RESULTS We first ask: why is noise so effective in improving the performance? For that, we need to reiterate that noise plays two different roles, at (i) training and (ii) inference. For the first role, as alluded to before, what activation noise actually does during training is regularization. We suggested that activation noise is indeed a general case of dropout; we later demonstrated that activation noise is a form of loss regularization and also an instance of data augmentation. All these categories fall under the umbrella of regularization. Meanwhile, we learned that, during the test phase, activation noise performs sampling, which reduces the inference error. In aggregate, therefore, regularization hand in hand with sampling accounts for this performance boost (about 200%). The other interesting observation in our simulations is that, to attain the highest performance, it is best for the noise scalar $\alpha$ (and, in general, for the distributions) of the training and test noise to be equal. This can be clearly observed in Fig. 3 (and also in all other numerical results presented later). Why is there such a strong correlation? Shouldn't the two roles of noise during training and test be mutually independent? Where and how do these two roles link to each other? Upon empirical and theoretical investigation, we concluded that the interrelation between the noise during training and test can be characterized and proved in the following theorem.

Theorem 1 (Noise Distribution Must be the Same During Training and Test): If the inference noise (i.e., the sampler) $r(z)$ follows the same distribution as the noise during training $q^*(z)$, i.e., $r^*(z) = q^*(z)$, the loss of the inference is minimized.

Proof: During inference, to optimize the performance of the EBM model, the objective is to find the distribution $r^*(z)$ for the activation noise that minimizes the loss specified as follows:

$$r^*(z) = \operatorname*{argmin}_{r(z)} \mathcal{L}_{\mathrm{ML}}(\theta^*, r(z)), \qquad \mathcal{L}_{\mathrm{ML}}(\theta^*, r(z)) = \mathbb{E}_{(x,y)\sim p_D;\; z\sim r(z)}\left[ -\log p_{\theta^*}(x \mid y, z) \right]. \tag{10}$$

Meanwhile, from Eq. 4 we know that training is performed to minimize the loss $\mathcal{L}_{\mathrm{ML}}(\theta^*, q^*(z))$ with the distribution $q^*(z)$ as the activation noise. Therefore, during inference, if we set the activation noise $r^*(z)$ the same as $q^*(z)$ in Eq. 10, the loss will again be minimal. □

This implies that, during training, the distribution of our trained model is shaped by the distribution of the noise $q(z)$ through the noise term $I^z_\theta$ of the loss (Eq. 8): the network function learns the distribution of the signal in the presence of a certain noise distribution. During inference, when the noise distribution is the same as in training, the performance is best, and as soon as there is a discrepancy between the two distributions, the performance degrades.
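A sketch of the sweep that produces the Fig. 3-style heatmap is given below. It reuses NoisyAE and classify from the earlier sketch, and the per-class training loaders (loaders) and test_loader are assumed to exist; it is an illustrative driver under those assumptions, not the authors' experiment code.

    import numpy as np
    import torch

    alphas = [0.0, 0.5, 0.9, 1.5]                # representative noise scalars
    grid = np.zeros((len(alphas), len(alphas)))  # rows: train alpha; cols: test alpha

    for i, a_tr in enumerate(alphas):
        models = [NoisyAE(alpha=a_tr) for _ in range(10)]
        for m, loader in zip(models, loaders):   # one autoencoder per class (assumed loaders)
            opt = torch.optim.Adam(m.parameters(), lr=1e-3)
            for x, _ in loader:                  # MSE training under noise, Eq. 6
                loss = ((m(x) - x) ** 2).mean()
                opt.zero_grad(); loss.backward(); opt.step()
        for j, a_te in enumerate(alphas):
            for m in models:
                m.alpha = a_te                   # inference-noise scalar
            correct = total = 0
            for x, y in test_loader:             # assumed test loader
                pred = classify(models, x, n=100)
                correct += (pred == y).sum().item(); total += y.numel()
            grid[i, j] = correct / total
    # Theorem 1 predicts the largest entries on the diagonal (a_tr == a_te).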
4.3 COMPREHENSIVE SIMULATION RESULTS FOR GAUSSIAN NOISE We present extensive simulation results for various datasets (using Google Colab Pro+), considering Gaussian activation noise. We used the same architecture as the one presented in Section 4.1 for all five of our datasets: CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), CalTech-101 (Fei-Fei et al., 2004), Flower-102 (Nilsback & Zisserman, 2008), and CUB-200 (Wah et al., 2011). Given that the volume of our experiments is large, for computational tractability we slimmed down the larger datasets from about 100 (or 200) classes to 20 classes. Our comprehensive simulation results explore the performance of our proposed EBM framework across five dimensions: (i) for five datasets we examined the joint impacts of various noise distributions. The noise scalar $\alpha$ varies during both (ii) training and (iii) test time within the range $[0, 2)$, taking 20 values with a step size of 0.1. However, we do not report all of them due to the page limit; instead, we selectively present values that are representative of the two key observations we intend to highlight: these values are (0.5, 0.5), (0.5, 0.9), (0.5, 1.5), (0.9, 0.5), (0.9, 0.9), (0.9, 1.5), and (1.5, 1.5), where the pair (·, ·) denotes the scalar $\alpha$ of the noise at training and test. (iv) The number of samples $n$ is set to $10^4$, except for the cases of no/zero test noise, in which $n = 1$, because when the scalar $\alpha$ of the noise during test is 0 (no noise), as in rows (0.0, 0.0) and (0.5, 0.0), the corresponding accuracy would not vary. Finally, (v) we assess the performances of different noise distributions by exploring all of their permutations. In our simulations we resize all images in our datasets to 32×32. The number of epochs is set to 100. For the optimizer, Adam is chosen with the default learning rate of 0.001 and the default momentum (Kingma & Ba, 2014). The minibatch size is 256. We ran 10 experiments for each number reported in Table 1, and we present the means as well as the standard errors of the means (SEMs) over these runs.

The first observation we intend to emphasize is that this very activation noise can contribute to a considerable improvement in performance, as high as 200%. This can be discerned in Table 1 by comparing the row (0.0, 0.0), pertaining to the case with no noise, with the row (0.9, 0.9): when going from the former to the latter, the accuracy jumps from 20.48% to 65.88% for the CIFAR-10 dataset with $n = 10{,}000$ samples. The same phenomenon can also be observed for all other datasets. Compared with previous studies in the literature on the effectiveness of noise, this level of performance enhancement, acquired by injecting a simple noise, is unprecedented if not unimaginable. This performance enhancement, as we theorized and proved in Theorem 1, results from the combination of both regularization and sampling, not independently but in a complicated, interdependent fashion (see Fig. 3 to discern the interdependency). The second observation signifies the importance of balance between the noise injected at training and test: the noise distribution at test, $r(z)$, ought to follow the distribution of the noise during training, $q(z)$, so that the loss of the inference is minimized, as discussed in Theorem 1. This can be noticed when contrasting the balanced rows (0.5, 0.5), (0.9, 0.9), and (1.5, 1.5) with the rest, which are unbalanced. Interestingly, even the row (0.0, 0.0), which enjoys no noise, outperforms the unbalanced rows (0.0, 0.5), (0.5, 0.9), (0.5, 1.5), and (0.9, 1.5). The third observation is the asymmetry between having unbalanced noise at training versus test. Too large a scalar $\alpha$ for the noise during test has a far more negative impact on the performance than the reverse. For example, consider rows (0.5, 0.0) and (0.0, 0.5): the former (large noise scalar $\alpha$ at training) delivers about 100% higher accuracies than the latter, which bears a high noise scalar $\alpha$ at test. This pattern, without exception, repeats in all rows where the noise scalar during test is larger than that of the training noise, such as (0.0, 0.5), (0.5, 0.9), (0.5, 1.5), and (0.9, 1.5); all these cases yield the poorest performances, even worse than (0.0, 0.0). These observations (as well as the preceding two) are better discerned in Fig. 3, which portrays the heatmap of the performance results.
4.4 INVESTIGATING OTHER NOISE DISTRIBUTIONS We now evaluate the performances of alternative noise distributions. We consider both types of distributions, symmetric and asymmetric: (i) Cauchy, (ii) uniform, (iii) Chi, (iv) lognormal, (v) t-distribution, and (vi) exponential. For noise distributions that can be either symmetric or not (e.g., uniform and Cauchy), we explore parameters that keep them symmetric around zero, because, based on our experiments (as we will see), we are convinced that symmetric distributions (i.e., symmetric about zero) consistently yield higher performances. For the different noise distributions, the underlying rule that justifies the results can be summarized as follows: when the activation noise $z$ is added to the signal $s$, it is conducive to the performance as long as it does not entirely distort the signal. In other words, the signal-to-noise ratio (SNR) can be beneficially decreased down to a threshold that delivers the peak performance; after that, the performance begins to degrade. Given that $t = s + z = s + \alpha\bar{z}$, and $s$ is constant, there are two factors that determine the value of the SNR (given by $\mathrm{var}[s]/\mathrm{var}[z]$): (i) the distribution of $\bar{z}$, particularly how narrowly (or broadly) the density of the random variable is distributed (i.e., $\mathrm{var}[\bar{z}]$); and (ii) how large the scalar $\alpha$ is. Fig. 4 presents the profiles of the different probability distributions; note the breadths of the various probability distributions, as they play a vital role in how well they perform. In the following we discuss our results with respect to the value of the SNR. As we can see in Fig. 4, (i) adopting symmetric probability distributions as activation noise substantially outperforms asymmetric ones: hence, we observe that the Cauchy, Gaussian, uniform, and t distributions consistently yield higher accuracies than the Chi, exponential, and lognormal distributions. This is because symmetric sampling can more effectively explore the learned manifold of the neural network. (ii) The performances of the different noise distributions can be justified with respect to the value of the SNR as follows: as we reduce the SNR by increasing the noise scalar $\alpha$, the performance rises to a peak and then falls; this is valid for all the probability distributions. The differences in the slopes of the various distributions (in Fig. 4, right) originate from the differences in the profiles of the noise distributions: the wider the noise profile (the larger $\mathrm{var}[\bar{z}]$), the sooner the SNR drops, and the earlier the rise-and-fall pattern occurs. For example, in Fig. 4, because the Gaussian distribution is narrower than the Cauchy, its rise-and-fall pattern happens with a small delay, since its SNR drops with a latency.
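For reference, a small sketch (ours) of how the noise families in this section can be drawn with torch.distributions is shown below. The parameter values are placeholders rather than the paper's settings, and since torch has no Chi distribution, the square root of a Chi2 draw stands in for it.

    import torch
    from torch import distributions as dist

    samplers = {
        "gaussian":    dist.Normal(0.0, 1.0),       # symmetric
        "cauchy":      dist.Cauchy(0.0, 1.0),       # symmetric, heavy-tailed
        "uniform":     dist.Uniform(-1.0, 1.0),     # symmetric, width 2
        "student_t":   dist.StudentT(df=3.0),       # symmetric; df = 1 gives Cauchy
        "lognormal":   dist.LogNormal(0.0, 1.0),    # asymmetric
        "exponential": dist.Exponential(rate=1.0),  # asymmetric
    }

    def draw(name, shape):
        if name == "chi":                           # Chi(k) = sqrt(Chi2(k))
            return torch.sqrt(dist.Chi2(df=3.0).sample(shape))
        return samplers[name].sample(shape)

    s = torch.zeros(4, 8)                           # a raw activation signal
    alpha = 0.9
    noisy = {name: s + alpha * draw(name, s.shape)  # t = s + alpha * z_bar, Eq. 1
             for name in list(samplers) + ["chi"]}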
5 CONCLUSION In this paper, specifically for an EBM setting, we studied the impacts of activation noise during training, broached its role at inference time, and scrutinized the relationship between activation noise at training and inference. We proved that, during training, activation noise is a general form of dropout; we discussed the relationship between activation noise and loss regularization/data augmentation. We studied the empirical and theoretical aspects of activation noise at inference time and proved that adopting noisy activation functions during inference can be interpreted as sampling the model. We also proved and demonstrated that when the sampler follows the same distribution as the noise during training, the loss of the inference is minimized and the performance is therefore optimized. To assess our theoretical results, we performed extensive experiments exploring different datasets, noise scalars during training and inference, and different noise distributions.

A MORE DETAILS FOR THE PERFORMANCE RESULTS OF VARIOUS NOISE DISTRIBUTIONS In the following we provide the specifications of our probability distributions. As shown in Fig. 5, for the Cauchy distribution $g(\bar{z}; z_0, \gamma)$ we set $z_0$, which characterizes the mean of the Cauchy distribution, equal to zero, and explore different values of $\gamma$. In the same way, for the uniform distribution the mean is set to zero: while the uniform distribution originally has two parameters characterizing the start and end of the random variable, in our simulations, because the uniform noise is desired to be symmetric, we define only one parameter characterizing the uniform distribution, denoted by $\omega$, referring to its width. For the Chi distribution $g(\bar{z}; k)$ we assess its performance for different values of $k$, whereas for the lognormal distribution $g(\bar{z}; \mu, \sigma)$ the value of $\mu$ is set to zero and $\sigma$ is varied. For the t-distribution $g(\bar{z}; \nu)$, different values of $\nu$ are explored; it is worth mentioning that for $\nu = 1$ the t-distribution reduces to the Cauchy distribution. Finally, for the exponential distribution $g(\bar{z}; \lambda)$, we evaluate the performance for different values of $\lambda$. Our experiments are performed 10 times via multiple random seeds, and we report the means (± SEMs) over these experiments. Overall, based on the performances across all values of the noise scalar $\alpha$ (as shown in Fig. 6), we conclude that the best noise distribution is also the most popular one, the Gaussian distribution, perhaps because it has the largest area under the curve across different values of the noise scalar $\alpha$. Furthermore, it is clear that the distributions similar to the Gaussian distribution deliver the best performances, whereas for dissimilar ones the performance becomes worse. Meanwhile, we found that various parameters (standard deviations $\sigma$) for the Gaussian distribution do not offer convincing improvements and only hasten or delay the occurrence of the rise-and-fall pattern of the classification accuracy.

B IMPACT OF THE NUMBER OF SAMPLES In Fig. 7 we demonstrate the impact of the number of samples for the standard Gaussian noise $\bar{z}$ with different noise scalar values $\alpha$ during training and test. It can be seen that the accuracy rises almost logarithmically as the number of samples increases. Note that these results pertain to the CIFAR-10 dataset. In Table 2 we provide the comprehensive form of Table 1, including the classification accuracies for different numbers of samples $n$.
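The diminishing returns from larger $n$ have a simple Monte-Carlo explanation: the estimator $\hat{E}_{\theta^*}$ averages $n$ i.i.d. draws, so its spread shrinks as $1/\sqrt{n}$. The toy sketch below (ours; the quadratic "energy" is only a stand-in for $E_{\theta^*}$) illustrates this.

    import torch

    torch.manual_seed(0)

    def noisy_energy(x, alpha=0.9):          # stand-in for E(x | y, z)
        z = alpha * torch.randn_like(x)
        return ((x + z) ** 2).sum()

    x = torch.randn(16)
    for n in [1, 10, 100, 1000]:
        # 200 independent n-sample estimates of the mean energy (Eq. 9)
        estimates = torch.stack([
            torch.stack([noisy_energy(x) for _ in range(n)]).mean()
            for _ in range(200)])
        print(n, float(estimates.std()))     # spread falls roughly as 1/sqrt(n)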
C DIFFERENT NOISE DISTRIBUTIONS DURING TRAINING AND INFERENCE We examine the performances pertaining to combinations of nonidentical noise distributions: adopting two different distributions, one for training and the other for inference. The parameters of the different noise distributions are outlined in Table 3. The noise scalar $\alpha$ is set to one for training and test. The results of the experiment are displayed in Fig. 8, which demonstrates the performance for each pair of noise distributions in a 7×7 heatmap grid: the empirical results confirm the theoretical conclusion proposed in Theorem 1, namely that the optimal performance during inference is acquired when the same noise distribution is used during training and inference. In Fig. 8 it can be seen that the best results pertain to our four symmetric noise distributions, whereas the worst occur when an asymmetric noise is adopted for training and a symmetric noise for inference (to sample). This result is in accordance with our third observation discussed in Section 4.3 and also further corroborates Theorem 1. In Fig. 8, we can also see that if we use symmetric noise during training and asymmetric noise during inference, the performance is not as poor as in the reverse case. This was also observed and explained in Section 4.4.

D LAYER-SELECTIVE NOISE FOR THE EBM SETTING So far, we have injected noise into all the layers of our neural networks. In this section, we study the case where we inject noise into only a portion of the layers of the generative EBM model. We design and compare five schemes for noise injection: (i) full noise (the default), where noise is injected into all layers (denoted by n-full); (ii) and (iii) odd/even noise, where noise is injected into all odd/even layers (denoted by n-odd and n-even); and (iv) and (v) encoder/decoder noise, where noise is injected into the encoder/decoder layers (denoted by n-enc and n-dec). It is worth mentioning that we used standard Gaussian noise with scalar $\alpha = 1$, corresponding to the peak performance, for both training and test. Fig. 9 presents the accuracies of our neural network injected with all the noise schemes presented above. As can be seen, injecting noise into all layers yields higher performances than the alternatives; n-even and n-odd rank next, while n-enc and n-dec are the worst. Fig. 9 also jointly investigates the impact of injecting noise into different layers of the neural network at both training and inference. We observe results akin to those in Fig. 3, once again but now per layer: according to our results, the noise injected into a specific layer during training and inference should follow the same distribution. For example, when noise is injected into the even layers (n-even) of the neural network during training, none of the other noise-injection schemes during inference could exceed the performance of n-even noise injection. Also, not injecting noise into a specific layer during training while injecting noise into that layer during inference significantly reduces the performance. Accordingly, inspired by the above results, we propose Corollary 1, which generalizes Theorem 1.

Corollary 1 (General Noise Distribution for the Sampler During Test): If the sampler $r(z, v)$ follows both the same placement and the same distribution of the noise during training $q^*(z, v)$, i.e., $r^*(z, v) = q^*(z, v)$, where $v$ encodes the placement of the noise among the layers, the loss of the inference is minimized. Clearly, when the placement parameter $v$ is selected such that all layers are included for noise injection, Corollary 1 reduces to Theorem 1.
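Below is a minimal sketch (ours) of the placement parameter $v$ from Corollary 1, expressed as a boolean mask over layers; it extends the NoisyAE class from the earlier sketch, and the odd/even indexing convention is our assumption.

    import torch

    class PlacedNoisyAE(NoisyAE):
        """NoisyAE whose noise is injected only where the placement mask is True."""
        def __init__(self, alpha=1.0, placement=None):
            super().__init__(alpha=alpha)
            n_layers = len(self.enc) + len(self.dec)
            self.placement = placement if placement is not None else [True] * n_layers

        def forward(self, x):
            h = x
            for i, layer in enumerate(list(self.enc) + list(self.dec)):
                h = torch.relu(layer(h))
                if self.placement[i]:
                    h = h + self.alpha * torch.randn_like(h)   # v selects this layer
            return h

    n = 6                                           # 3 encoder + 3 decoder layers
    schemes = {
        "n-full": [True] * n,
        "n-odd":  [i % 2 == 0 for i in range(n)],   # layers 1, 3, 5 (1-indexed)
        "n-even": [i % 2 == 1 for i in range(n)],   # layers 2, 4, 6
        "n-enc":  [True] * 3 + [False] * 3,
        "n-dec":  [False] * 3 + [True] * 3,
    }
    models = {name: PlacedNoisyAE(placement=v) for name, v in schemes.items()}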
E ACTIVATION NOISE IN DISCRIMINATIVE MODELING So far, we have considered only generative EBM modeling. In this section, we examine the impact of activation noise on the performance of classifiers relying on discriminative modeling. We conduct our study with two popular datasets, CIFAR-10 and CIFAR-100, on seventeen architectures, as listed in Table 4. In this experiment, we train for 30 epochs; the optimizer is Adam with a learning rate of 0.001 and the default momentum, and the learning-rate scheduler is cosine annealing. Fig. 10 demonstrates the results of our experiments: as we can see in Fig. 10, which exhibits the results on the CIFAR-10 dataset for the different architectures, the performance reliably deteriorates as we increase the scalar of the omnipresent (standard Gaussian) activation noise, even with balanced noise and $n = 1000$ samples. Meanwhile, in Table 5, we report the performances of all architectures on both CIFAR-10 and CIFAR-100 with three noise levels: (i) no noise, $\alpha = 0.0$; (ii) mild noise, $\alpha = 0.5$; and (iii) strong noise, $\alpha = 1.0$. Based on our observations, we are convinced that adding even a mild level of balanced noise is detrimental to the performance of the discriminative-modeling scenario, across architectures and datasets. To also investigate the scenario of unbalanced noise injection, we provide Fig. 11, which shows the performance heatmap of the discriminative-modeling classifier using the ResNet50 architecture on the CIFAR-10 dataset as we increase the noise scalar $\alpha$ during both training and inference. We can discern that in the unbalanced-noise case the performance drop is more severe than in the balanced case. Moreover, we made three noteworthy observations in Fig. 11, reminiscent of the observations made in Section 4. (i) In Fig. 11, the diagonal represents balanced noise injection. It can be seen that the performance consistently deteriorates when moving from the top-left entry (no noise) to the bottom-right entry (intense noise); this is exactly the opposite of the observation made in Fig. 3, where injecting balanced noise reliably and significantly increased the performance. (ii) We observe that in Fig. 11 unbalanced noise worsens the performance, which is in accordance with the observation in Fig. 3. Finally, (iii) just as in Fig. 3, there is an asymmetry in performance under unbalanced noise: strong noise during inference with weak noise during training delivers noticeably worse performances than the reverse. Our results for the CIFAR-10 dataset in Figs. 10 and 11 carry over to the CIFAR-100 dataset; hence, we refrain from reporting the results pertaining to CIFAR-100 in separate figures. From the results reported in Figs. 10 and 11, and also in Table 5, it becomes clear that, in contrast to generative modeling, in discriminative modeling the activation noise (injected into all layers in both training and inference) indeed worsens the performance.
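For concreteness, one simple way to retrofit such omnipresent activation noise onto an off-the-shelf discriminative classifier is with forward hooks, as in the sketch below (ours; hooking every ReLU is our assumption, and ResNet-50 follows the text).

    import torch
    import torchvision

    alpha = 0.5
    model = torchvision.models.resnet50(num_classes=10)

    def add_noise(module, inputs, output):
        # Returning a value from a forward hook replaces the module's output.
        return output + alpha * torch.randn_like(output)

    for m in model.modules():
        if isinstance(m, torch.nn.ReLU):
            m.register_forward_hook(add_noise)

    x = torch.randn(2, 3, 32, 32)
    print(model(x).shape)                    # torch.Size([2, 10])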
F ON WHY ACTIVATION NOISE IS EFFECTIVE FOR GENERATIVE MODELING BUT NOT DISCRIMINATIVE MODELING We do not have a rigorous mathematical proof showing why activation noise is effective for generative modeling but not for discriminative modeling. However, we can conjecture the reason. To this end, we need to take a step back and ponder the function of neural networks. As we know, in the broad mathematical sense, neural networks are best understood as approximating an often nonlinear target function $j(\cdot)$ that maps an input variable $X$ to an output variable $Y$, where $X$ and $Y$ come from the training data $D$. Specifically, the aim is to learn the target function $j(\cdot)$ from the training data $D$ by having $j(\cdot)$ perform curve-fitting on the training data $D$. In both the discriminative and the generative modeling scenario, the dimensionality of the input space $X$ is the same; however, when it comes to the output space $Y$, the difference emerges: in a discriminative modeling setting, the output space has far fewer dimensions than that of generative modeling. For example, in the CIFAR-10 dataset, the output space $Y$ of the generative model has 32×32 = 1024 dimensions, whereas the discriminative model has only 10. Meanwhile, we know that as soon as the number of dimensions grows, the curse of dimensionality comes into play: our training data $D$ becomes exponentially insufficient; the data become sparsely distributed in the space, and instead of the desired smooth, continuous data manifold, we have patches of data scattered in the space. This makes it hard for $j(\cdot)$ to map the input to the output, because neural networks, as hinted at by the Universal Approximation Theorem (Hornik et al., 1989), strictly speaking cannot approximate, and in practice severely struggle to approximate, functions whose samples in the input space are not smooth. The activation noise, as proved in the main text where we linked activation noise with data augmentation, imputes the primary dataset $D$ with an auxiliary dataset $D'$ that smooths the data manifold, thereby mitigating the curse of dimensionality; however, when the output has fewer dimensions, as in the case of discriminative modeling, this problem is less pronounced. That said, one question might arise: the explanations delivered so far justify adopting activation noise during training as a regularization scheme; how does activation noise during inference come into play and interact with the activation noise during training? When we make the manifold of our dataset $D$ more continuous and smooth during training by augmenting it with $D'$, we are instructing the neural network to learn an enhanced manifold instead of the original one: a new one that is smoother, more continuous, and stretched. During inference, we compute the likelihood of a given sample $x$ under our model; if the noise scalar is considerably larger than that of training time, it stretches the cloud of our sample points; the manifold learned by the neural network has not seen the far points of that cloud and has not included them during training; hence, the neural network produces unreliable, noisy likelihoods for these outlier points of the cloud. This is why the performance severely deteriorates when the noise at inference has a larger scalar than that of training (even worse than no sampling/noise). In other words, a larger noise scalar $\alpha$ at inference causes the cloud of noise samples to fall outside the convex hull of the dataset on which the neural network was trained. Eventually, as we presented in Theorem 1, the sampling cloud must have the same shape/distribution as the training cloud of noise that is used to smooth the dataset $D$.
1. What is the focus and contribution of the paper regarding EBM in classification tasks?
2. What are the strengths and weaknesses of the proposed method, particularly in its mathematical formulation and experimental presentation?
3. Do you have any questions or concerns about the methodology used in the paper, such as the application of Bayes' rule or the decomposition of x with independent x_k?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any interesting applications of the proposed method that the reviewer would like to explore further?
Summary Of The Paper

The authors propose a method to add noise in EBM for classification tasks. The likelihood and loss functions are derived based on the injected noise z. The paper claims that adding noise during training includes dropout as a special case, and proposes to add the same amount of noise during inference. On image classification tasks, the paper shows increased accuracy when adding the proper amount of noise.

Strengths And Weaknesses

Strengths:
- The paper has a good overview of the literature.
- Adding noise to EBM seems to be a novel approach.
- Experiments show huge improvement on classification results.

Weaknesses:
- The title is too broad and does not match the paper. Noise is added in both training and inference. Also, since the paper focuses on EBM applied to classification tasks, the title should reflect that.
- The presentation of the methodology is confusing. It would be nice to have a detailed paragraph on how to mathematically perform classification with EBM (or ten AEs for these classes). I notice that you perform argmin_y E(x | y) in the paragraph after Eq. (10). Is it just intuition, or is it derived from Bayes' rule applied to p(y | x)?
- Def. 1 and Prop. 1 seem straightforward and do not convey much insight. In practice we do not expect the dropout case to happen. What theory do you have for other types of noise?
- Eq. (5): why use MSE for the energy? Why can you decompose x with independent x_k?
- Thm. 1 should be in the methodology section rather than the experiment section. A theorem concluding empirical findings is much weaker than one that predicts. In addition, this theorem seems straightforward and I do not find anything interesting in it.
- The experiments do not show interesting results. How is the task different from standard classification tasks? If it is not, why are we interested in this approach instead of just using standard methods? What are some interesting applications of the proposed method?

Clarity, Quality, Novelty And Reproducibility

The paper has some clarity and quality concerns based on the comments above. There is some novelty in the sense that adding noise to EBM is a new idea. The paper has code in the supplementary file.
ICLR
Title ON INJECTING NOISE DURING INFERENCE

Abstract We study activation noise in a generative energy-based modeling (EBM) setting during training for the purpose of regularization. We prove that activation noise is a general form of dropout. Then, we analyze the role of activation noise at inference time and demonstrate that it amounts to sampling the model. Thanks to the activation noise, we observe an improvement of about 200% in performance (classification accuracy). Further, we not only discover but also prove that the best performance is achieved when the activation noise follows the same distribution during both training and inference. To explicate this phenomenon, we provide theoretical results that illuminate the roles of activation noise during training and inference and their mutual influence on the performance. To further confirm our theoretical results, we conduct experiments on five datasets and seven distributions of activation noise.

1 INTRODUCTION Whether it is for performing regularization (Moradi et al., 2020) to mitigate overfitting (Kukacka et al., 2017) or for ameliorating the saturation behavior of the activation functions, thereby aiding the optimization procedure (Gulcehre et al., 2016), injecting noise into the activation functions of neural networks has been shown to be effective (Xu et al., 2012). Such activation noise, denoted by $z$, is added to the output of each neuron of the network (Tian & Zhang, 2022) as follows:

$$t = s + z = f\Big(\sum_i w_i x_i + b\Big) + \alpha \bar{z} \tag{1}$$

where $w_i$, $x_i$, $b$, $s$, $f(\cdot)$, $\alpha$, $\bar{z}$, and $t$ stand for the $i$th element of the weights, the $i$th element of the input signal, the bias, the raw (un-noisy) activation signal, the activation function, the noise scalar, the normalized noise (divorced from its scalar $\alpha$) originating from any distribution, and the noisy output, respectively. Studying this setting of noisy units is of significance because it resembles how neurons of the brain learn and perform inference in the presence of noise (Wu et al., 2001). In the literature, training with input/activation noise has been shown to be equivalent to loss regularization: a well-studied regularization scheme in which an extra penalty term is appended to the loss function. Also, injecting noise has been shown to keep the weights of the neural network small, which is reminiscent of other practices of regularization that directly limit the range of the weights (Bishop, 1995). Furthermore, injecting noise into input samples (or activation functions) is an instance of data augmentation (Goodfellow et al., 2016). Injecting noise practically expands the size of the training dataset, because each time training samples are exposed to the model, random noise is added to the input/latent variables, rendering them different every time they are fed to the model. Noisy samples can therefore be deemed new samples drawn from the domain in the vicinity of the known samples: they make the structure of the input space smooth, thereby mitigating the curse of dimensionality and its consequent patchiness/sparsity of the datasets. This smoothing makes it easier for the neural network to learn the mapping function (Vincent et al., 2010). In the existing works, however, the impacts of activation noise have been neither fully understood during training nor broached at inference time, not to mention the lack of study on the relationship between activation noise at training and inference, especially for generative energy-based modeling (EBM).
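A minimal PyTorch sketch (ours; the layer sizes and names are illustrative) of the noisy unit in Eq. 1 looks as follows.

    import torch

    class NoisyUnit(torch.nn.Module):
        """One noisy layer: t = f(sum_i w_i x_i + b) + alpha * z_bar (Eq. 1)."""
        def __init__(self, d_in, d_out, alpha=0.5, f=torch.relu):
            super().__init__()
            self.linear = torch.nn.Linear(d_in, d_out)   # w_i x_i + b
            self.alpha, self.f = alpha, f

        def forward(self, x):
            s = self.f(self.linear(x))             # raw activation signal s
            z = self.alpha * torch.randn_like(s)   # z = alpha * z_bar, z_bar ~ N(0, 1)
            return s + z                           # noisy output t

    unit = NoisyUnit(4, 3)
    x = torch.randn(2, 4)
    print(unit(x))
    print(unit(x))    # outputs differ for the same input: the unit is stochastic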
In this paper, we study these issues: for the EBM setting, for the first time, we examine the empirical and theoretical aspects of activation noise not only during training but also at inference, and discuss how these two roles relate to each other. We prove that, during training, activation noise (Gulcehre et al., 2016) is a general form of dropout (Srivastava et al., 2014). This is interesting because dropout has been widely adopted as a regularization scheme. We then formulate and discuss the relationship between activation noise and two other key regularization schemes: loss regularization and data augmentation. We also prove that, during inference, adopting activation noise can be interpreted as sampling the neural network. Accordingly, with activation noise during inference, we estimate the energy of the EBM. Surprisingly, we discover that there is a very strong interrelation between the distribution of activation noise during training and inference: the performance is optimized when the two follow the same distribution. We also prove how to find the distribution of the noise during inference that minimizes the inference error, thereby improving the performance by as much as 200%. Overall, our main contributions in this paper are as follows:

• We prove that, during training, activation noise is a general form of dropout. Afterward, we establish the connections between activation noise and loss regularization/data augmentation. With activation noise during inference as well as training, we observe about 200% improvement in performance (classification accuracy), which is unprecedented. Also, we discover and prove that the performance is maximized when the noise in the activation functions follows the same distribution during both training and inference.

• To explain this phenomenon, we provide theoretical results that illuminate the two strikingly distinct roles of activation noise during training and inference. We later discuss their mutual influence on the performance. To examine our theoretical results, we provide extensive experiments for five datasets, many noise distributions, various values of the noise scalar $\alpha$, and different numbers of samples.

2 RELATED WORKS Our study touches upon multiple domains: (i) neuroscience, (ii) regularization in machine learning, (iii) generative energy-based modeling, and (iv) anomaly detection and one-class classification. (i) Studying the impact of noise in artificial neural networks (ANNs) can aid neuroscience in understanding the brain's operation (Lindsay, 2020; Richards et al., 2019). From neuroscience, we know that neurons of the brain (as formulated by Eq. 1) never produce the same output twice, even when the same stimuli are presented, because of their internal noisy biological processes (Ruda et al., 2020; Wu et al., 2001; Romo et al., 2003). Having a noisy population of neurons, if anything, seems like a disadvantage (Averbeck et al., 2006; Abbott & Dayan, 1999); how, then, does the brain thwart the inevitable and omnipresent noise (Dan et al., 1998)? We provide new results on top of current evidence that noise can indeed enhance both training (via regularization) and inference (via error minimization) (Zylberberg et al., 2016; Zohary et al., 1994). (ii) Injecting noise into neural networks is known to be a regularization scheme: regularization is broadly defined as any modification made to a learning algorithm that is intended to mitigate overfitting, i.e., to reduce the generalization error but not the training error (Kukacka et al., 2017).
Regularization schemes often seek to reduce overfitting (reduce the generalization error) by keeping the weights of neural networks small (Xu et al., 2012). Hence, the simplest and most common regularization is to append a penalty to the loss function that increases in proportion to the size of the weights of the model. However, regularization schemes are diverse (Moradi et al., 2020); in the following, we review the popular regularization schemes. Weight regularization (weight decay) (Gitman & Ginsburg, 2017) penalizes the model during training based on the magnitude of the weights (Van Laarhoven, 2017); this encourages the model to map the inputs to the outputs of the training dataset such that the weights of the model are kept small (Salimans & Kingma, 2016). Batch normalization regularizes the network by reducing the internal covariate shift: it scales the output of the layer by standardizing the activations of each input variable per mini-batch (Ioffe & Szegedy, 2015). Ensemble learning (Zhou, 2021) trains multiple models (with heterogeneous architectures) and averages the predictions of all of them (Breiman, 1996). Activity regularization (Kilinc & Uysal, 2018b) penalizes the model during training based on the magnitude of the activations (Deng et al., 2019; Kilinc & Uysal, 2018a). Weight constraints limit the magnitude of the weights to be within a range (Srebro & Shraibman, 2005). Dropout (Srivastava et al., 2014) probabilistically removes inputs during training: dropout relies on the rationale of ensemble learning, which trains multiple models. However, training and maintaining multiple models in parallel inflicts heavy computational/memory expenses; alternatively, dropout proposes that a single model can be leveraged to simulate training an exponential number of different network architectures concurrently by randomly dropping out nodes during training (Goodfellow et al., 2016). Early stopping (Yao et al., 2007) monitors the model's performance on a validation set and stops training when the performance starts to degrade (Goodfellow et al., 2016). Data augmentation, arguably the best regularization scheme, creates fake data and augments the training set (Hernandez-Garcia & Konig, 2018). Label smoothing (Lukasik et al., 2020) is commonly used in training deep learning (DL) models, where one-hot training labels are mixed with uniform label vectors (Meister et al., 2020); smoothing (Xu et al., 2020) has been shown to improve both predictive performance and model calibration (Li et al., 2020b; Yuan et al., 2020). Noise schemes inject (usually Gaussian) noise into various components of machine learning (ML) systems: activations, weights, gradients, and outputs (targets/labels) (Poole et al., 2014). In that, noise schemes provide a more generic and therefore more applicable approach to regularization that is invariant to the architectures, losses, and activations of the ML systems, and even to the type of problem at hand (Holmstrom & Koistinen, 1992). As such, noise has been shown effective for the generalization as well as the robustness of a variety of ML systems (Neelakantan et al., 2015). (iii) Our simulation setting in this paper follows that of generative EBM (we briefly say EBM henceforth) (LeCun et al., 2006). EBM (Nijkamp et al., 2020) is a class of maximum likelihood model that maps each input to an un-normalized scalar value named energy.
EBM is a powerful model that has been applied to many different domains, such as structured prediction (Belanger & McCallum, 2016), machine translation (Tu et al., 2020), text generation (Deng et al., 2020), reinforcement learning (Haarnoja et al., 2017), image generation (Xie et al., 2016), memory modeling (Bartunov et al., 2019), classification (Grathwohl et al., 2019), continual learning (Li et al., 2020a), and biologically plausible training (Scellier & Bengio, 2017). (iv) Our EBM setting leverages separate autoencoders (Chen et al., 2018) for classification; in that, it resembles anomaly detection scenarios (Zhou & Paffenroth, 2017; An & Cho, 2015) and also one-class classification (Ruff et al., 2018; Liznerski et al., 2020; Perera & Patel, 2019; Sohn et al., 2020).

3 EBM AND ACTIVATION NOISE DURING TRAINING AND INFERENCE EBM is a class of maximum likelihood model that determines the likelihood of a data point $x \in \mathcal{X} \subseteq \mathbb{R}^D$ using the Boltzmann distribution:

$$p_\theta(x) = \frac{\exp(-E_\theta(x))}{\Omega(\theta)}, \qquad \Omega(\theta) = \int_{x\in\mathcal{X}} \exp(-E_\theta(x))\, dx \tag{2}$$

where $E_\theta(x) : \mathbb{R}^D \to \mathbb{R}$, known as the energy function, is a neural network parameterized by $\theta$ that maps each data point $x$ to a scalar energy value, and $\Omega(\theta)$ is the partition function. To solve the classification task in EBM utilizing activation noise, we adjust the general formulation of EBM in Eq. 2 as follows: given a class label $y$ in a discrete set $\mathcal{Y}$ and the activation noise $z$ during training, for each input $x$, we use the Boltzmann distribution to define the conditional likelihood as follows:

$$p_\theta(x \mid y, z) = \frac{\exp(-E_\theta(x \mid y, z))}{\Omega(\theta \mid y, z)}, \qquad \Omega(\theta \mid y, z) = \int_{x\in\mathcal{X}} \exp(-E_\theta(x \mid y, z))\, dx \tag{3}$$

where $E_\theta(x \mid y, z) : (\mathbb{R}^D, \mathbb{N}, \mathbb{R}^F) \to \mathbb{R}$ is the energy function that maps an input, given a label and noise, to a scalar energy value $E_\theta(x \mid y, z)$, and $\Omega(\theta \mid y, z)$ is the normalization function.

3.1 TRAINING WITH ACTIVATION NOISE During training, we want the distribution defined by $E_\theta$ to model the data distribution $p_D(x, y)$, which we achieve by minimizing the negative log-likelihood $\mathcal{L}_{\mathrm{ML}}(\theta, q(z))$ of the data as follows:

$$(\theta^*, q^*(z)) = \operatorname*{argmin}_{\theta,\, q(z)} \mathcal{L}_{\mathrm{ML}}(\theta, q(z)), \qquad \mathcal{L}_{\mathrm{ML}}(\theta, q(z)) = \mathbb{E}_{(x,y)\sim p_D;\; z\sim q(z)}\left[ -\log p_\theta(x \mid y, z) \right] \tag{4}$$

where $q(z)$ is the distribution of the activation noise $z$ during training. We explicate the relationships between activation noise during training and (i) dropout, (ii) loss regularization, and (iii) data augmentation. We start by presenting theoretical results illuminating that activation noise is a general form of dropout. For that, we define the negate distribution of a primary distribution, based on which we derive the Activation Noise Generality Proposition, stating that dropout is a special case of activation noise.

Definition 1 (Negate Random Variable): We define a random variable $W : \Sigma \to E$ as the negate of a random variable $X : \Delta \to F$, denoted by $X \nparallel W$ (where $\Sigma, \Delta, E, F \subset \mathbb{R}$), if the outcome of $W$ negates the outcome of $X$; mathematically speaking, $x + w = 0$.

Proposition 1 (Activation Noise Generality Proposition): When the noise at a given neuron comes from the negate random variable $Z$ with respect to the signal $S$, i.e., $Z \nparallel S$, the activation noise drops out (the signal of) that neuron. Specifically, the summation of the signal and the noise becomes zero, $s + z = 0$, for all outcomes.

Proof: The proof follows from the definition of the negate random variable. This theoretical result implies that activation noise can be considered a general form of dropout. Fig. 1 visualizes how the activation noise reduces to dropout. In Fig. 2 we compare the performance of activation noise with that of dropout, under the same simulation setting as in Fig. 3.
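Proposition 1 can be checked numerically in a few lines; the sketch below (ours) constructs the negate noise for a random dropout mask and confirms that adding it reproduces standard (unscaled) dropout masking.

    import torch

    torch.manual_seed(0)

    s = torch.randn(5)                                    # raw activation signals
    keep = torch.bernoulli(torch.full_like(s, 0.8))       # dropout keep mask (p = 0.8)
    z = torch.where(keep.bool(), torch.zeros_like(s), -s) # negate noise on dropped units
    t = s + z                                             # Eq. 1 with this particular z
    print(t)                                              # s where kept, 0 where dropped
    print(torch.equal(t, s * keep))                       # True: identical to dropout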
10 exhibiting the results of the CIFAR-10 dataset for different architectures, the performances reliably deteriorate as we increase the scalar of the omnipresent (standard Gaussian) activation noise even with balanced noise enjoying n = 1000 samples. Meanwhile, in Table 5, we report the performances of all architectures for both CIFAR-10 and CIFAR-100 with three noise levels: (i) no noise α = 0.0, (ii) mild noise α = 0.5, and (iii) strong noise α = 1.0. Based on our observations, we are convinced that adding even a mild level of balanced noise is detrimental to the performance of the discriminative modeling scenario for different architectures and datasets. To also investigate the scenario for unbalanced noise injection, we provide Fig. 11 that shows the performance heatmap of the discriminative modeling classifier using ResNet50 architecture on CIFAR-10 dataset as we increase the noise scalar α during both the training and inference. We can discern that in the unbalanced noise case, the performance drop is more severe than the balanced case. Moreover, we made three observations in Fig. 11 that are noteworthy; these observations are reminiscent of the observations that were made in Section 4. (i) In Fig. 11, the diagonal represents the balanced noise injection. It can be seen that the performance consistently deteriorates when moving from top-left entry (no noise) to bottom-right entry (intense noise); if we recall, this is exactly the opposite of the observation made in Fig. 3: there, injecting balanced noise reliably and significantly increased the performance. (ii) We observe that in Fig. 11 unbalanced noise worsens the performance which is in accordance with the observation in Fig. 3. Finally, (iii) just as in Fig. 3, there is an asymmetry in performance for having an unbalanced noise: strong noise during inference with weak noise during training delivers noticeably worse performances than otherwise. Our results for CIFAR-10 dataset in Figs. 10 and 11 carry over to CIFAR-100 dataset; hence, we refrain from reporting the results pertaining to CIFAR-100 in separate figures. From the results that have been reported in Figs. 10 and 11, and also in Table 5, it becomes clear that, in contrast to generative modeling, in discriminative modeling, the activation noise (injected to all-layers in both training and inference) indeed worsens the performance. F ON WHY ACTIVATION NOISE IS EFFECTIVE FOR GENERATIVE MODELING BUT NOT DISCRIMINATIVE MODELING We do not have any rigorous mathematical proof showing why activation noise is effective for generative modeling but not discriminative modeling. However, we can conjecture the reason. To this end, we need to take a step back and ponder about the function of neural networks. As we know, in the broad mathematical sense, neural networks are best understood as approximating often a nonlinear target function j(·) that maps input variable X to an output variable Y , where X and Y come from training data D. Specifically, the aim is learning the target function j(·) from training data D via having j(·) to perform curve-fitting on the training data D. For both the discriminative and generative modeling scenario, the dimensions of the input space X is the same; however, when it comes to the output space Y the difference emerges: in a discriminative modeling setting, the output space has far fewer dimensions than that of generative modeling. 
For example, in the CIFAR-10 dataset, the output space Y for the generative model has 32×32=1024 dimensions whereas the discriminative model has only 10 dimensions. Meanwhile, we know that as soon as the number of dimensions grows, the curse of dimensionality comes to play: our training data D now becomes exponentially insufficient; the data becomes sparsely distributed in the space, and instead of having a smooth continuous manifold of the data which is desired, we will have patches of data scattered in the space. This makes it hard for our j(·) to map the input to the output because neural networks, as hinted in Universal Approximation Theorem (Hornik et al., 1989), in the strict sense cannot, and in the practical sense, severely suffer when approximating functions whose samples in the input space are not smooth. The activation noise, as proved in the main text where we linked activation noise with data augmentation, imputes the primary dataset D with an auxiliary dataset D′ to smooth the data manifold, thereby mitigating the curse of dimensionality; however, when the output has fewer dimensions, as in the case of discriminative modeling, this problem is less pronounced. That said, one question that might arise is that these explanations delivered so far justify adopting activation noise during the training as an regularization scheme; how activation noise during inference comes to play and interact with the activation noise during training? When we make the manifold of our dataset D more continuous and smooth during training by augmenting it with D′, we are instructing the neural network to learn an enhanced manifold instead of the original one: a new one that is smoother, more continuous, and stretched. During inference we compute the likelihood of a given sample x under our model; if the noise scalar is considerably larger than that of training time, it causes to stretch the cloud of our sample points; then clearly the manifold that is learned by neural networks has not seen the far points of the cloud and included them in itself during training; hence, the neural network would produce unreliable noisy likelihoods for these outlier points of the cloud. This is why the performance severely deteriorates when the noise in inference has a larger scalar than that of training (even worse than no sampling/noise). In other words, larger noise scalar α at inference causes the cloud of noise samples fall outside of the convex hull of the dataset on which the neural network is trained. Eventually, as we presented in Theorem 1, the sampling cloud must be of the same shape/distribution as the training cloud of noise that is used to smooth the dataset D.
1. What is the focus and contribution of the paper on energy-based models?
2. What are the strengths of the proposed approach, particularly in terms of its application at inference time?
3. What are the weaknesses of the paper regarding its experimental results and their implications for practical use?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The paper generalizes and studies activation noise for both training and inference in the specific case of energy-based models. Core contributions include 1) showing that activation noise helps at inference time (typically dropout is not used at inference time) and 2) a thorough study of which pairs of noise distributions work best at training and inference time.
Strengths And Weaknesses
Strengths
- This paper generalizes dropout to activation noise and studies it thoroughly using a controlled set of experiments.
- Using activation noise at inference time seems novel (I have not seen it before) and likely generally useful.
- The paper also presents an interesting negative result: activation noise is not effective for discriminative modeling.
Weaknesses
- The caption for Table 1 could be more descriptive so that the reader can interpret it in a self-contained way.
- While the phenomenon is novel, the most impressive results are obtained using 10^4 samples on all the datasets. I'm unsure as a reader whether this has practical benefit even on models moderately larger than those the paper considers. At least a discussion of how the insights can be used would make the paper stronger.
Clarity, Quality, Novelty And Reproducibility
Barring a few minor issues, the paper is easy to read and the insights look novel. No issues on reproducibility.
ICLR
Title
ON INJECTING NOISE DURING INFERENCE
Abstract
We study activation noise in a generative energy-based modeling setting during training for the purpose of regularization. We prove that activation noise is a general form of dropout. Then, we analyze the role of activation noise at inference time and demonstrate that it amounts to sampling the model. Thanks to activation noise, we observe an improvement in performance (classification accuracy) of about 200%. Later, we not only discover, but also prove, that the best performance is achieved when the activation noise follows the same distribution during both training and inference. To explicate this phenomenon, we provide theoretical results that illuminate the roles of activation noise during training and inference and their mutual influence on the performance. To further confirm our theoretical results, we conduct experiments on five datasets and with seven distributions of activation noise.
1 INTRODUCTION
Whether it is for performing regularization (Moradi et al., 2020) to mitigate overfitting (Kukacka et al., 2017) or for ameliorating the saturation behavior of the activation functions, thereby aiding the optimization procedure (Gulcehre et al., 2016), injecting noise into the activation functions of neural networks has been shown to be effective (Xu et al., 2012). Such activation noise, denoted by z, is added to the output of each neuron of the network (Tian & Zhang, 2022) as follows:

t = s + z = f(∑_i w_i x_i + b) + αz̄    (1)

where w_i, x_i, b, s, f(·), α, z̄, and t stand for the i-th weight, the i-th element of the input signal, the bias, the raw (un-noisy) activation signal, the activation function, the noise scalar, the normalized noise (divorced from its scalar α) originating from any distribution, and the noisy output, respectively. Studying this setting with noisy units is significant because it resembles how neurons of the brain learn and perform inference in the presence of noise (Wu et al., 2001). In the literature, training with input/activation noise has been shown to be equivalent to loss regularization: a well-studied regularization scheme in which an extra penalty term is appended to the loss function. Injecting noise has also been shown to keep the weights of the neural network small, which is reminiscent of other practices of regularization that directly limit the range of the weights (Bishop, 1995). Furthermore, injecting noise into input samples (or activation functions) is an instance of data augmentation (Goodfellow et al., 2016). Injecting noise practically expands the size of the training dataset, because each time training samples are exposed to the model, random noise is added to the input/latent variables, rendering them different every time they are fed to the model. Noisy samples can therefore be deemed new samples drawn from the domain in the vicinity of the known samples: they make the structure of the input space smooth, thereby mitigating the curse of dimensionality and the consequent patchiness/sparsity of datasets. This smoothing makes it easier for the neural network to learn the mapping function (Vincent et al., 2010). In the existing works, however, the impact of activation noise has been neither fully understood at training time nor broached at inference time, not to mention the lack of study on the relationship between activation noise at training and inference, especially for generative energy-based modeling (EBM).
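To make Eq. 1 concrete, the following is a minimal PyTorch-style sketch of an additive noisy activation; the class name and the choice of a standard Gaussian z̄ are ours for illustration, not from the paper's code.

```python
import torch
import torch.nn as nn

class NoisyActivation(nn.Module):
    """Additive activation noise, Eq. 1: t = f(Wx + b) + alpha * z_bar."""
    def __init__(self, activation: nn.Module, alpha: float = 0.5):
        super().__init__()
        self.activation = activation
        self.alpha = alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.activation(x)         # raw/un-noisy activation signal s
        z_bar = torch.randn_like(s)    # normalized noise z_bar (standard Gaussian here)
        return s + self.alpha * z_bar  # noisy output t

# Usage: replace a plain ReLU with its noisy counterpart.
layer = nn.Sequential(nn.Linear(8, 16), NoisyActivation(nn.ReLU(), alpha=0.9))
out = layer(torch.randn(4, 8))
```

Note that, unlike dropout, the noise is drawn afresh on every forward pass both at training and at inference time, which is exactly the property the paper exploits.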
In this paper, we study those issues: for the EBM setting, for the first time, we study the empirical and theoretical aspects of activation noise not only during training but also at inference, and we discuss how these two roles relate to each other. We prove that, during training, activation noise (Gulcehre et al., 2016) is a general form of dropout (Srivastava et al., 2014). This is interesting because dropout has been widely adopted as a regularization scheme. We then formulate and discuss the relationship between activation noise and two other key regularization schemes: loss regularization and data augmentation. We also prove that, during inference, adopting activation noise can be interpreted as sampling the neural network. Accordingly, with activation noise during inference, we estimate the energy of the EBM. Surprisingly, we discover that there is a very strong interrelation between the distribution of activation noise during training and inference: the performance is optimized when the two follow the same distribution. We also prove how to find the distribution of the noise during inference that minimizes the inference error, thereby improving the performance by as much as 200%. Overall, our main contributions in this paper are as follows:
• We prove that, during training, activation noise is a general form of dropout. Afterward, we establish the connections between activation noise and loss regularization/data augmentation. With activation noise during inference as well as training, we observe about 200% improvement in performance (classification accuracy), which is unprecedented. We also discover and prove that the performance is maximized when the noise in the activation functions follows the same distribution during both training and inference.
• To explain this phenomenon, we provide theoretical results that illuminate the two strikingly distinct roles of activation noise during training and inference. We later discuss their mutual influence on the performance. To examine our theoretical results, we provide extensive experiments for five datasets, many noise distributions, various values of the noise scalar α, and different numbers of samples.
2 RELATED WORKS
Our study touches upon multiple domains: (i) neuroscience, (ii) regularization in machine learning, (iii) generative energy-based modeling, and (iv) anomaly detection and one-class classification. (i) Studying the impact of noise in artificial neural networks (ANNs) can help neuroscience understand the brain’s operation (Lindsay, 2020; Richards et al., 2019). From neuroscience, we know that neurons of the brain (as formulated by Eq. 1) never produce the same output twice even when the same stimuli are presented, because of their internal noisy biological processes (Ruda et al., 2020; Wu et al., 2001; Romo et al., 2003). Having a noisy population of neurons, if anything, seems like a disadvantage (Averbeck et al., 2006; Abbott & Dayan, 1999); how, then, does the brain thwart the inevitable and omnipresent noise (Dan et al., 1998)? We provide new results on top of current evidence that noise can indeed enhance both training (via regularization) and inference (via error minimization) (Zylberberg et al., 2016; Zohary et al., 1994). (ii) Injecting noise into neural networks is known to be a regularization scheme: regularization is broadly defined as any modification made to a learning algorithm that is intended to mitigate overfitting, reducing the generalization error but not the training error (Kukacka et al., 2017).
Regularization schemes often seek to reduce overfitting (reduce generalization error) by keeping the weights of neural networks small (Xu et al., 2012). Hence, the simplest and most common regularization is to append a penalty to the loss function that grows in proportion to the size of the weights of the model. However, regularization schemes are diverse (Moradi et al., 2020); in the following, we review the popular ones: weight regularization (weight decay) (Gitman & Ginsburg, 2017) penalizes the model during training based on the magnitude of the weights (Van Laarhoven, 2017). This encourages the model to map the inputs to the outputs of the training dataset such that the weights of the model are kept small (Salimans & Kingma, 2016). Batch normalization regularizes the network by reducing internal covariate shift: it scales the output of the layer by standardizing the activations of each input variable per mini-batch (Ioffe & Szegedy, 2015). Ensemble learning (Zhou, 2021) trains multiple models (with heterogeneous architectures) and averages their predictions (Breiman, 1996). Activity regularization (Kilinc & Uysal, 2018b) penalizes the model during training based on the magnitude of the activations (Deng et al., 2019; Kilinc & Uysal, 2018a). Weight constraints limit the magnitude of the weights to a given range (Srebro & Shraibman, 2005). Dropout (Srivastava et al., 2014) probabilistically removes inputs during training: dropout relies on the rationale of ensemble learning, which trains multiple models. However, training and maintaining multiple models in parallel incurs heavy computational/memory expenses. Alternatively, dropout proposes that a single model can be leveraged to simulate training an exponential number of different network architectures concurrently by randomly dropping out nodes during training (Goodfellow et al., 2016). Early stopping (Yao et al., 2007) monitors the model’s performance on a validation set and stops training when performance starts to degrade (Goodfellow et al., 2016). Data augmentation, arguably the best regularization scheme, creates fake data and augments the training set (Hernandez-Garcia & Konig, 2018). Label smoothing (Lukasik et al., 2020) is commonly used in training deep learning (DL) models, where one-hot training labels are mixed with uniform label vectors (Meister et al., 2020). Smoothing (Xu et al., 2020) has been shown to improve both predictive performance and model calibration (Li et al., 2020b; Yuan et al., 2020). Noise schemes inject (usually Gaussian) noise into various components of machine learning (ML) systems: activations, weights, gradients, and outputs (targets/labels) (Poole et al., 2014). In that, noise schemes provide a more generic and therefore more widely applicable approach to regularization that is invariant to the architectures, losses, and activations of ML systems, and even to the type of problem at hand (Holmstrom & Koistinen, 1992). As such, noise has been shown to be effective for the generalization as well as the robustness of a variety of ML systems (Neelakantan et al., 2015). (iii) Our simulation setting in this paper follows that of generative EBM (we briefly say EBM henceforth) (LeCun et al., 2006). EBM (Nijkamp et al., 2020) is a class of maximum likelihood model that maps each input to an un-normalized scalar value named energy.
EBM is a powerful model that has been applied to many different domains, such as structured prediction (Belanger & McCallum, 2016), machine translation (Tu et al., 2020), text generation (Deng et al., 2020), reinforcement learning (Haarnoja et al., 2017), image generation (Xie et al., 2016), memory modeling (Bartunov et al., 2019), classification (Grathwohl et al., 2019), continual learning (Li et al., 2020a), and biologically-plausible training (Scellier & Bengio, 2017). (iv) Our EBM setting leverages separate autoencoders (Chen et al., 2018) for classification; in that, it resembles anomaly detection scenarios (Zhou & Paffenroth, 2017; An & Cho, 2015) and one-class classification (Ruff et al., 2018; Liznerski et al., 2020; Perera & Patel, 2019; Sohn et al., 2020).
3 EBM AND ACTIVATION NOISE DURING TRAINING AND INFERENCE
EBM is a class of maximum likelihood model that determines the likelihood of a data point x ∈ X ⊆ R^D using the Boltzmann distribution:

p_θ(x) = exp(−E_θ(x)) / Ω(θ),   Ω(θ) = ∫_{x∈X} exp(−E_θ(x)) dx    (2)

where E_θ(x) : R^D → R, known as the energy function, is a neural network parameterized by θ that maps each data point x to a scalar energy value, and Ω(θ) is the partition function. To solve the classification task in EBM utilizing activation noise, we adjust the general formulation of EBM in Eq. 2 as follows: given a class label y in a discrete set Y and the activation noise z during training, for each input x, we use the Boltzmann distribution to define the conditional likelihood as follows:

p_θ(x | y, z) = exp(−E_θ(x | y, z)) / Ω(θ | y, z),   Ω(θ | y, z) = ∫_{x∈X} exp(−E_θ(x | y, z)) dx    (3)

where E_θ(x | y, z) : (R^D, N, R^F) → R is the energy function that maps an input, given a label and noise, to a scalar energy value E_θ(x | y, z), and Ω(θ | y, z) is the normalization function.
3.1 TRAINING WITH ACTIVATION NOISE
During training, we want the distribution defined by E_θ to model the data distribution p_D(x, y), which we achieve by minimizing the negative log likelihood L_ML(θ, q(z)) of the data as follows:

(θ*, q*(z)) = argmin_{θ, q(z)} L_ML(θ, q(z)),   L_ML(θ, q(z)) = E_{(x,y)∼p_D; z∼q(z)} [− log p_θ(x | y, z)]    (4)

where q(z) is the distribution of the activation noise z during training. We explicate the relationships between activation noise during training and (i) dropout, (ii) loss regularization, and (iii) data augmentation. We start by presenting theoretical results illuminating that activation noise is a general form of dropout. For that, we define the negate distribution of a primary distribution, based on which we derive the Activation Noise Generality Proposition, stating that dropout is a special case of activation noise.
Definition 1 (Negate Random Variable): We define the random variable W : Σ → E as the negate of the random variable X : ∆ → F, denoted by X ∦ W (where Σ, ∆, E, F ⊂ R), if the outcome of W negates the outcome of X; mathematically speaking, x + w = 0.
Proposition 1 (Activation Noise Generality Proposition): When the noise at a given neuron comes from the negate random variable Z with respect to the signal S, i.e., Z ∦ S, the activation noise drops out (the signal of) that neuron. Specifically, the summation of signal and noise becomes zero, s + z = 0, for all outcomes.
Proof: The proof follows from the definition of the negate random variable. This theoretical result implies that activation noise can be considered a general form of dropout: Fig. 1 visualizes how activation noise reduces to dropout.
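A small numerical illustration of Proposition 1 (a toy example of ours, not from the paper): drawing the noise as the negate of the signal zeroes the neuron, and applying the negate noise through a Bernoulli mask recovers standard binary dropout.

```python
import torch

s = torch.tensor([0.7, -1.2, 2.3])  # raw activation signals of three neurons
z = -s                              # negate noise (Definition 1): z negates s
t = s + z                           # Eq. 1 with the noise term set to z = -s
print(t)                            # tensor([0., 0., 0.]) -- all neurons dropped out

# Dropping only a random subset of neurons, as binary dropout does,
# corresponds to applying the negate noise through a Bernoulli mask.
mask = torch.bernoulli(torch.full_like(s, 0.5))
print(s + mask * z)                 # surviving neurons keep s; masked ones are zeroed
```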
In Fig. 2 we compare the performance of activation noise with that of dropout, under the same simulation setting as for Fig. 3. Now we explain how activation noise relates to loss regularization and data augmentation. To that end, we consider the EBM setting leveraging multiple distinct autoencoders for classification: one autoencoder is used for each class. We first write the empirical error without noise in the form of the mean squared error (MSE) loss function as follows:

I^s_θ = ∑_{y∈Y} ∫_x ‖f_θ(x | y) − x‖² p_D(x, y) dx = ∑_{y∈Y} ∑_k ∫_{x_k} [f^k_θ(x | y) − x_k]² p_D(x, y) dx_k    (5)

where f_θ(x | y) denotes the model parameterized by θ given label y, and the energy is determined by E_θ(x | y) = ‖f_θ(x | y) − x‖². Meanwhile, f^k_θ(x | y) and x_k refer to the k-th element of the output of the model and of the desired target (i.e., the original input), respectively. With activation noise, however, we have

I_θ = ∑_{y∈Y} ∑_k ∫_z ∫_{x_k} [f^k_θ(x | y, z) − x_k]² p_D(x, y) q(z) dx_k dz    (6)

where f^k_θ(x | y, z) denotes the k-th element of the output of the noisy model. The noise z comes from the distribution q(z) during training. Expanding the network function f^k_θ(x | y, z) into the signal response f^k_θ(x | y) and the noisy response h^k_θ(x | y, z) to give

f^k_θ(x | y, z) = f^k_θ(x | y) + h^k_θ(x | y, z),    (7)

we can write Eq. 6 as I_θ = I^s_θ + I^z_θ, where I^z_θ encapsulates the noisy portion of the loss, given by

I^z_θ = ∑_{y∈Y} ∑_k ∫_z ∫_{x_k} [h^k_θ(x | y, z)² + 2 h^k_θ(x | y, z)(f^k_θ(x | y) − x_k)] p_D(x, y) q(z) dx_k dz.    (8)

The term I^z_θ can be viewed as a loss regularizer, specifically a Tikhonov regularizer, as has been shown in (Bishop, 1995). In general, not just for MSE, but for any arbitrary loss function v(f_θ(x | y, z), x), we can rewrite the loss as a combination of the losses pertaining to the signal component plus the noise component, and the same result holds. As suggested before, the second term of the error, besides being a regularizer, can also be seen as the loss for an auxiliary dataset that is interleaved with the primary dataset during training. Hence, in this way, it can be said that the noise augments the dataset (Goodfellow et al., 2016). This section concerned how activation noise relates to dropout, loss regularization, and data augmentation. As alluded to, activation noise during training is beneficial for the performance; but what if we also used activation noise during inference? We will now answer this question: for EBM, activation noise can also be beneficial at inference. Specifically, we will first present the role of activation noise during inference in the EBM framework and then present the experiments yielding surprising results demonstrating the effectiveness of activation noise during inference.
3.2 INFERENCE WITH ACTIVATION NOISE
Consider the inference phase, assuming training has been done (i.e., θ* has been determined). Given a test data point x, we estimate the energy of our trained model E_θ*(x | y, z) with many different noise realizations z following the inference noise distribution r(z), which are averaged to produce the energy. Therefore, the noise distribution r(z) at inference can be considered a sampler. Probabilistically speaking, we measure the expectation of the energy E_θ*(x | y, z) with respect to the distribution r(z) as follows:

Ē_θ*(x | y) = E_z[E_θ*(x | y, z)] = ∫_z E_θ*(x | y, z) r(z) dz.    (9)

The variance of E_θ*(x | y, z) is determined by σ² = ∫_z E_θ*(x | y, z)² r(z) dz − Ē_θ*(x | y)². In practice, because the calculation of the integral in Eq. 9 is intractable, we perform the inference via Ê_θ*(x | y) = (1/n) ∑_{i=1}^n E_θ*(x | y, z^(i)), where the integral in Eq. 9 is numerically approximated via sampling from the distribution r(z), which generates the noise samples {z^(i)}. Finally, given an input x, the class label predicted by our EBM is the class with the smallest energy at x; we find the target class via ŷ = argmin_{y′∈Y} Ê_θ*(x | y′). This approach to classification is a common formulation for making inference, derived from Bayes’ rule. There is one difference, however: in EBM classification we seek the class whose energy is minimal as the class to which the input data belongs.
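A minimal sketch of this Monte-Carlo inference; the function names are ours, and each per-class autoencoder is assumed to inject fresh activation noise on every forward pass (as the NoisyActivation wrapper above does), so each call realizes a new z^(i).

```python
import torch

def estimate_energy(model, x, n):
    """Ê(x | y) = (1/n) * sum_i E(x | y, z^(i)).  The energy of the per-class
    autoencoder `model` is its reconstruction error; every forward pass draws
    a fresh noise realization z^(i)."""
    energies = torch.stack([((model(x) - x) ** 2).flatten(1).sum(1) for _ in range(n)])
    return energies.mean(dim=0)                  # averaged over n noise samples

def classify(models, x, n=1000):
    """ŷ = argmin_y Ê(x | y): pick the class whose estimated energy is smallest."""
    e = torch.stack([estimate_energy(m, x, n) for m in models])  # [num_classes, batch]
    return e.argmin(dim=0)
```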
In the end, note that, as discussed in this section, activation noise not only generalizes dropout during training, as a regularization scheme, but also offers the opportunity of sampling the model during inference (to minimize the inference error), possibly using a wide range of noise distributions as the sampler; this is advantageous over dropout, which is only applicable during training. In the next section, we will first present the simulation settings, detailing the architectures used to conduct the experiments, and then the consequent results.
4 SIMULATION SETTING AND RESULTS
4.1 SIMPLE REPRESENTATIVE SIMULATION RESULTS
For the purpose of illustration, we first report only a part of our simulation setting pertaining to only one dataset, a part that is representative of the general theme of our results in this paper. This suffices to motivate further theoretical and empirical studies. In the next subsections, we will provide a comprehensive account of our experiments. In this simulation, we trained 10 autoencoders, one for each class of the CIFAR-10 dataset. Our autoencoders incorporate the noise z̄, a random variable following the standard Gaussian distribution (zero mean and unit variance) that is multiplied by the scalar α, for each neuron of each layer of the model, as presented in Eq. 1. We use convolutional neural networks (CNNs) with channel numbers [30, 73, 100] for the encoder, and the mirror of it for the decoder. The stride is 2, the padding is 1, and the window size is 4×4 (a sketch of this architecture is given at the end of this subsection). We used MSE as the loss function, although binary cross-entropy (BCE) has also been tried and produced similar results. The number of samples n is set to 1,000. This simulation setting is of interest in incremental learning, where only one or a few classes are present at a time (van de Ven et al., 2021). Moreover, doing classification via generative modeling fits in the framework of predictive coding, a model of visual processing which proposes that the brain possesses a generative model of the input signal, with prediction loss contributing as a means for both learning and attention (Keller & Welling, 2021). Fig. 3 shows the performances (classification accuracies) in terms of the scalar α of Gaussian noise during both training and test. The omnipresent activation noise, present in both training and inference, exhibits two surprising behaviors in our simulations: (i) it significantly improves the performance (by about 200%), which is unprecedented among similar studies in the literature; (ii) the other interesting observation is that always being on or close to the diagonal leads to the best results. These observations will be discussed below.
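Concretely, one such per-class autoencoder could look roughly like the sketch below, reusing the NoisyActivation wrapper from the introduction; the stated channels, kernel, stride, and padding come from the text, while the activation choices and decoder output nonlinearity are our assumptions.

```python
import torch.nn as nn

def make_autoencoder(alpha: float = 0.9) -> nn.Sequential:
    """Noisy convolutional autoencoder roughly matching the stated spec:
    encoder channels [30, 73, 100], 4x4 kernels, stride 2, padding 1,
    with the decoder mirroring the encoder."""
    enc = nn.Sequential(
        nn.Conv2d(3, 30, 4, stride=2, padding=1),   NoisyActivation(nn.ReLU(), alpha),
        nn.Conv2d(30, 73, 4, stride=2, padding=1),  NoisyActivation(nn.ReLU(), alpha),
        nn.Conv2d(73, 100, 4, stride=2, padding=1), NoisyActivation(nn.ReLU(), alpha),
    )
    dec = nn.Sequential(
        nn.ConvTranspose2d(100, 73, 4, stride=2, padding=1), NoisyActivation(nn.ReLU(), alpha),
        nn.ConvTranspose2d(73, 30, 4, stride=2, padding=1),  NoisyActivation(nn.ReLU(), alpha),
        nn.ConvTranspose2d(30, 3, 4, stride=2, padding=1),   nn.Sigmoid(),
    )
    return nn.Sequential(enc, dec)

# One such autoencoder is trained per class: 10 models in total for CIFAR-10.
models = [make_autoencoder() for _ in range(10)]
```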
4.2 THEORIES FOR THE SIMULATION RESULTS
We first ask: why is noise so effective in improving the performance? To answer this, we need to reiterate that noise plays two different roles at (i) training and (ii) inference. For the first role, as alluded to before, what activation noise actually does during training is regularization. We suggested that activation noise is indeed a general case of dropout. We later demonstrated that activation noise is a form of loss regularization, and also an instance of data augmentation. All of these fall under the umbrella of regularization. Meanwhile, we learned that, during the test phase, activation noise performs sampling, which reduces the inference error. In aggregation, therefore, regularization hand in hand with sampling accounts for this performance boost (about 200%). The other interesting observation in our simulations is that, to attain the highest performance, it is best for the noise scalar α (and, in general, for the distributions) of the training and test noise to be equal. This can be clearly observed in Fig. 3 (and also in all other numerical results that we will present later). We ask: why is there such a strong correlation? Shouldn’t the two roles of noise during training and test be mutually independent? Where and how do these two roles link to each other? Upon empirical and theoretical investigation, we concluded that the interrelation between noise during training and test can be characterized and proved in the following theorem.
Theorem 1 (Noise Distribution Must be the Same During Training and Test): Via having the inference noise (i.e., the sampler) r(z) follow the same distribution as the noise during training q*(z), i.e., r*(z) = q*(z), the loss of the inference is minimized.
Proof: During the inference, to optimize the performance of the EBM model, the objective is to find the distribution r*(z) for the activation noise that minimizes the loss specified as follows:

r*(z) = argmin_{r(z)} L_ML(θ*, r(z)),   L_ML(θ*, r(z)) = E_{(x,y)∼p_D; z∼r(z)} [− log p_θ*(x | y, z)].    (10)

Meanwhile, from Eq. 4 we know that training is performed to minimize the loss L_ML(θ*, q*(z)) with the distribution q*(z) as the activation noise. Therefore, during inference, if we set the activation noise r*(z) the same as q*(z) in Eq. 10, the loss will again be minimal. □
This implies that, during training, the distribution of our trained model is impacted by the distribution of the noise q(z) via the second term of the loss function in Eqs. 7-8, where the network function learns the distribution of the signal in the presence of a certain noise distribution. During inference, when the noise distribution is the same as in training, the performance is the best, and as soon as there is a discrepancy between the two distributions, the performance degrades.
4.3 COMPREHENSIVE SIMULATION RESULTS FOR GAUSSIAN NOISE
We present extensive simulation results for various datasets (using Google Colab Pro+) considering Gaussian activation noise. We used the same architecture as the one presented in Section 4.1 for all of our five datasets: CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), CalTech-101 (Fei-Fei et al., 2004), Flower-102 (Nilsback & Zisserman, 2008), and CUB-200 (Wah et al., 2011). Given that the volume of our experiments is large, for computational tractability we slimmed down the larger datasets from about 100 (or 200) classes to 20 classes. Our comprehensive simulation results explore the performance of our proposed EBM framework across five dimensions, as swept in the sketch below: (i) for five datasets, we examine the joint impacts of various noise distributions. The noise scalar α varies during both (ii) training and (iii) test time, within the range [0, 2), over 20 values with a step size of 0.1. However, we do not report all of them due to the page limit; instead, we selectively present values that are representative of the two key observations we intend to highlight: these values are (0.5, 0.5), (0.5, 0.9), (0.5, 1.5), (0.9, 0.5), (0.9, 0.9), (0.9, 1.5), and (1.5, 1.5), where the pair (·, ·) denotes the scalar α of the noise at training and test. (iv) The number of samples n is set to 10⁴, except for the cases of no/zero test noise, in which n = 1, because when the scalar α of the noise during test is 0 (no noise), such as in rows (0.0, 0.0) and (0.5, 0.0), the corresponding accuracy would not vary. Finally, (v) we assess the performances for different noise distributions by exploring all of their permutations.
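The (α_train, α_test) grid just described can be swept roughly as follows; `train_models` and `accuracy` are hypothetical stand-ins for the paper's training and evaluation routines, stubbed here so the sweep itself is runnable.

```python
# Hypothetical stand-ins for the paper's routines, not its actual code.
def train_models(alpha: float):
    return alpha  # would return one noisy autoencoder per class

def accuracy(models, alpha: float, n: int) -> float:
    return 0.0    # would run the sampling-based inference of Section 3.2

alphas = [round(0.1 * k, 1) for k in range(20)]   # 0.0, 0.1, ..., 1.9
results = {}
for a_train in alphas:
    models = train_models(alpha=a_train)          # train with noise scale a_train
    for a_test in alphas:
        n = 1 if a_test == 0.0 else 10_000        # no sampling without test noise
        results[(a_train, a_test)] = accuracy(models, alpha=a_test, n=n)
```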
In our simulations, we resize all images of our datasets to a resolution of 32×32. The number of epochs is set to 100. For the optimizer, Adam is chosen with the default learning rate of 0.001 and default momentum (Kingma & Ba, 2014). The minibatch size is 256. We ran 10 experiments for each number reported in Table 1, and we present the means as well as the standard errors of the means (SEMs) over these runs. The first observation we intend to emphasize is that this very activation noise can contribute to a considerable improvement in performance, as high as 200%. This can be discerned in Table 1 when comparing the row (0.0, 0.0), pertaining to the case with no noise, with the row (0.9, 0.9). When going from the former case to the latter, the accuracy jumps from 20.48% to 65.88% for the CIFAR-10 dataset with n = 10,000 samples. The same phenomenon can also be observed for all other datasets. Compared with previous studies in the literature on the effectiveness of noise, this level of performance enhancement acquired by injecting a simple noise is unprecedented, if not unimaginable. This performance enhancement, as we theorized and proved in Theorem 1, is a result of the combination of both regularization and sampling, not in an independent way, but indeed in a complicated interdependent fashion (see Fig. 3 to discern the interdependency). The second observation signifies the importance of having a balance in injecting noise at training and test: the noise distribution at test r(z) ought to follow the distribution of the noise during training q(z) so that the loss of the inference is minimized, as discussed in Theorem 1. This can be noticed when contrasting the balanced rows (0.5, 0.5), (0.9, 0.9), and (1.5, 1.5) with the rest, which are unbalanced. Interestingly, even the row (0.0, 0.0), which enjoys no noise, outperforms the unbalanced rows (0.0, 0.5), (0.5, 0.9), (0.5, 1.5), and (0.9, 1.5). The third observation is the asymmetry between having unbalanced noise for training versus test. Too large a noise scalar α during test has a far more negative impact on the performance than otherwise. For example, consider rows (0.5, 0.0) and (0.0, 0.5): the former (large noise scalar α at training) delivers about 100% higher accuracies than the latter, which bears a high noise scalar α at test. This pattern, with no exception, repeats in all rows where the noise scalar during test is larger than that of the training noise, such as (0.0, 0.5), (0.5, 0.9), (0.5, 1.5), and (0.9, 1.5). All these cases yield the poorest performances, even worse than (0.0, 0.0). This observation (as well as the preceding two) is better discerned in Fig. 3, which portrays the heatmap of the performance results.
4.4 INVESTIGATING OTHER NOISE DISTRIBUTIONS
We now evaluate the performance of alternative noise distributions. We consider both symmetric and asymmetric distributions: (i) Cauchy, (ii) uniform, (iii) Chi, (iv) lognormal, (v) t-distribution, and (vi) exponential. For noise distributions that can be either symmetric or not (e.g., uniform and Cauchy), we explore parameters that keep them symmetric around zero, because based on our experiments (as we will see) we are convinced that symmetric distributions (i.e., symmetric about zero) consistently yield higher performances. For the different noise distributions, the underlying rule that justifies the results can be summarized as follows: when the activation noise z is added to the signal s, it is conducive to the performance as long as it does not entirely distort the signal. In other words, the Signal-to-Noise Ratio (SNR) can be beneficially decreased down to a threshold that delivers the peak performance; after that, the performance begins to degrade. Given that t = s + z = s + αz̄, and s is constant, there are two factors that determine the value of the SNR (given by var[s]/var[z]): (i) the distribution of z̄, particularly how narrowly (or broadly) the density of the random variable is distributed (i.e., var[z̄]); and (ii) how large the scalar α is. Fig. 4 presents the profiles of the different probability distributions: note the breadth of the various probability distributions, as it plays a vital role in how well they perform. In the following, we discuss our results with respect to the value of the SNR; a sketch of the corresponding samplers is given below. As we can see in Fig. 4, (i) adopting symmetric probability distributions as activation noise substantially outperforms asymmetric ones: hence, we observe that the Cauchy, Gaussian, uniform, and t distributions consistently yield higher accuracies than the Chi, exponential, and lognormal distributions. This is because symmetric sampling can more effectively explore the learned manifold of the neural network. (ii) The performances of the different noise distributions can be justified with respect to the value of the SNR as follows: as we reduce the SNR by increasing the noise scalar α, the performance rises up to a peak and then falls; this is valid for all the probability distributions. The differences in the slopes of the various distributions (in Fig. 4, on the right) originate from the differences in the profiles of the noise distributions: the wider the noise profile (the larger var[z̄]), the sooner the SNR drops, and the earlier the pattern of rise and fall occurs. For example, in Fig. 4, because the Gaussian distribution is thinner than the Cauchy, its pattern of rise and fall happens with a small delay, since its SNR drops with a latency.
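The noise profiles above can be drawn with standard library samplers; the parameter values below are illustrative choices consistent with the symmetric-around-zero setup described in the text, not the paper's exact settings.

```python
import torch
import torch.distributions as D

# Samplers for the noise profiles of Fig. 4 (parameters are assumptions).
samplers = {
    "gaussian":    D.Normal(0.0, 1.0),
    "cauchy":      D.Cauchy(0.0, 1.0),      # z0 = 0, scale gamma
    "uniform":     D.Uniform(-1.0, 1.0),    # width omega = 2, mean 0
    "student_t":   D.StudentT(df=3.0),      # nu = 1 recovers the Cauchy
    "lognormal":   D.LogNormal(0.0, 1.0),   # mu = 0, sigma varied
    "exponential": D.Exponential(1.0),      # rate lambda
}
z_bar = {name: d.sample((5,)) for name, d in samplers.items()}
# The Chi distribution is not built in; Chi(k) can be drawn as the square
# root of a Chi-squared(k) sample.
z_bar["chi"] = D.Chi2(df=3.0).sample((5,)).sqrt()
```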
5 CONCLUSION
In this paper, specifically for an EBM setting, we studied the impact of activation noise during training, broached its role at inference time, and scrutinized the relationship between activation noise at training and inference. We proved that, during training, activation noise is a general form of dropout; we discussed the relationship between activation noise and loss regularization/data augmentation. We studied the empirical and theoretical aspects of activation noise at inference time and proved that adopting noisy activation functions during inference can be interpreted as sampling the model. We proved and demonstrated that when the sampler follows the same distribution as the noise during training, the loss of the inference is minimized and therefore the performance is optimized. To assess our theoretical results, we performed extensive experiments exploring different datasets, noise scalars during training and inference, and different noise distributions.
A MORE DETAILS FOR THE PERFORMANCE RESULTS OF VARIOUS NOISE DISTRIBUTIONS
In the following, we provide the specifications of our probability distributions. As shown in Fig. 5, for the Cauchy distribution g(z̄; z0, γ), we set z0, which characterizes the mean of the Cauchy distribution, equal to zero, and explore different values of γ. In the same way, for the uniform distribution, the mean is set to zero: while the uniform distribution originally has two parameters characterizing the start and end of the random variable, in our simulation, because the uniform noise is desired to be symmetric, we define only one parameter characterizing the uniform distribution, denoted by ω and referring to its width. For the Chi distribution g(z̄; k), we assess its performance with different values of k, whereas for the lognormal distribution g(z̄; µ, σ), the value of µ is set to zero and σ is varied. For the t-distribution g(z̄; ν), different values of ν are explored. It is worth mentioning that for the t-distribution with ν = 1, the distribution reduces to the Cauchy distribution. Finally, for the exponential distribution g(z̄; λ), we evaluate the performance for different values of λ. Our experiments are performed 10 times via multiple random seeds. We report the means (± SEMs) over these experiments. Overall, based on the performances across all values of the noise scalar α (as shown in Fig. 6), we conclude that the best noise distribution is also the most popular one: the Gaussian distribution, perhaps because it has the largest area under the curve for different values of the noise scalar α. Furthermore, it is clear that the distributions that are similar to the Gaussian distribution deliver the best performances, whereas for dissimilar ones the performance becomes worse. Meanwhile, we realized that various parameters (standard deviations σ) for the Gaussian distribution do not offer convincing improvements and only hasten/delay the occurrence of the rise-and-fall pattern of the classification accuracy.
B IMPACT OF THE NUMBER OF SAMPLES
In Fig. 7 we demonstrate the impact of the number of samples for the standard Gaussian noise z̄ with different noise scalar values α during training and test. It can be seen that the accuracy rises almost logarithmically as the number of samples increases. Note that these results pertain to the CIFAR-10 dataset. In Table 2 we provide the comprehensive form of Table 1, including the classification accuracies for different numbers of samples n.
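One way to read the diminishing returns of larger n in Fig. 7 is that the standard error of an n-sample Monte-Carlo estimate decays as 1/√n; the following self-contained illustration of that decay is ours, not the paper's analysis.

```python
import torch
torch.manual_seed(0)

# Std. error of an n-sample mean estimate shrinks as 1/sqrt(n).
for n in (10, 100, 1_000, 10_000):
    estimates = torch.randn(512, n).mean(dim=1)  # 512 independent n-sample means
    print(f"n={n:>6}  std of estimate = {estimates.std().item():.4f}")
```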
C DIFFERENT NOISE DISTRIBUTIONS DURING TRAINING AND INFERENCE
We examine the performances pertaining to combinations of nonidentical noise distributions: adopting two different distributions, one for training and the other for inference. The parameters of the different noise distributions are outlined in Table 3. The noise scalar α is set to one for training and test. The results of the experiment are displayed in Fig. 8, which demonstrates the performances for each pair of noise distributions in a 7×7 heatmap grid: the empirical results confirm the theoretical conclusions proposed in Theorem 1, namely that the optimal performance during inference is acquired if we have the same noise distributions during training and inference. In Fig. 8, it can be seen that the best results pertain to our four symmetric noise distributions, whereas the worst occur when an asymmetric noise is adopted for training and a symmetric noise for inference (to sample). This result is in accordance with our third observation discussed in Section 4.3 and further corroborates Theorem 1. In Fig. 8, we can also see that if we use symmetric noise during training and asymmetric noise during inference, the performance will not be as poor as otherwise. This was also observed and explained in Section 4.4.
D LAYER-SELECTIVE NOISE FOR THE EBM SETTING
So far, we have injected noise into all the layers of our neural networks. In this section, we study the case where we inject noise into only a portion of the layers of the generative EBM model. We design and compare five schemes for noise injection: (i) full noise (the default), where noise is injected into all layers (denoted by n-full); (ii) and (iii) odd/even noise, where noise is injected into all odd/even layers (denoted by n-odd and n-even); (iv) and (v) encoder/decoder noise, where noise is injected into the encoder/decoder layers (denoted by n-enc and n-dec). It is worth mentioning that we used standard Gaussian noise with scalar α = 1, pertaining to the peak performance, for both training and test. Fig. 9 presents the accuracies of our neural network injected with all the noise schemes presented above. As can be seen, injecting noise into all layers yields higher performances than the alternatives; n-even and n-odd are at the next rank, while n-enc and n-dec are the worst. Fig. 9 also jointly investigates the impact of injecting noise into different layers of the neural network at both training and inference. We observe results akin to the previous results in Fig. 3, once again but for each layer: according to our results, the noise injected into a specific layer during training and inference had better follow the same distribution. For example, when noise is injected into the even layers (n-even) of the neural network during training, none of the other noise injection schemes during inference could exceed the performance of n-even noise injection. Also, not injecting noise during training for a specific layer while injecting noise during inference for that layer significantly reduces the performance. Accordingly, inspired by the above results, we can propose Corollary 1, which generalizes Theorem 1 (a sketch of the placement schemes follows below).
Corollary 1 (General Noise Distribution for the Sampler During Test): Via having the sampler r(z, v) follow both the same placement and the same distribution as the noise during training q*(z, v), i.e., r*(z, v) = q*(z, v), where v encodes the placement of noise among the layers, the loss of the inference is minimized. Clearly, when the placement parameter v is selected such that all layers are included for noise injection, Corollary 1 reduces to Theorem 1.
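The placement vector v of Corollary 1 can be encoded as a per-layer boolean mask; the helper below is our sketch of the five schemes, with layer indexing conventions assumed.

```python
# Sketch of the five placement schemes of Appendix D: v[i] says whether
# layer i receives noise.  Indexing and helper names are ours.
def placement(scheme: str, num_layers: int, num_encoder_layers: int) -> list:
    if scheme == "n-full":
        return [True] * num_layers
    if scheme == "n-odd":
        return [i % 2 == 1 for i in range(num_layers)]
    if scheme == "n-even":
        return [i % 2 == 0 for i in range(num_layers)]
    if scheme == "n-enc":
        return [i < num_encoder_layers for i in range(num_layers)]
    if scheme == "n-dec":
        return [i >= num_encoder_layers for i in range(num_layers)]
    raise ValueError(scheme)

# Example: the noise mask v for a 6-layer autoencoder (3 encoder, 3 decoder).
print(placement("n-enc", num_layers=6, num_encoder_layers=3))
```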
E ACTIVATION NOISE IN DISCRIMINATIVE MODELING
So far, we have considered only generative EBM modeling. In this section, we examine the impact of activation noise on the performance of classifiers relying on discriminative modeling: we conduct our study with two popular datasets, CIFAR-10 and CIFAR-100, on seventeen architectures, as listed in Table 4. In this experiment, we train for 30 epochs, the optimizer is Adam with a learning rate of 0.001 and the default momentum, and the learning rate scheduler is cosine annealing. Fig. 10 demonstrates the results of our experiments: as we can see in Fig. 10, which exhibits the results on the CIFAR-10 dataset for different architectures, the performances reliably deteriorate as we increase the scalar of the omnipresent (standard Gaussian) activation noise, even with balanced noise enjoying n = 1000 samples. Meanwhile, in Table 5, we report the performances of all architectures for both CIFAR-10 and CIFAR-100 with three noise levels: (i) no noise, α = 0.0; (ii) mild noise, α = 0.5; and (iii) strong noise, α = 1.0. Based on our observations, we are convinced that adding even a mild level of balanced noise is detrimental to the performance in the discriminative modeling scenario, for different architectures and datasets. To also investigate the scenario of unbalanced noise injection, we provide Fig. 11, which shows the performance heatmap of the discriminative modeling classifier using the ResNet50 architecture on the CIFAR-10 dataset as we increase the noise scalar α during both training and inference. We can discern that in the unbalanced-noise case, the performance drop is more severe than in the balanced case. Moreover, we made three noteworthy observations in Fig. 11; these observations are reminiscent of the observations made in Section 4. (i) In Fig. 11, the diagonal represents balanced noise injection. It can be seen that the performance consistently deteriorates when moving from the top-left entry (no noise) to the bottom-right entry (intense noise); if we recall, this is exactly the opposite of the observation made in Fig. 3: there, injecting balanced noise reliably and significantly increased the performance. (ii) We observe in Fig. 11 that unbalanced noise worsens the performance, which is in accordance with the observation in Fig. 3. Finally, (iii) just as in Fig. 3, there is an asymmetry in performance for unbalanced noise: strong noise during inference with weak noise during training delivers noticeably worse performances than otherwise. Our results for the CIFAR-10 dataset in Figs. 10 and 11 carry over to the CIFAR-100 dataset; hence, we refrain from reporting the results pertaining to CIFAR-100 in separate figures. From the results reported in Figs. 10 and 11, and also in Table 5, it becomes clear that, in contrast to generative modeling, in discriminative modeling the activation noise (injected into all layers in both training and inference) indeed worsens the performance.
F ON WHY ACTIVATION NOISE IS EFFECTIVE FOR GENERATIVE MODELING BUT NOT DISCRIMINATIVE MODELING
We do not have a rigorous mathematical proof showing why activation noise is effective for generative modeling but not for discriminative modeling. However, we can conjecture the reason. To this end, we need to take a step back and ponder the function of neural networks. As we know, in the broad mathematical sense, neural networks are best understood as approximating an often nonlinear target function j(·) that maps an input variable X to an output variable Y, where X and Y come from the training data D. Specifically, the aim is to learn the target function j(·) from the training data D by having j(·) perform curve-fitting on the training data D. For both the discriminative and the generative modeling scenario, the dimensionality of the input space X is the same; however, when it comes to the output space Y, the difference emerges: in a discriminative modeling setting, the output space has far fewer dimensions than that of generative modeling.
For example, in the CIFAR-10 dataset, the output space Y for the generative model has 32×32 = 1024 dimensions, whereas the discriminative model has only 10 dimensions. Meanwhile, we know that as soon as the number of dimensions grows, the curse of dimensionality comes into play: our training data D becomes exponentially insufficient; the data becomes sparsely distributed in the space, and instead of having a smooth, continuous manifold of the data, which is desired, we will have patches of data scattered in the space. This makes it hard for our j(·) to map the input to the output because, as hinted at in the Universal Approximation Theorem (Hornik et al., 1989), neural networks in the strict sense cannot, and in the practical sense severely struggle to, approximate functions whose samples in the input space are not smooth. The activation noise, as proved in the main text where we linked activation noise with data augmentation, supplements the primary dataset D with an auxiliary dataset D′ to smooth the data manifold, thereby mitigating the curse of dimensionality; however, when the output has fewer dimensions, as in the case of discriminative modeling, this problem is less pronounced. That said, one question that might arise is that the explanations delivered so far justify adopting activation noise during training as a regularization scheme; how does activation noise during inference come into play and interact with the activation noise during training? When we make the manifold of our dataset D more continuous and smooth during training by augmenting it with D′, we are instructing the neural network to learn an enhanced manifold instead of the original one: a new one that is smoother, more continuous, and stretched. During inference, we compute the likelihood of a given sample x under our model; if the noise scalar is considerably larger than that of training time, it stretches the cloud of our sample points; then, clearly, the manifold learned by the neural network has not seen the far points of the cloud and has not included them during training; hence, the neural network produces unreliable, noisy likelihoods for these outlier points of the cloud. This is why the performance severely deteriorates when the noise at inference has a larger scalar than that of training (even worse than no sampling/noise). In other words, a larger noise scalar α at inference causes the cloud of noise samples to fall outside of the convex hull of the dataset on which the neural network was trained. Eventually, as we presented in Theorem 1, the sampling cloud must be of the same shape/distribution as the training cloud of noise that is used to smooth the dataset D.
1. What is the focus of the paper regarding energy-based generative models?
2. What are the strengths and weaknesses of the paper's claims and experimental results?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns regarding the paper's comparisons and regularizations?
5. Is the paper's contribution novel, or have similar studies been conducted before?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper investigates the effect of noise injection, in the form of additive noise during training and inference, for a specific class of neural network models: energy-based generative models, which are roughly variational encoders whose output is interpreted as energy, with smaller energy corresponding to the correct class prediction. Several claims about the properties of additive noise are made and multiple experimental results are provided. In particular, it is claimed that additive noise is equivalent to dropout and to data augmentation, and also that additive noise during inference improves accuracy. Experiments compare multiple different setups of the proposed noisy model on different datasets. Strengths And Weaknesses The paper presents the results of a very large number of experiments, performed for different setups and with standard deviations over runs. Unfortunately, there are no comparisons to any other regularizations and no reports of state-of-the-art results in the field, so it is very hard to judge how beneficial the proposed scheme is. The paper greatly over-claims the obtained results. The proposed proof of the equivalence of dropout to additive noise is based on the "negate" random variable introduced in the paper, which is supposed to follow the distribution of the activations but with negated values. First, I am not sure that constructing such a variable is possible in general. Second, this does not prove equivalence to binary dropout, because a mask is sampled anew every time - this means that the additive noise would have to change its distribution at every step to correspond to the new mask. Moreover, dropout can be not only binary, but also continuous. The 200% accuracy improvement announced in the abstract is seen only in the particular setup considered. It is not specified whether the initial model (without noise) had been tuned to perform best on the problem; in such a setup, adding regularization can obviously lead to very large positive changes, which does not confirm that additive noise in particular can double a model's performance. Clarity, Quality, Novelty And Reproducibility The paper is written clearly and is easy to follow. The novelty of the paper is questionable. For example, [1] already considered different types of noise in 1996, showing that noise performs regularization. The claim of equivalence to dropout is not correct, and equivalence to data noise does not require a proof. Analogously, the claim about using the same noise during inference and training does not require a proof. Moreover, the original inference scheme of dropout is sampling; the rescaling approach was proposed only because sampling is computationally expensive, so inference with sampling is not novel either. The supplementary material includes code, so the results are reproducible with high probability. [1] G. An, "The effects of adding noise during backpropagation training on a generalization performance," Neural Computation, 1996.
ICLR
Title ScheduleNet: Learn to Solve MinMax mTSP Using Reinforcement Learning with Delayed Reward Abstract There has been continuous effort to learn to solve famous CO problems such as the Traveling Salesman Problem (TSP) and the Vehicle Routing Problem (VRP) using reinforcement learning (RL). Although these approaches have shown good optimality and computational efficiency, they have been limited to scheduling a single vehicle. MinMax mTSP, the focus of this study, is the problem of minimizing the total completion time for multiple workers to complete geographically distributed tasks. Solving MinMax mTSP using RL raises significant challenges because one needs to train a distributed scheduling policy that induces cooperative strategic routings using only a single delayed and sparse reward signal (the makespan). In this study, we propose ScheduleNet, which can solve mTSP with any number of salesmen and cities. ScheduleNet represents a state (a partial solution to mTSP) as a set of graphs and employs type-aware graph node embeddings to derive a cooperative and transferable scheduling policy. Additionally, to effectively train ScheduleNet with the sparse and delayed reward (makespan), we propose an RL training scheme, Clipped REINFORCE with a "target net," which significantly stabilizes training and improves generalization performance. We have empirically shown that the proposed method achieves performance comparable to Google OR-Tools, a highly optimized meta-heuristic baseline. 1 INTRODUCTION There have been numerous approaches to solving combinatorial optimization (CO) problems using machine learning. Bengio et al. (2020) have categorized these approaches into demonstration and experience. In the demonstration setting, supervised learning is employed to mimic the behavior of an existing expert (e.g., exact solvers or heuristics). In the experience setting, reinforcement learning (RL) is typically employed to learn a parameterized policy that can solve newly given target problems without direct supervision. While a demonstration policy cannot outperform its guiding expert, an RL-based policy can outperform the expert because it improves its policy using a reward signal. Concurrently, Mazyavkina et al. (2020) have further categorized the RL approaches into improvement and construction heuristics. Improvement heuristics start from an arbitrary (complete) solution of the CO problem and iteratively improve it with the learned policy until the improvement stops (Chen & Tian, 2019; Ahn et al., 2019). Construction heuristics, on the other hand, start from an empty solution and incrementally extend the partial solution using a learned sequential decision-making policy until it becomes complete. There has been continuous effort to learn to solve famous CO problems such as TSP and VRP using RL-based construction heuristics (Bello et al., 2016; Kool et al., 2018; Khalil et al., 2017; Nazari et al., 2018). Although they have shown good optimality and computational efficiency, these approaches have been limited to scheduling only a single vehicle. The multi-agent extensions of these routing problems, such as multiple TSP and multiple VRP, are underrepresented in the deep learning research community, even though they capture a broader set of real-world problems and pose a more significant scientific challenge.
The multiple traveling salesmen problem (mTSP) aims to determine a set of subroutes, one for each salesman, given m salesmen, N cities that each need to be visited by one of the salesmen, and a depot where the salesmen are initially located and to which they return. The objective of mTSP is either minimizing the sum of subroute lengths (MinSum) or minimizing the length of the longest subroute (MinMax). In general, the MinMax objective is more practical, as one seeks to visit all cities as soon as possible (i.e., total completion time minimization). In contrast, the MinSum formulation generally leads to highly imbalanced solutions where one of the salesmen visits most of the cities, resulting in a longer total completion time (Lupoaie et al., 2019). In this study, we propose a learning-based decentralized and sequential decision-making algorithm for solving the MinMax mTSP problem; the trained policy, which is a construction heuristic, can be employed to solve mTSP instances with any number of salesmen and cities. Learning a transferable mTSP solver in a construction heuristic framework is significantly more challenging than for its single-agent variants (TSP and CVRP) because (1) we need a state representation flexible enough to represent an arbitrary number of salesmen and cities; (2) we need to introduce coordination among multiple agents to complete the geographically distributed tasks as quickly as possible using a sequential and decentralized decision-making strategy; and (3) we need to learn such a decentralized cooperative policy using only a delayed and sparse reward signal, the makespan, which is revealed only at the end of the episode. To tackle such a challenging task, we formulate mTSP as a semi-MDP and derive a decentralized decision-making policy in a multi-agent reinforcement learning framework using only a sparse and delayed episodic reward signal. The major components of the proposed method and their importance are summarized as follows:
• Decentralized cooperative decision-making strategy: Decentralization of the scheduling policy is essential to ensure that the learned policy can schedule mTSP problems of any size in a scalable manner; the decentralized policy maps the local observation of each idle salesman to one of its feasible individual actions, while the joint policy maps the global state to the joint scheduling actions.
• State representation using type-aware graph attention (TGA): The proposed method represents a state (a partial solution to mTSP) as a set of graphs, each of which captures specific relationships among workers, cities, and a depot. The method then employs TGA to compute the node embeddings for all nodes (salesmen and cities), which are used to sequentially assign idle salesmen to unvisited cities.
• Training a decentralized policy using a single delayed shared reward signal: Training a decentralized cooperative strategy using a single sparse and delayed reward is extremely difficult in that we need to distribute the credit of a single scalar reward (the makespan) over time and over agents. To resolve this, we propose a stable MARL training scheme which significantly stabilizes training and improves generalization performance.
We have empirically shown that the proposed method achieves performance comparable to Google OR-Tools, a highly optimized meta-heuristic baseline. The proposed approach outperforms OR-Tools in many cases on in-training and out-of-training problem distributions, as well as on real-world problem instances.
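As a concrete illustration of the MinMax versus MinSum objectives discussed above, a short sketch (our illustration, not the paper's code) that computes both costs for a fixed set of subroutes might look as follows; routes and coords are hypothetical containers:

```python
import math

def route_length(route, coords):
    """Length of the closed tour visiting coords[i] for i in route."""
    return sum(math.dist(coords[a], coords[b])
               for a, b in zip(route, route[1:] + route[:1]))

def minmax_cost(routes, coords):
    """Makespan objective: length of the longest subroute."""
    return max(route_length(r, coords) for r in routes)

def minsum_cost(routes, coords):
    """Total-distance objective: sum of all subroute lengths."""
    return sum(route_length(r, coords) for r in routes)
```

A solution minimizing minsum_cost can leave one salesman with most of the cities, whereas minimizing minmax_cost forces balanced subroutes, matching the imbalance argument above.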
We also verified that ScheduleNet can provide an efficient routing service to customers. 2 RELATED WORK Construction RL approaches A seminal body of work has focused on the construction approach in the RL setting for solving CO problems (Bello et al., 2016; Nazari et al., 2018; Kool et al., 2018; Khalil et al., 2017). These approaches utilize an encoder-decoder architecture that first encodes the problem structure into a hidden embedding and then autoregressively decodes the complete solution. Bello et al. (2016) utilized an LSTM-based encoder (Hochreiter & Schmidhuber, 1997) and decoded the complete solution (tour) using the Pointer Network scheme (Vinyals et al., 2015). Since routing tasks are often represented as graphs, Nazari et al. (2018) proposed an attention-based encoder while using an LSTM decoder. Recently, Kool et al. (2018) proposed a Transformer-like architecture (Vaswani et al., 2017) to solve several variants of TSP and single-vehicle CVRP. In contrast, Khalil et al. (2017) do not use an encoder-decoder architecture but a single graph embedding model, structure2vec (Dai et al., 2016), that embeds a partial solution of the TSP and outputs the next city in the (sub)tour. Kang et al. (2019) extended structure2vec to random graphs and employed this random graph embedding to solve identical parallel machine scheduling problems, which seek to minimize the makespan by scheduling multiple machines. Learned mTSP solvers Machine learning approaches for solving mTSP date back to Hopfield & Tank (1985). However, these early approaches require per-problem-instance training (Hopfield & Tank, 1985; Wacholder et al., 1989; Somhom et al., 1999). Among the recent learning methods, Kaempfer & Wolf (2018) encode MinSum mTSP with a set-specialized variant of the Transformer architecture that uses permutation-invariant pooling layers. To obtain a feasible solution, they use a combination of the softassign method (Gold & Rangarajan, 1996) and a beam search. Their model is trained in a supervised setting using mTSP solutions obtained by an Integer Linear Programming (ILP) solver. Hu et al. (2020) utilize a GNN encoder and a self-attention (Vaswani et al., 2017) policy that outputs, for each city, a probability of assignment to each salesman. Once cities are assigned to specific salesmen, they use an existing TSP solver, OR-Tools (Perron & Furnon), to obtain each worker's subroutes. Their method shows impressive scalability in terms of the number of cities, as they present results for mTSP instances with 1000 cities and ten workers. However, the trained model is not scalable in terms of the number of workers and can only solve mTSP problems with a pre-specified, fixed number of workers. 3 PROBLEM FORMULATION We define the set of m salesmen indexed by V_T = {1, 2, ..., m}, and the set of N cities indexed by V_C = {m+1, m+2, ..., m+N}. Following mTSP conventions, we define the first city as the depot. We also denote the 2D coordinates of the entities (salesmen, cities, and the depot) by p_i. The objective of MinMax mTSP is to minimize the length of the longest subtour among the salesmen, subject to the constraints that the subtours cover all cities and that every salesman's subtour ends at the depot. For clarity of explanation, we will refer to salesmen as workers and to cities as tasks. 3.1 MDP FORMULATION FOR MINMAX MTSP In this paper, the objective is to construct an optimal solution with a construction RL approach. Thus, we cast the solution construction process of MinMax mTSP as a Markov decision process (MDP).
The components of the proposed MDP are as follows. Transition The proposed MDP transitions based on events. We define an event as the case where any worker reaches its assigned city. We enumerate events with the index τ to avoid confusion with the elapsed time of the mTSP problem; t(τ) is a function that returns the time of event τ. In the proposed event-based transition setup, the state transitions coincide with the sequential expansion of the partial scheduling solution. State Each entity i has its own state $s^i_\tau = (p^i_\tau, \mathbb{1}^{\text{active}}_\tau, \mathbb{1}^{\text{assigned}}_\tau)$ at the τ-th event. The coordinates p^i_τ are time-dependent for workers and static for tasks and the depot. The indicator 1^active_τ describes whether the entity is active or inactive. In the case of tasks, inactive indicates that the task has already been visited; in the case of a worker, inactive means that the worker has returned to the depot. Similarly, 1^assigned_τ indicates whether a worker is assigned to a task or not. We also define the environment state s^env_τ, which contains the current time of the environment and the sequence of tasks visited by each worker, i.e., the partial solution of the mTSP. The state of the MDP at the τ-th event becomes $s_\tau = (\{s^i_\tau\}_{i=1}^{m+N}, s^{\text{env}}_\tau)$. The first state s_0 corresponds to the empty solution of the given problem instance, i.e., no cities have been visited and all salesmen are at the depot. The terminal state s_T corresponds to a complete solution of the given mTSP instance, i.e., when every task has been visited and every worker has returned to the depot (see Figure 1). Action A scheduling action a_τ is defined as a worker-to-task assignment, i.e., the salesman has to visit the assigned city. Reward We formulate the problem in a delayed reward setting. Specifically, the sparse reward function is defined as r(s_τ) = 0 for all non-terminal events and r(s_T) = t(T), where T is the index of the terminal state. In other words, a single reward signal, obtained only at the terminal state, equals the makespan of the problem instance. 4 SCHEDULENET Given the MDP formulation for MinMax mTSP, we propose ScheduleNet, which recommends a scheduling action a_τ given the current state G_τ represented as a graph, i.e., π_θ(a_τ | G_τ). ScheduleNet first represents a state (a partial solution of mTSP) as a set of graphs, each of which captures specific relationships among workers, tasks, and a depot. ScheduleNet then employs type-aware graph attention (TGA) to compute the node embeddings and uses them to determine the next assignment action (see Figure 2). 4.1 WORKER-TASK GRAPH REPRESENTATION Whenever an event occurs and the global state s_τ of the MDP is updated at τ, ScheduleNet constructs a directed complete graph G_τ = (V, E) out of s_τ, where V = V_T ∪ V_C is the set of nodes and E is the set of edges. We drop the time index τ to simplify the notation, since the following operations apply only to the given time step. The nodes and edges and their associated features are defined as:
• v_i denotes the node corresponding to entity i in the mTSP problem. The node feature x_i for v_i is equal to the state s^i_τ of entity i. In addition, k_i denotes the type of node v_i. For instance, if entity i is a worker and its 1^active_τ = 1, then k_i becomes the active-worker type.
• e_ij denotes the edge between source node v_j and destination node v_i, representing the relationship between the two. The edge feature w_ij is equal to the Euclidean distance between the two nodes.
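The worker-task graph construction of Section 4.1 can be sketched as follows; this is a simplified illustration under our own container conventions, using the node-type names of Appendix A.1, and not the authors' implementation:

```python
import math

def build_graph(workers, tasks, depot):
    """workers/tasks: lists of dicts with keys 'pos', 'active', 'assigned'.
    Returns typed nodes and a directed complete edge set whose feature is
    the Euclidean distance between the endpoints."""
    nodes = []
    for w in workers:
        if not w['active']:
            k = 'inactive-worker'
        else:
            k = 'assigned-worker' if w['assigned'] else 'unassigned-worker'
        nodes.append({'pos': w['pos'], 'type': k})
    for t in tasks:
        if not t['active']:
            k = 'inactive-city'
        else:
            k = 'assigned-city' if t['assigned'] else 'unassigned-city'
        nodes.append({'pos': t['pos'], 'type': k})
    nodes.append({'pos': depot, 'type': 'depot'})
    edges = {(i, j): math.dist(nodes[i]['pos'], nodes[j]['pos'])
             for i in range(len(nodes)) for j in range(len(nodes)) if i != j}
    return nodes, edges
```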
4.2 TYPE-AWARE GRAPH ATTENTION EMBEDDING In this section, we describe the type-aware graph attention (TGA) embedding procedure. We denote by h_i and h_ij the node and edge embeddings, respectively, at a given time step, and by h′_i and h′_ij the embeddings updated by TGA. A single iteration of TGA embedding consists of three phases: (1) edge update, (2) message aggregation, and (3) node update. Type-aware Edge update Given the node embeddings h_i for v_i ∈ V and the edge embeddings h_ij for e_ij ∈ E, ScheduleNet computes the updated edge embedding h′_ij and the attention logit z_ij as:

$h'_{ij} = \mathrm{TGA}_E([h_i, h_j, h_{ij}], k_j), \quad z_{ij} = \mathrm{TGA}_A([h_i, h_j, h_{ij}], k_j)$   (1)

where TGA_E and TGA_A are, respectively, the type-aware edge update function and the type-aware attention function, which are defined for the specific type k_j of the source node v_j. The updated edge feature h′_ij can be thought of as the message from the source node v_j to the destination node v_i, and the attention logit z_ij will be used to compute the importance of this message. In computing the updated edge feature (message), TGA_E and TGA_A first compute the "type-aware" edge encoding u_ij, which can be seen as a dynamic edge feature varying with the source node type, to effectively model the complex type-aware relationships among the nodes. Using the computed encoding u_ij, these two functions then compute the updated edge feature and attention logit via a multiplicative interaction (MI) layer (Jayakumar et al., 2019). The use of the MI layer significantly reduces the number of parameters to learn without discarding the expressiveness of the embedding procedure. The detailed architectures of TGA_E and TGA_A are provided in Appendix A.4. Type-aware Message aggregation The distribution of node types in mTSP graphs is highly imbalanced, i.e., the number of task-type nodes is much larger than the number of worker-type nodes. This imbalance is problematic specifically during the message aggregation of the GNN, since permutation-invariant aggregation functions tend to ignore messages from few-but-important nodes in the graph. To alleviate this issue, we propose the following type-aware message aggregation scheme. We first define the type-k neighborhood of node v_i as the set of k-typed source nodes connected to the destination node v_i, i.e., $N_k(i) = \{v_l \mid k_l = k,\ \forall v_l \in N(i)\}$, where N(i) is the in-neighborhood set of node v_i containing the nodes connected to v_i by incoming edges. Node v_i aggregates messages from same-type source nodes separately. For example, the aggregated message m^k_i from k-type source nodes is computed as:

$m^k_i = \sum_{j \in N_k(i)} \alpha_{ij} h'_{ij}$   (2)

where α_ij is the attention score computed from the attention logits as:

$\alpha_{ij} = \frac{\exp(z_{ij})}{\sum_{j' \in N_k(i)} \exp(z_{ij'})}$   (3)

Finally, the aggregated per-type messages are concatenated to produce the total aggregated message m_i for node v_i:

$m_i = \mathrm{concat}(\{m^k_i \mid k \in K\})$   (4)

Type-aware Node update The aggregated message m_i for node v_i is then used to compute the updated node embedding h′_i using the type-aware graph node update function TGA_V:

$h'_i = \mathrm{TGA}_V([h_i, m_i], k_i)$   (5)
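A condensed sketch of one TGA iteration (Eqs. 1-5) follows. The SimpleMI gating layer is our simplification of the multiplicative interaction layer of Jayakumar et al. (2019), and the node update omits the type-conditioning of TGA_V for brevity; this is an illustration, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class SimpleMI(nn.Module):
    """Context-conditioned layer: the context generates a gating vector."""
    def __init__(self, d_in, d_ctx, d_out):
        super().__init__()
        self.proj = nn.Linear(d_in, d_out)
        self.gate = nn.Linear(d_ctx, d_out)

    def forward(self, x, c):
        return self.proj(x) * torch.sigmoid(self.gate(c))

class TGALayer(nn.Module):
    def __init__(self, d, n_types):
        super().__init__()
        self.etype = nn.Embedding(n_types, d)          # stands in for MLP_etype
        self.mi_edge = SimpleMI(3 * d, d, d)           # MI_edge
        self.edge_mlp = nn.Linear(d, d)                # MLP_edge
        self.attn_mlp = nn.Linear(d, 1)                # MLP_attn
        self.node_mlp = nn.Linear(d + n_types * d, d)  # MLP_node (simplified)

    def forward(self, h, h_e, src, dst, types):
        # (1) type-aware edge update: message from source j to destination i
        c = self.etype(types[src])
        u = self.mi_edge(torch.cat([h[dst], h[src], h_e], dim=-1), c)
        h_e_new = self.edge_mlp(u)
        z = self.attn_mlp(u).squeeze(-1)
        # (2) softmax attention and aggregation per destination node and type
        n, k = h.size(0), self.etype.num_embeddings
        m = h.new_zeros(n, k, h.size(1))
        for i in range(n):
            for t in range(k):
                sel = (dst == i) & (types[src] == t)
                if sel.any():
                    a = torch.softmax(z[sel], dim=0)
                    m[i, t] = (a.unsqueeze(-1) * h_e_new[sel]).sum(dim=0)
        # (3) node update from the concatenated per-type messages
        h_new = self.node_mlp(torch.cat([h, m.flatten(1)], dim=-1))
        return h_new, h_e_new
```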
4.3 ASSIGNMENT PROBABILITY COMPUTATION The ScheduleNet model consists of two type-aware graph embedding layers that utilize the embedding procedure explained above. The first embedding layer, raw-2-hid, encodes the initial node and edge features x_i and w_ij of the (full) graph G_τ to obtain initial hidden node and edge features h^(0)_i and h^(0)_ij, respectively. We define the target subgraph G^s_τ as the subset of nodes and edges of the original (full) graph G_τ that includes only the target-worker (unassigned-worker) node and all unassigned-city nodes. The second embedding layer, hid-2-hid, embeds the target subgraph G^s_τ H times. In other words, the hidden node and edge embeddings h^(0)_i and h^(0)_ij are iteratively updated H times to obtain the final hidden embeddings h^(H)_i and h^(H)_ij, respectively. The final hidden embeddings are then used to make the worker-to-task assignment decision. Specifically, the probability of assigning target worker i to task j is computed as

$y_{ij} = \mathrm{MLP}_{\text{actor}}([h^{(H)}_i; h^{(H)}_j; h^{(H)}_{ij}]), \quad p_{ij} = \mathrm{softmax}(\{y_{ij}\}_{j \in A(G_\tau)})$   (6)

where h^(H)_i and h^(H)_ij are the final hidden node and edge embeddings, respectively. In addition, A(G_τ) denotes the set of feasible actions, defined as $\{v_j \mid k_j = \text{“unassigned-task”},\ \forall j \in V\}$. 5 TRAINING SCHEDULENET In this section, we describe the training scheme of ScheduleNet. First, we explain the reward normalization scheme used to reduce the variance of the reward. Second, we introduce a stable RL training scheme which significantly stabilizes the training process. Makespan normalization As mentioned in Section 3.1, we use the makespan of the mTSP as the only reward signal for training the RL agent. We denote the makespan of a given policy π by M(π). We observe that the makespan M(π) is highly volatile, depending on the problem size (number of cities and salesmen), the topology of the map, and the policy. To reduce the variance of the reward, we propose the following normalization scheme:

$m(\pi, \pi_b) = \frac{M(\pi_b) - M(\pi)}{M(\pi_b)}$   (7)

where π and π_b are the evaluation and baseline policies, respectively. The normalized makespan m(π, π_b) is similar to that of Kool et al. (2018), but we additionally divide the performance difference by the makespan of the baseline policy, which further reduces the variance induced by the size of the mTSP instance. From the normalized terminal reward m(π, π_b), we compute the normalized return as follows:

$G_\tau(\pi, \pi_b) := \gamma^{T-\tau} m(\pi, \pi_b)$   (8)

where T is the index of the terminal state and γ is the discount factor. The normalized return G_τ(π, π_b) becomes smaller and converges to (near) zero as τ decreases. From the perspective of the RL agent, this allows the agent to treat the current policy as neutral relative to the baseline policy during the early phase of the MDP trajectory. This is natural, since judging the relative quality of a policy is hard in the early phase of the MDP. Stable RL training It is well known that the solution quality of CO problems, including the makespan of mTSP, is extremely sensitive to the action selection, which prevents stable policy learning. To address this problem, we propose clipped REINFORCE, a variant of PPO without a learned value function. We empirically found that it is hard to train the value function¹, so we use the normalized returns G_τ(π_θ, π_b) directly. The objective of clipped REINFORCE is then given as follows:

$L(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{\tau=0}^{T} \min\big(\mathrm{clip}(\rho_\tau, 1-\epsilon, 1+\epsilon)\, G_\tau(\pi_\theta, \pi_b),\ \rho_\tau G_\tau(\pi_\theta, \pi_b)\big)\right]$   (9)

where

$\rho_\tau = \frac{\pi_\theta(a_\tau | G_\tau)}{\pi_{\theta_{\text{old}}}(a_\tau | G_\tau)}$   (10)

and (G_τ, a_τ) ∼ π_θ is the state-action marginal following π_θ, and π_θold is the old policy. ¹Note that the value function is trained to predict the makespan of the state so as to serve as an advantage estimator. Due to the combinatorial nature of mTSP, the makespan (the target of the value function) is highly volatile, which makes training the value function hard; we discuss this further in the experiment section.
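A minimal sketch of the normalized return (Eqs. 7-8), the clipped REINFORCE loss (Eq. 9, with the ratio ρ_τ of Eq. 10), and the Polyak smoothing used in the training details below; the log-probability interface and default constants are assumptions on our part:

```python
import torch

def normalized_returns(makespan, makespan_baseline, T, gamma=0.7):
    """G_tau = gamma^(T - tau) * (M(pi_b) - M(pi)) / M(pi_b), tau = 0..T."""
    m = (makespan_baseline - makespan) / makespan_baseline
    taus = torch.arange(T + 1, dtype=torch.float32)
    return (gamma ** (T - taus)) * m

def clipped_reinforce_loss(logp, logp_old, G, eps=0.2):
    """Negated clipped objective; logp/logp_old hold per-event log-probs."""
    rho = torch.exp(logp - logp_old.detach())   # importance ratio, Eq. 10
    obj = torch.min(torch.clamp(rho, 1 - eps, 1 + eps) * G, rho * G)
    return -obj.sum()

@torch.no_grad()
def polyak_update(smoothed, trained, beta=0.95):
    """phi <- beta * phi + (1 - beta) * theta."""
    for p_s, p_t in zip(smoothed.parameters(), trained.parameters()):
        p_s.mul_(beta).add_((1 - beta) * p_t)
```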
Training detail We use the greedy version of the current policy as the baseline policy π_b. After updating the policy π_θ, we smooth its parameters with the Polyak average (Polyak & Juditsky, 1992) to further stabilize training. The pseudocode of training and the network architecture are given in Appendix A.5.1. 6 EXPERIMENTS We train ScheduleNet on mTSP instances whose number m of workers and number N of tasks are sampled from m ∼ U(2, 4) and N ∼ U(10, 20), respectively. The trained ScheduleNet policy is then evaluated on various datasets, including randomly generated uniform mTSP datasets, mTSPLib (mTS), randomly generated uniform TSP datasets, TSPLib, and TSP (dai). See the Appendix for further training details. 6.1 MTSP RESULTS Random mTSP results We first investigate the generalization performance of ScheduleNet on randomly generated uniform maps with varying numbers of tasks and workers. We report the results of OR-Tools and four 2Phase heuristics: 2Phase Nearest Insertion (NI), 2Phase Farthest Insertion (FI), 2Phase Random Insertion (RI), and 2Phase Nearest Neighbor (NN). The 2Phase heuristics construct subtours by (1) clustering cities with a clustering algorithm and (2) applying a TSP heuristic within each cluster. Implementation details are provided in the appendix. Table 1 shows that ScheduleNet overall produces a slightly longer makespan than OR-Tools, even for the large-sized mTSP instances. As the complexity of the target mTSP instance increases, the gap between ScheduleNet and OR-Tools decreases, with cases where ScheduleNet even outperforms OR-Tools. To further clarify, ScheduleNet has the potential to beat OR-Tools on both small and large cases, as shown in Figure 3. These results empirically show that ScheduleNet, although trained on small-sized mTSP instances, can solve large-scale problems well. Notably, on the large-scale maps, the 2Phase heuristics show general effectiveness due to the uniformity of the city positions, which naturally motivates us to consider more realistic problems, as discussed in the following section. mTSPLib results The trained ScheduleNet is employed to solve the benchmark problems in mTSPLib, without additional training, to validate the generalization capability of ScheduleNet on unseen mTSP instances, where the problem structure can be completely different from the instances used during training. Table 2 compares the performance of ScheduleNet to other baseline models, including CPLEX (optimal solutions), OR-Tools, and other meta-heuristics (Lupoaie et al., 2019): self-organizing map (SOM), ant colony optimization (ACO), and evolutionary algorithm (EA). We report the best known upper bound for the CPLEX results whenever the optimal solution is not known. OR-Tools generally shows promising results. Interestingly, OR-Tools even discovers solutions better than the known upper bounds (e.g., eil76 with m = 5, 7 and rat99 with m = 5). This is possible because, for large cases, the search space of the exact method, CPLEX, easily becomes prohibitively large. Our method shows the second-best performance, following OR-Tools.
The winning heuristic methods, 2Phase-NI/RI, show drastic performance degradation on the mTSPLib maps. It is noteworthy that our method, even in the zero-shot setting, performs better than the meta-heuristic methods, which perform optimization to solve each benchmark problem. Computational times The mixed-integer linear programming (MILP) formulation of mTSP quickly becomes intractable due to the exponential growth of the search space, namely the subtour elimination constraints (SEC), as the number of workers increases. The computational gain of (meta-)heuristics, including the proposed method and OR-Tools, originates from effective heuristics that trim out possible tours. The computational time of ScheduleNet increases linearly with the number of workers m for a fixed number of tasks N, due to the MDP formulation of mTSP. In contrast, we found that the computation time of OR-Tools depends on m and N, and also on the graph topology. As a result, ScheduleNet becomes faster than OR-Tools for large instances, as shown in Figure 6. 6.2 EFFECTIVENESS OF THE PROPOSED TRAINING SCHEME Figure 5 compares the training curves of ScheduleNet and its variants. We first show the effectiveness of the proposed sparse reward compared to two dense reward functions: the distance reward and the distance-utilization reward. The distance reward is defined as the negative distance between the current worker position and the assigned city; this reward function is often used for solving TSP (Dai et al., 2016). The distance-utilization reward is defined as the distance reward divided by the number of active workers; it aims to minimize the (sub)tour distances while maximizing the utilization of the workers. The proposed sparse reward is the only reward function that trains ScheduleNet stably and achieves the minimal gaps, as shown in Figure 5 [Left]. We also validate the effectiveness of clipped REINFORCE compared to its actor-critic counterpart, PPO. We use the same network architecture as the clipped REINFORCE model for the actor and critic of the PPO model. Contrary to common belief, the actor-critic method (PPO) is not superior to the actor-only method (clipped REINFORCE), as shown in Figure 5 [Right]. We hypothesize that this is because the training target of the critic (the sampled makespan) is highly volatile and multimodal, as visualized in Figure 4, and the value prediction error would deteriorate the policy due to Bellman error propagation in the actor-critic setup, as discussed in Fujimoto et al. (2018). 7 CONCLUSION We proposed ScheduleNet for solving MinMax mTSP, the problem of minimizing the total completion time for multiple workers to complete geographically distributed tasks. The use of type-aware graphs and the specially designed TGA graph node embedding allows the trained ScheduleNet policy to induce coordinated strategic subroutes of the workers and to transfer well to unseen mTSP instances with any number of workers and tasks. We have empirically shown that the proposed method achieves performance comparable to Google OR-Tools, a highly optimized meta-heuristic baseline. All in all, this study has shown the potential of ScheduleNet to effectively schedule multiple vehicles in large-scale, practical, real-world applications. A APPENDIX A.1 DETAILS OF MDP TRANSITION AND GRAPH FORMULATION Event-based MDP transition The formulated semi-MDP for ScheduleNet is event-based.
Thus, whenever all workers are assigned to cities, the environment advances in time until any of the workers arrives at its city (i.e., completes the task). The arrival of a worker at its city is the event trigger; meanwhile, the other assigned workers are still on their way to their correspondingly assigned cities. We assume that each worker travels towards its assigned city with unit speed in the 2D Euclidean space, i.e., the distance travelled by each worker equals the time elapsed between two consecutive MDP events. Graph formulation In total, our graph formulation includes seven mutually exclusive node types: (1) assigned-worker, (2) unassigned-worker, (3) inactive-worker, (4) assigned-city, (5) unassigned-city, (6) inactive-city, and (7) depot. Here, the set of active workers (cities) is defined as the union of assigned and unassigned workers (cities). An inactive-city node refers to a city that has already been visited, while an inactive-worker node refers to a worker that has finished its route and returned to the depot. A.2 DETAILS OF IMPLEMENTATION 2phase mTSP heuristics The 2phase heuristics for mTSP are extensions of well-known TSP heuristics to the m > 1 case. First, we perform K-means spatial clustering of the cities in the mTSP instance, with K = m. Next, we apply a TSP insertion heuristic (Nearest Insertion, Farthest Insertion, Random Insertion, or Nearest Neighbour Insertion) to each cluster of cities; a sketch of this procedure is given below. It should be noted that the performance of the 2phase heuristics is highly dependent on the spatial distribution of the cities on the map. Thus, the 2phase heuristics perform particularly well on uniformly distributed random instances, where K-means clustering obtains clusters with approximately the same number of cities per cluster.
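A minimal sketch of the 2phase nearest-neighbour variant described above, assuming scikit-learn's KMeans as the clustering step (one plausible choice, not necessarily the authors'):

```python
import math
import numpy as np
from sklearn.cluster import KMeans

def two_phase_nn(cities, depot, m):
    """cities: (N, 2) array of coordinates. Returns one subroute per worker,
    each built greedily by the nearest-neighbour rule within its cluster."""
    labels = KMeans(n_clusters=m, n_init=10).fit_predict(cities)
    routes = []
    for k in range(m):
        remaining = list(np.where(labels == k)[0])
        pos, route = depot, []
        while remaining:
            nxt = min(remaining, key=lambda i: math.dist(pos, cities[i]))
            route.append(int(nxt))
            remaining.remove(nxt)
            pos = cities[nxt]
        routes.append(route)
    return routes
```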
Proximal Policy Optimization Our implementation of PPO closely follows the standard implementation of PPO2 from stable-baselines (Hill et al., 2018) with default hyperparameters, with modifications to allow for distributed training with a parameter server. A.3 COMPUTATION TIME Figure 6 shows the computation time curves as a function of the number of cities (left) and the number of workers (right). Overall, ScheduleNet is faster than OR-Tools, and the difference in computation speed only increases with the problem size. Additionally, ScheduleNet's computation time depends only on the problem size (N + m), whereas the computation time of OR-Tools depends on both the size of the problem and the topology of the underlying mTSP instance. In other words, the number of solutions searched by OR-Tools varies depending on the underlying problem. Another computational and practical advantage of ScheduleNet is its scalability in the number of workers: the computational complexity of ScheduleNet increases only linearly with the number of workers. In contrast, the search space of meta-heuristic algorithms increases drastically with the number of workers, possibly due to the exponentially increasing number of Subtour Elimination Constraints (SEC). In particular, we observed that OR-Tools decreases the search space by deactivating some of the workers, i.e., not utilizing all possible partial solutions (subtours). As a result, Figure 6 shows that the computation time of OR-Tools actually decreases due to deactivating some workers, at the expense of decreased solution quality. A.4 DETAILS OF TYPE-AWARE GRAPH ATTENTION EMBEDDING In this section, we thoroughly describe the type-aware graph embedding procedure. Similar to the main text, we overload notation for simplicity: h_i and h_ij denote the input node and edge features, and h′_i and h′_ij the embedded node and edge features, respectively. The proposed graph embedding step consists of three phases: (1) type-aware edge update, (2) type-aware message aggregation, and (3) type-aware node update. Type-aware Edge update The edge update scheme is designed to reflect the complex type relationships among the entities while updating edge features. First, the context embedding c_ij of edge e_ij is computed from the source node type k_j:

$c_{ij} = \mathrm{MLP}_{\text{etype}}(k_j)$   (11)

where MLP_etype is the edge type encoder, which embeds the source node type into the context embedding c_ij. Next, the type-aware edge encoding u_ij is computed using the Multiplicative Interaction (MI) layer (Jayakumar et al., 2019):

$u_{ij} = \mathrm{MI}_{\text{edge}}([h_i; h_j; h_{ij}], c_{ij})$   (12)

where MI_edge is the edge MI layer. We utilize the MI layer, which dynamically generates its parameters depending on the context c_ij, to produce the "type-aware" edge encoding u_ij and to effectively model the complex type relationships among the nodes. The encoding u_ij can be seen as a dynamic edge feature which varies depending on the source node type. The updated edge embedding h′_ij and its attention logit z_ij are then obtained as:

$h'_{ij} = \mathrm{MLP}_{\text{edge}}(u_{ij})$   (13)
$z_{ij} = \mathrm{MLP}_{\text{attn}}(u_{ij})$   (14)

where MLP_edge and MLP_attn are the edge updater and the logit function, respectively; they produce the updated edge embedding and the logits from the type-aware edge encoding. The computation steps of equations 11, 12, and 13 define TGA_E; similarly, the computation steps of equations 11, 12, and 14 define TGA_A. Message aggregation First, we define the type-k neighborhood of node v_i as $N_k(i) = \{v_l \mid k_l = k,\ \forall v_l \in N(i)\}$, where N(i) is the in-neighborhood set of node i; i.e., the type-k neighborhood corresponds to the edges heading to node i whose source nodes have type k. The proposed type-aware message aggregation procedure computes the attention score α_ij for the edge e_ij, which starts from node j and heads to node i, as:

$\alpha_{ij} = \frac{\exp(z_{ij})}{\sum_{l \in N_{k_j}(i)} \exp(z_{il})}$   (15)

Intuitively speaking, the proposed attention scheme normalizes the attention logits of incoming edges over the types; the attention scores therefore sum to 1 over each type-k neighborhood. Next, the type-k neighborhood message m^k_i for node v_i is computed as:

$m^k_i = \sum_{j \in N_k(i)} \alpha_{ij} h'_{ij}$   (16)

In this aggregation step, the incoming messages of node i are aggregated type-wise. Finally, all incoming type-neighborhood messages are concatenated to produce the (inter-type) aggregated message m_i for node v_i:

$m_i = \mathrm{concat}(\{m^k_i \mid k \in K\})$   (17)

Node update Similar to the edge update phase, first the context embedding c_i is computed for each node v_i:

$c_i = \mathrm{MLP}_{\text{ntype}}(k_i)$   (18)

where MLP_ntype is the node type encoder. Then, the updated hidden node embedding h′_i is computed as:

$h'_i = \mathrm{MLP}_{\text{node}}(h_i, u_i)$   (19)

where $u_i = \mathrm{MI}_{\text{node}}(m_i, c_i)$ is the type-aware node encoding produced by the MI_node layer from the aggregated message m_i and the context embedding c_i. The computation steps of equations 18 and 19 define TGA_V. The overall TGA computation procedure is illustrated in Figure 7. A.5 DETAILS OF SCHEDULENET TRAINING A.5.1 TRAINING PSEUDOCODE In this section, we present the pseudocode for training ScheduleNet.
Algorithm 1: ScheduleNet Training
Input: training policy π_θ
Output: smoothed policy π_φ
Initialize the smoothed policy with parameters φ ← θ
for each update step do
    Generate a random mTSP instance I
    for each episode do
        Construct the mTSP MDP from the instance I
        π_b ← argmax(π_θ)
        Collect samples with π_θ and π_b from the mTSP MDP
    π_θold ← π_θ
    for K inner update steps do
        θ ← θ + α ∇_θ L(θ)
    φ ← βφ + (1 − β)θ

A.5.2 HYPERPARAMETERS In this section, we fully specify the hyperparameters of ScheduleNet. Network Architecture We use the same hyperparameters for the raw-2-hid TGA layer and the hid-2-hid TGA layer. MLP_etype and MLP_ntype each have one hidden layer with 32 neurons, and their output dimensions are both 32. Both MI layers have 64-dimensional outputs. MLP_edge, MLP_attn, and MLP_node have 2 hidden layers with 32 neurons each. MLP_actor has 2 hidden layers with 128 neurons each. We use ReLU activation functions for all hidden layers. The number of hidden graph embedding steps H is two. Training We use a discount factor γ of 0.7. We use Adam (Kingma & Ba, 2014) with a learning rate of 0.001. We set the clipping parameter ε to 0.2. We sample 40 independent mTSP trajectories per gradient update. We clip the gradient whenever its norm is larger than 0.5. The number of inner update steps K is three. The smoothing parameter β is 0.95. A.6 TRANSFERABILITY TEST ON TSP (m = 1) The trained ScheduleNet has been employed to solve random TSP instances. Because ScheduleNet can schedule any number m of workers, setting m = 1 allows it to solve TSP instances without further training. Table 3 shows the results of this transferability experiment: the trained ScheduleNet solves random TSP instances reasonably well, although it has never been exposed to such instances. Note that as the size of the TSP increases, the gap between ScheduleNet and the other models becomes smaller. If ScheduleNet were trained on TSP instances with m = 1, the performance could be improved further; however, we deliberately did not run that experiment, in order to check transferability over different types of routing problems with different objectives. A.7 EXTENDED MTSPLIB RESULTS The table below reports makespans on mTSPLib instances for CPLEX, OR-Tools (OR), ScheduleNet (SN), the meta-heuristics SOM, ACO, and EA, and the 2phase heuristics; the final row ("gap") reports performance relative to CPLEX (CPLEX = 1.00).

Instance | m | CPLEX | OR | SN | SOM | ACO | EA | 2phase NI | 2phase FI | 2phase RI | 2phase NN
eil51 | 2 | 222.73 | 243.02 | 259.67 | 278.44 | 248.76 | 276.62 | 271.25 | 311.26 | 265.85 | 387.20
eil51 | 3 | 159.57 | 170.05 | 172.16 | 210.25 | 180.59 | 208.16 | 202.85 | 218.71 | 195.89 | 222.13
eil51 | 5 | 123.96 | 127.50 | 118.94 | 157.68 | 135.09 | 151.21 | 183.53 | 180.21 | 150.49 | 210.75
eil51 | 7 | 112.07 | 112.07 | 112.42 | 136.84 | 119.96 | 123.88 | 129.65 | 144.11 | 127.72 | 147.81
berlin52 | 2 | 4110.21 | 4665.47 | 4816.30 | 5350.83 | 4388.99 | 5038.33 | 5941.03 | 6605.05 | 5785.00 | 6519.51
berlin52 | 3 | 3244.37 | 3311.31 | 3372.14 | 4197.61 | 3468.90 | 3865.45 | 3811.49 | 4037.10 | 4133.85 | 3581.21
berlin52 | 5 | 2441.39 | 2482.57 | 2615.57 | 3461.93 | 2733.56 | 2853.63 | 2972.57 | 4037.10 | 4108.58 | 3581.21
berlin52 | 7 | 2440.92 | 2440.92 | 2576.04 | 3125.21 | 2510.09 | 2543.73 | 2972.57 | 3033.00 | 2998.20 | 3198.18
eil76 | 2 | 280.85 | 318.00 | 334.10 | 364.02 | 308.53 | 365.72 | 363.21 | 403.56 | 395.15 | 373.75
eil76 | 3 | 197.34 | 212.41 | 226.54 | 278.63 | 224.56 | 285.43 | 302.10 | 279.33 | 276.58 | 357.77
eil76 | 5 | 150.30 | 143.38 | 168.03 | 210.69 | 163.93 | 211.91 | 191.41 | 204.16 | 185.77 | 197.54
eil76 | 7 | 139.62 | 128.31 | 151.31 | 183.09 | 146.88 | 177.83 | 173.81 | 172.94 | 155.54 | 161.36
rat99 | 2 | 728.75 | 762.19 | 789.98 | 927.36 | 767.15 | 896.72 | 916.55 | 965.94 | 890.86 | 929.82
rat99 | 3 | 587.17 | 552.09 | 579.28 | 756.08 | 620.45 | 739.43 | 802.84 | 802.88 | 843.03 | 809.90
rat99 | 5 | 469.25 | 473.66 | 502.49 | 624.38 | 525.54 | 596.87 | 668.60 | 645.91 | 675.39 | 641.89
rat99 | 7 | 443.91 | 442.47 | 471.67 | 564.14 | 492.13 | 534.91 | 554.19 | 577.00 | 565.12 | 504.71
gap | - | 1.00 | 1.03 | 1.08 | 1.31 | 1.09 | 1.24 | 1.30 | 1.38 | 1.31 | 1.40
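For readers who prefer code over pseudocode, the following Python skeleton mirrors Algorithm 1 of Appendix A.5.1; sample_instance, rollout, and clipped_reinforce_loss_on are hypothetical placeholders, and polyak_update is the helper sketched in the training snippet of Section 5:

```python
import copy

def train_schedulenet(policy, optimizer, n_updates, n_episodes=40, K=3, beta=0.95):
    smoothed = copy.deepcopy(policy)                    # phi <- theta
    for _ in range(n_updates):
        instance = sample_instance()                    # random mTSP instance I
        batch = [rollout(instance, policy, greedy_baseline=True)
                 for _ in range(n_episodes)]            # samples with pi_theta, pi_b
        old_policy = copy.deepcopy(policy)              # pi_theta_old
        for _ in range(K):                              # K inner updates
            loss = clipped_reinforce_loss_on(batch, policy, old_policy)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        polyak_update(smoothed, policy, beta)           # phi <- b*phi + (1-b)*theta
    return smoothed
```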
1. What is the focus of the paper regarding the mTSP problem?
2. What are the strengths of the proposed algorithm, particularly in handling the problem as a single RL problem?
3. Do you have any concerns about the numerical results and comparisons with other baselines?
4. How does the reviewer assess the novelty and practicality of the proposed approach in solving mTSP?
5. Are there any suggestions or recommendations for future improvements or research directions related to the topic?
Review
Review The mTSP with the goal of minimizing the longest route (the MinMax objective) is considered. This objective results in a set of routes of balanced length, which, compared to minimizing the sum of route lengths, yields a more practical result. A graph representation based on the workers and assigned tasks is defined, and then a type-aware graph attention (TGA) embedding procedure is proposed, which obtains embeddings for the node and edge representations. The state of each entity consists of its 2D coordinates and boolean indicators of the worker's idleness and task assignment. The action is a worker-to-task assignment, and the reward is the makespan of finishing the problem, which is a sparse function. In the type-aware graph attention (TGA) embedding, node and edge embeddings are computed and, using the attention mechanism, an importance weight is obtained for the embedding of each edge. The embedding functions are type-dependent, which means the embedding takes the source node type into account. The final message value for each node is obtained by multiplying the weights and edge embedding values of its neighbors. Then, an MLP that takes the final values of the source node, target node, and the edge between them as input is used to obtain a value for each source node and possible target node; the output of the MLP is then used to get the final probability of choosing the next node. Major comment: The proposed algorithm looks interesting. Specifically, this problem can be modeled as a multi-agent cooperative RL problem, and this paper suggests a model that handles it as a single RL problem. (For example, see paper [1], which suggests a multi-agent approach for a similar problem.) However, the numerical results do not yet suggest a competitive algorithm: there are several baselines that obtain a smaller objective in less time, so it does not yet make sense to solve mTSP with an RL method. Note that this is not the case for VRP, since the current non-learning, non-commercial algorithms are not powerful enough to solve even medium-size problems with 50 nodes. [1] Zhang, Ke, et al. "Multi-Vehicle Routing Problems with Soft Time Windows: A Multi-Agent Reinforcement Learning Approach." arXiv preprint arXiv:2002.05513 (2020). Minor comment: The citation of mTSPLib is missing.
ICLR
Title ScheduleNet: Learn to Solve MinMax mTSP Using Reinforcement Learning with Delayed Reward Abstract There has been continuous effort to learn to solve famous CO problems such as Traveling Salesman Problem (TSP) and Vehicle Routing Problem (VRP) using reinforcement learning (RL). Although they have shown good optimality and computational efficiency, these approaches have been limited to scheduling a singlevehicle. MinMax mTSP, the focus of this study, is the problem seeking to minimize the total completion time for multiple workers to complete the geographically distributed tasks. Solving MinMax mTSP using RL raises significant challenges because one needs to train a distributed scheduling policy inducing the cooperative strategic routings using only the single delayed and sparse reward signal (makespan). In this study, we propose the ScheduleNet that can solve mTSP with any numbers of salesmen and cities. The ScheduleNet presents a state (partial solution to mTSP) as a set of graphs and employs type aware graph node embeddings for deriving the cooperative and transferable scheduling policy. Additionally, to effectively train the ScheduleNet with sparse and delayed reward (makespan), we propose an RL training scheme, Clipped REINFORCE with ”target net,” which significantly stabilizes the training and improves the generalization performance. We have empirically shown that the proposed method achieves the performance comparable to Google OR-Tools, a highly optimized meta-heuristic baseline. 1 INTRODUCTION There have been numerous approaches to solve combinatorial optimization (CO) problems using machine learning. Bengio et al. (2020) have categorized these approaches into demonstration and experience. In demonstration setting, supervised learning has been employed to mimic the behavior of the existing expert (e.g., exact solvers or heuristics). On the other hand, in the experience setting, typically, reinforcement learning (RL) has been employed to learn a parameterized policy that can solve newly given target problems without direct supervision. While the demonstration policy cannot outperform its guiding expert, RL-based policy can outperform the expert because it improves its policy using a reward signal. Concurrently, Mazyavkina et al. (2020) have further categorized the RL approaches into improvement and construction heuristics. An improvement heuristics start from the arbitrary (complete) solution of the CO problem and iteratively improve it with the learned policy until the improvement stops (Chen & Tian, 2019; Ahn et al., 2019). On the other hand, the construction heuristics start from the empty solution and incrementally extend the partial solution using a learned sequential decision-making policy until it becomes complete. There has been continuous effort to learn to solve famous CO problems such as Traveling Salesman Problem (TSP) and Vehicle Routing Problem (VRP) using RL-based construction heuristics (Bello et al., 2016; Kool et al., 2018; Khalil et al., 2017; Nazari et al., 2018). Although they have shown good optimality and computational efficiency performance, these approaches have been limited to only scheduling a single-vehicle. The multi-extensions of these routing problems, such as multiple TSP and multiple VRP, are underrepresented in the deep learning research community, even though they capture a broader set of the real-world problems and pose a more significant scientific challenge. 
The multiple traveling salesmen problem (mTSP) aims to determine a set of subroutes for each salesman, given m salesmen and N cities that need to be visited by one of the salesmen, and a depot where salesmen are initially located and to which they return. The objective of a mTSP is either minimizing the sum of subroute lengths (MinSum) or minimizing the length of the longest subroute (MinMax). In general, the MinMax objective is more practical, as one seeks to visit all cities as soon as possible (i.e., total completion time minimization). In contrast, the MinSum formulation, in general, leads to highly imbalanced solutions where one of the salesmen visits most of the cities, which results in longer total completion time (Lupoaie et al., 2019). In this study, we propose a learning-based decentralized and sequential decision-making algorithm for solving Minmax mTSP problem; the trained policy, which is a construction heuristic, can be employed to solve mTSP instances with any numbers of salesman and cities. Learning a transferable mTSP solver in a construction heuristic framework is significantly challenging comparing to its single-agent variants (TSP and CVRP) because (1) we need to use the state representation that is flexible enough to represent any arbitrary number of salesman and cities (2) we need to introduce the coordination among multiple agents to complete the geographically distributed tasks as quickly as possible using a sequential and decentralized decision making strategy and (3) we need to learn such decentralized cooperative policy using only a delayed and sparse reward signal, makespan, that is revealed only at the end of the episode. To tackle such a challenging task, we formulate mTSP as a semi-MDP and derive a decentralized decision making policy in a multi-agent reinforcement learning framework using only a sparse and delayed episodic reward signal. The major components of the proposed method and their importance are summarized as follows: • Decentralized cooperative decision-making strategy: Decentralization of scheduling policy is essential to ensure the learned policy can be employed to schedule any size of mTSP problems in a scalable manner; decentralized policy maps local observation of each idle salesman one of feasible individual action while joint policy maps the global state to the joint scheduling actions. • State representation using type-award graph attention (TGA): the proposed method represents a state (partial solution to mTSP) as a set of graphs, each of which captures specific relationships among works, cities, and a depot. The proposed method then employs TGA to compute the node embeddings for all nodes (salesman and cities), which are used to assign idle salesman to an unvisited city sequentially. • Training decentralized policy using a single delayed shared reward signal: Training decentralized cooperative strategy using a single sparse and delayed reward is extremely difficult in that we need to distribute credits of a single scalar reward (makespan) over the time and agents. To resolve this, we propose a stable MARL training scheme which significantly stabilizes the training and improves the generalization performance. We have empirically shown that the proposed method achieves the performance comparable to Google OR-Tools, a highly optimized meta-heuristic baseline. The proposed approach outperforms OR-Tools in many cases on in-training, out-of-training problem distributions, and real-world problem instances. 
We also verified that scheduleNet can provide an efficient routing service to customers. 2 RELATED WORK Construction RL approaches A seminal body of work focused on the construction approach in the RL setting for solving CO problems (Bello et al., 2016; Nazari et al., 2018; Kool et al., 2018; Khalil et al., 2017). These approaches utilize encoder-decoder architecture, that encodes the problem structure into a hidden embedding first, and then autoregressively decodes the complete solution. Bello et al. (2016) utilized LSTM (Hochreiter & Schmidhuber, 1997) based encoder and decode the complete solution (tour) using Pointer Network (Vinyals et al., 2015) scheme. Since the routing tasks are often represented as graphs, Nazari et al. (2018) proposed an attention based encoder, while using LSTM decoder. Recently, Kool et al. (2018) proposed to use Transformer-like architecture (Vaswani et al., 2017) to solve several variants of TSP and single-vehicle CVRP. On the contrary, Khalil et al. (2017) do not use encoder-decoder architecture, but a single graph embedding model, structure2vec (Dai et al., 2016), that embeds a partial solution of the TSP and outputs the next city in the (sub)tour. (Kang et al., 2019) has extended structure2vec to random graph and employed this random graph embedding to solve identical parallel machine scheduling problems, the problem seeking to minimize the makespan by scheduling multiple machines. Learned mTSP solvers The machine learning approaches for solving mTSP date back to Hopfield & Tank (1985). However, these approaches require per problem instance training. (Hopfield & Tank, 1985; Wacholder et al., 1989; Somhom et al., 1999). Among the recent learning methods, Kaempfer & Wolf (2018) encodes MinSum mTSP with a set-specialized variant of Transformer architecture that uses permutation invariant pooling layers. To obtain the feasible solution, they use a combination of the softassign method Gold & Rangarajan (1996) and a beam search. Their model is trained in a supervised setting using mTSP solutions obtained by Integer Linear Programming (ILP) solver. Hu et al. (2020) utilizes a GNN encoder and self-attention Vaswani et al. (2017) policy outputs a probability of assignment to each salesman per city. Once cities are assigned to specific salesmen, they use existing TSP solver, OR-Tools (Perron & Furnon), to obtain each worker’s subroutes. Their method shows impressive scalability in terms of the number of cities, as they present results for mTSP instances with 1000 cities and ten workers. However, the trained model is not scalable in terms of the number of workers and can only solve mTSP problems with a pre-specified, fixed number of workers. 3 PROBLEM FORMULATION We define the set of m salesmen indexed by VT = {1, 2, ...,m}, and the set of N cities indexed by VC = {m + 1, 2, ...,m + N}. Following mTSP conventions, we define the first city as the depot. We also define the 2D-coordinates of entities (salesmen, cities, and the depot) as pi. The objective of MinMax mTSP is to minimize the length of the longest subtour of salesmen, while subtours covers all cities and all subtours of salesmen end at the depot. For the clarity of explanation, we will refer to salesman as a workers, and cities as a tasks. 3.1 MDP FORMULATION FOR MINMAX MTSP In this paper, the objective is to construct an optimal solution with a construction RL approach. Thus, we cast the solution construction process of MinMax mTSP as a Markov decision process (MDP). 
The components of the proposed MDP are as follows. Transition The proposed MDP transits based on events. We define an event as the the case where any worker reaches its assigned city. We enumerate the event with the index τ for avoiding confusion from the elapsed time of the mTSP problem. t(τ) is a function that returns the time of event τ . In the proposed event-based transition setup, the state transitions coincide with the sequential expansion of the partial scheduling solution. State Each entity i has its own state siτ = ( piτ ,1 active τ ,1 assigned τ ) at the τ -th event. the coordinates piτ is time-dependent for workers and static for tasks and the depot. Indicator 1activeτ describes whether the entity is active or inactive In case of tasks, inactive indicates that the task is already visited; in case of worker, inactive means that worker returned to the depot. Similarly, 1assignedτ indicates whether worker is assigned to a task or not. We also define the environment state senvτ that contains the current time of the environment, and the sequence of tasks visited by each worker, i.e., partial solution of the mTSP. The state sτ of the MDP at the τ -th event becomes sτ = ( {siτ}m+Ni=1 , senvτ ) . The first state s0 corresponds to the empty solution of the given problem instance, i.e., no cities have been visited, and all salesmen are in the depot. The terminal state sT corresponds to a complete solution of the given mTSP instance, i.e., when every task has been visited, and every worker returned to the depot (See Figure 1). Action A scheduling action aτ is defined as the worker-to-task assignment, i.e. salesman has to visit the assigned city. Reward. We formulate the problem in a delayed reward setting. Specifically, the sparse reward function is defined as r(sτ ) = 0 for all non-terminal events, and r(sT) = t(T), where T is the index of the terminal state. In other words, a single reward signal, which is obtained only for the terminal state, is equals to the makespan of the problem instance. 4 SCHEDULENET Given the MDP formulation for MinMax mTSP, we propose ScheduleNet that can recommend a scheduling action aτ given the current state Gτ represented as a graph, i.e., πθ(aτ |Gτ ). The SchedulNet first presents a state (partial solution of mTSP) as a set of graphs, each of which captures specific relationships among workers, tasks, and a depot. Then ScheduleNet employs type-aware graph attention (TGA) to compute the node embeddings and use the computed node embeddings to determine the next assignment action (See figure 2). 4.1 WORKER-TASK GRAPH REPRESENTATION Whenever an event occurs and the global state sτ of the MDP is updated at τ , ScheduleNet constructs a directed complete graph Gτ = (V,E) out of sτ , where V = VT ∪ VC is the set of nodes and E is the set of edges. We drop the time iterator τ to simplify the notations since the following operations only for the given time step. The nodes and edges and their associated features are defined as: • vi denotes the node corresponding entity i in mTSP problem. The node feature xi for vi is equal to the state siτ of entity i. In addition, ki denote the type of node vi. For instance, if the entity i is worker and its 1activeτ = 1, then the ki becomes active-worker type. • eij denotes the edge between between source node vj and destination node vi, representing the relationships between the two. The edge feature wij is equal to the Euclidean distance between the two nodes. 
4.2 TYPE-AWARE GRAPH ATTENTION EMBEDDING

In this section, we describe the type-aware graph attention (TGA) embedding procedure. We denote by $h_i$ and $h_{ij}$ the node and edge embeddings, respectively, at a given time step, and by $h'_i$ and $h'_{ij}$ the embeddings updated by TGA. A single iteration of TGA embedding consists of three phases: (1) edge update, (2) message aggregation, and (3) node update.

Type-aware edge update
Given the node embeddings $h_i$ for $v_i \in V$ and the edge embeddings $h_{ij}$ for $e_{ij} \in E$, ScheduleNet computes the updated edge embedding $h'_{ij}$ and the attention logit $z_{ij}$ as:

$h'_{ij} = \mathrm{TGA}_E([h_i, h_j, h_{ij}],\, k_j), \qquad z_{ij} = \mathrm{TGA}_A([h_i, h_j, h_{ij}],\, k_j)$  (1)

where $\mathrm{TGA}_E$ and $\mathrm{TGA}_A$ are, respectively, the type-aware edge update function and the type-aware attention function, which are defined for the specific type $k_j$ of the source node $v_j$. The updated edge feature $h'_{ij}$ can be thought of as the message from the source node $v_j$ to the destination node $v_i$, and the attention logit $z_{ij}$ will be used to compute the importance of this message. In computing the updated edge feature (message), $\mathrm{TGA}_E$ and $\mathrm{TGA}_A$ first compute the "type-aware" edge encoding $u_{ij}$, which can be seen as a dynamic edge feature that varies depending on the source node type, to effectively model the complex type-aware relationships among the nodes. Using the computed encoding $u_{ij}$, these two functions then compute the updated edge feature and the attention logit through a multiplicative interaction (MI) layer (Jayakumar et al., 2019). The use of the MI layer significantly reduces the number of parameters to learn without sacrificing the expressiveness of the embedding procedure. The detailed architectures of $\mathrm{TGA}_E$ and $\mathrm{TGA}_A$ are provided in Appendix A.4.

Type-aware message aggregation
The distribution of node types in mTSP graphs is highly imbalanced, i.e., the number of task-type nodes is much larger than the number of worker-type ones. This imbalance is problematic during the message aggregation of a GNN, since permutation-invariant aggregation functions tend to ignore messages from few-but-important nodes in the graph. To alleviate this issue, we propose the following type-aware message aggregation scheme. We first define the type-$k$ neighborhood of node $v_i$ as the set of $k$-typed source nodes connected to the destination node $v_i$, i.e., $N_k(i) = \{v_l \mid k_l = k,\ \forall v_l \in N(i)\}$, where $N(i)$ is the in-neighborhood set of node $v_i$ containing the nodes connected to $v_i$ by incoming edges. Node $v_i$ aggregates messages from source nodes of the same type separately. For example, the aggregated message $m^k_i$ from $k$-typed source nodes is computed as:

$m^k_i = \sum_{j \in N_k(i)} \alpha_{ij} h'_{ij}$  (2)

where $\alpha_{ij}$ is the attention score computed from the attention logits as:

$\alpha_{ij} = \dfrac{\exp(z_{ij})}{\sum_{l \in N_k(i)} \exp(z_{il})}$  (3)

Finally, all per-type aggregated messages are concatenated to produce the total aggregated message $m_i$ for node $v_i$:

$m_i = \mathrm{concat}(\{m^k_i \mid k \in K\})$  (4)

Type-aware node update
The aggregated message $m_i$ for node $v_i$ is then used to compute the updated node embedding $h'_i$ via the type-aware node update function $\mathrm{TGA}_V$:

$h'_i = \mathrm{TGA}_V([h_i, m_i],\, k_i)$  (5)
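To make the aggregation concrete, the following is a minimal sketch of equations (2)-(4) for a single destination node. Tensor shapes, the type strings, and the function name are our own assumptions, not the paper's implementation; messages are softmax-weighted within each type and the per-type results are concatenated, so a lone worker node is not drowned out by many task nodes.

```python
import torch

def type_aware_aggregate(h_edge, z, src_types, all_types):
    """Per-type attention aggregation for one destination node v_i.

    `h_edge` holds the updated incoming edge embeddings h'_ij
    (shape [n_in, d]), `z` their attention logits (shape [n_in]), and
    `src_types[j]` the type of the j-th source node.
    """
    messages = []
    for k in all_types:
        mask = torch.tensor([t == k for t in src_types])
        if mask.any():
            alpha = torch.softmax(z[mask], dim=0)                          # eq. (3)
            messages.append((alpha.unsqueeze(-1) * h_edge[mask]).sum(0))   # eq. (2)
        else:
            messages.append(torch.zeros(h_edge.shape[-1]))
    return torch.cat(messages)                                             # eq. (4)

h_edge = torch.randn(5, 8)
z = torch.randn(5)
m_i = type_aware_aggregate(
    h_edge, z,
    src_types=["worker", "task", "task", "task", "depot"],
    all_types=["worker", "task", "depot"],
)
print(m_i.shape)  # torch.Size([24]): one 8-dim message per type, concatenated
```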
4.3 ASSIGNMENT PROBABILITY COMPUTATION

The ScheduleNet model consists of two type-aware graph embedding layers that utilize the embedding procedure explained in the section above. The first embedding layer, raw-2-hid, encodes the initial node and edge features $x_i$ and $w_{ij}$ of the (full) graph $G_\tau$ to obtain initial hidden node and edge features $h^{(0)}_i$ and $h^{(0)}_{ij}$, respectively. We define the target subgraph $G^s_\tau$ as the subset of nodes and edges of the original (full) graph $G_\tau$ that includes only the target-worker (unassigned-worker) node and all unassigned-city nodes. The second embedding layer, hid-2-hid, embeds the target subgraph $G^s_\tau$ $H$ times. In other words, the hidden node and edge embeddings $h^{(0)}_i$ and $h^{(0)}_{ij}$ are iteratively updated $H$ times to obtain the final hidden embeddings $h^{(H)}_i$ and $h^{(H)}_{ij}$, respectively. The final hidden embeddings are then used to make the worker-to-task assignment decision. Specifically, the probability of assigning the target worker $i$ to task $j$ is computed as:

$y_{ij} = \mathrm{MLP}_{\text{actor}}(h^{(H)}_i; h^{(H)}_j; h^{(H)}_{ij}), \qquad p_{ij} = \mathrm{softmax}(\{y_{ij}\}_{j \in A(G_\tau)})$  (6)

where $h^{(H)}_i$ and $h^{(H)}_{ij}$ are the final hidden node and edge embeddings, respectively, and $A(G_\tau)$ denotes the set of feasible actions, defined as $\{v_j \mid k_j = \text{"Unassigned-task"},\ \forall j \in V\}$.

5 TRAINING SCHEDULENET

In this section, we describe the training scheme of ScheduleNet. First, we explain the reward normalization scheme, which is used to reduce the variance of the reward. Second, we introduce a stable RL training scheme that significantly stabilizes the training process.

Makespan normalization
As mentioned in Section 3.1, we use the makespan of the mTSP as the only reward signal for training the RL agent. We denote the makespan of a given policy $\pi$ as $M(\pi)$. We observe that the makespan $M(\pi)$ is highly volatile, depending on the problem size (number of cities and salesmen), the topology of the map, and the policy. To reduce the variance of the reward, we propose the following normalization scheme:

$m(\pi, \pi_b) = \dfrac{M(\pi_b) - M(\pi)}{M(\pi_b)}$  (7)

where $\pi$ and $\pi_b$ are the evaluation and baseline policies, respectively. The normalized makespan $m(\pi, \pi_b)$ is similar to that of Kool et al. (2018), but we additionally divide the performance difference by the makespan of the baseline policy, which further reduces the variance induced by the size of the mTSP instance. From the normalized terminal reward $m(\pi, \pi_b)$, we compute the normalized return as follows:

$G_\tau(\pi, \pi_b) := \gamma^{T-\tau} m(\pi, \pi_b)$  (8)

where $T$ is the index of the terminal state and $\gamma$ is the discount factor. The normalized return $G_\tau(\pi, \pi_b)$ becomes smaller and converges to (near) zero as $\tau$ decreases. From the perspective of the RL agent, this treats the current policy as neutral relative to the baseline policy during the early phase of the MDP trajectory, which is natural since judging the relative goodness of a policy is hard early in the MDP.

Stable RL training
It is well known that the solution quality of CO problems, including the makespan of mTSP, is extremely sensitive to action selection, which prevents stable policy learning. To address this problem, we propose clipped REINFORCE, a variant of PPO without a learned value function. We empirically found that the value function is hard to train¹, so we use the normalized returns $G_\tau(\pi_\theta, \pi_b)$ directly. The objective of clipped REINFORCE is then given as:

$L(\theta) = \mathbb{E}_{\pi_\theta}\!\left[ \sum_{\tau=0}^{T} \min\!\big( \mathrm{clip}(\rho_\tau, 1-\epsilon, 1+\epsilon)\, G_\tau(\pi_\theta, \pi_b),\ \rho_\tau G_\tau(\pi_\theta, \pi_b) \big) \right]$  (9)

where

$\rho_\tau = \dfrac{\pi_\theta(a_\tau \mid G_\tau)}{\pi_{\theta_{\text{old}}}(a_\tau \mid G_\tau)}$  (10)

and $(G_\tau, a_\tau) \sim \pi_\theta$ is the state-action marginal following $\pi_\theta$, and $\pi_{\theta_{\text{old}}}$ is the old policy.

¹Note that the value function would be trained to predict the makespan of a state, to serve as an advantage estimator. Due to the combinatorial nature of the mTSP, this target (the makespan) is highly volatile, which makes training the value function hard. We discuss this further in the experiment section.
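Equations (7)-(10) can be summarized in a short sketch. This is our own minimal single-trajectory illustration in PyTorch (tensor shapes and names are assumptions, not the paper's code); in practice the expectation in equation (9) is taken over a batch of sampled trajectories, and the default $\gamma = 0.7$ and $\epsilon = 0.2$ follow the hyperparameters reported in Appendix A.5.2.

```python
import torch

def clipped_reinforce_loss(logp_new, logp_old, M_pi, M_b, gamma=0.7, eps=0.2):
    """Clipped REINFORCE objective for one sampled trajectory.

    logp_new / logp_old: log pi_theta(a_t|G_t) and log pi_theta_old(a_t|G_t)
    for the T+1 events of the trajectory (1-D tensors).
    M_pi, M_b: makespans of the sampled policy and the greedy baseline.
    """
    T = logp_new.shape[0] - 1
    m = (M_b - M_pi) / M_b                                  # eq. (7)
    taus = torch.arange(T + 1, dtype=logp_new.dtype)
    G = (gamma ** (T - taus)) * m                           # eq. (8)
    rho = torch.exp(logp_new - logp_old.detach())           # eq. (10)
    clipped = torch.clamp(rho, 1 - eps, 1 + eps)
    objective = torch.minimum(clipped * G, rho * G).sum()   # eq. (9)
    return -objective  # negated: optimizers minimize, eq. (9) is maximized

# Toy usage with random log-probabilities for a 6-event trajectory:
logp_new = torch.log(torch.rand(6)).requires_grad_()
logp_old = logp_new.detach().clone()
loss = clipped_reinforce_loss(logp_new, logp_old, M_pi=9.0, M_b=10.0)
loss.backward()
print(loss.item())
```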
Training detail
We use the greedy version of the current policy as the baseline policy $\pi_b$. After updating the policy $\pi_\theta$, we smooth its parameters with the Polyak average (Polyak & Juditsky, 1992) to further stabilize training. The training pseudocode and network architecture are given in Appendix A.5.1.

6 EXPERIMENTS

We train ScheduleNet using mTSP instances whose number $m$ of workers and number $N$ of tasks are sampled from $m \sim U(2, 4)$ and $N \sim U(10, 20)$, respectively. The trained ScheduleNet policy is then evaluated on various datasets, including randomly generated uniform mTSP datasets, mTSPLib (mTS), randomly generated uniform TSP datasets, TSPLib, and TSP (dai). See the Appendix for further training details.

6.1 MTSP RESULTS

Random mTSP results
We first investigate the generalization performance of ScheduleNet on randomly generated uniform maps with varying numbers of tasks and workers. We report the results of OR-Tools and the 2Phase heuristics: 2Phase Nearest Insertion (NI), 2Phase Farthest Insertion (FI), 2Phase Random Insertion (RI), and 2Phase Nearest Neighbor (NN). The 2Phase heuristics construct subtours by (1) clustering cities with a clustering algorithm and (2) applying a TSP heuristic within each cluster. Implementation details are provided in the appendix. Table 1 shows that ScheduleNet overall produces a slightly longer makespan than OR-Tools, even for large-sized mTSP instances. As the complexity of the target mTSP instance increases, the gap between ScheduleNet and OR-Tools decreases, and there are even cases where ScheduleNet outperforms OR-Tools. To further clarify, ScheduleNet has the potential to outperform OR-Tools on both small and large cases, as shown in Figure 3. These results empirically show that ScheduleNet, even though trained with small mTSP instances, can solve large-scale problems well. Notably, on large-scale maps, the 2Phase heuristics are generally effective due to the uniformity of the city positions. This naturally motivates us to consider more realistic problems, as discussed in the following section.

mTSPLib results
The trained ScheduleNet is employed to solve the benchmark problems in mTSPLib, without additional training, to validate its generalization capability on unseen mTSP instances whose problem structure can be completely different from the instances used during training. Table 2 compares the performance of ScheduleNet to other baseline models, including CPLEX (optimal solutions), OR-Tools, and several meta-heuristics (Lupoaie et al., 2019): self-organizing map (SOM), ant colony optimization (ACO), and an evolutionary algorithm (EA). We report the best known upper bound for CPLEX whenever the optimal solution is not known. OR-Tools generally shows promising results. Interestingly, OR-Tools even discovers solutions better than the known upper bounds (e.g., eil76 with m=5, 7 and rat99 with m=5). This is possible because, for large cases, the search space of the exact method, CPLEX, quickly becomes prohibitively large. Our method shows the second-best performance, following OR-Tools.
The heuristic methods that performed best on uniform maps, 2Phase-NI/RI, show drastic performance degradation on the mTSPLib maps. It is noteworthy that our method, even in the zero-shot setting, performs better than the meta-heuristic methods, which perform per-instance optimization on each benchmark problem.

Computational times
The mixed-integer linear programming (MILP) formulation of the mTSP quickly becomes intractable due to the exponential growth of the search space, namely the subtour elimination constraints (SEC), as the number of workers increases. The computational gain of (meta-)heuristics, including the proposed method and OR-Tools, originates from effective heuristics that trim the space of possible tours. The computation time of ScheduleNet increases linearly with the number of workers $m$ for a fixed number of tasks $N$, owing to the event-based MDP formulation of the mTSP. In contrast, we found that the computation time of OR-Tools depends on $m$ and $N$, and also on the graph topology. As a result, ScheduleNet becomes faster than OR-Tools on large instances, as shown in Figure 6.

6.2 EFFECTIVENESS OF THE PROPOSED TRAINING SCHEME

Figure 5 compares the training curves of ScheduleNet and its variants. We first show the effectiveness of the proposed sparse reward compared to two dense reward functions: the distance reward and the distance-utilization reward. The distance reward is defined as the negative distance between the current worker position and the assigned city; this reward function is often used for solving TSP (Dai et al., 2016). The distance-utilization reward is defined as the distance reward divided by the number of active workers; it aims to minimize the (sub)tour distances while maximizing the utilization of the workers. The proposed sparse reward is the only reward function that trains ScheduleNet stably and achieves the minimal gaps, as shown in Figure 5 [Left]. We also validate the effectiveness of clipped REINFORCE compared to its actor-critic counterpart, PPO. We use the same network architecture as the clipped REINFORCE model for the actor and critic of the PPO model. Contrary to common belief, the actor-critic method (PPO) is not superior to the actor-only method (clipped REINFORCE), as shown in Figure 5 [Right]. We hypothesize that this occurs because the training target of the critic (the sampled makespan) is highly volatile and multimodal, as visualized in Figure 4, and the value prediction error deteriorates the policy through Bellman error propagation in the actor-critic setup, as discussed in Fujimoto et al. (2018).

7 CONCLUSION

We proposed ScheduleNet for solving MinMax mTSP, the problem of minimizing the total completion time for multiple workers to complete geographically distributed tasks. The use of type-aware graphs and the specially designed TGA node embedding allows the trained ScheduleNet policy to induce coordinated, strategic subroutes for the workers and to transfer well to unseen mTSP instances with any numbers of workers and tasks. We have empirically shown that the proposed method achieves performance comparable to Google OR-Tools, a highly optimized meta-heuristic baseline. All in all, this study has shown the potential of ScheduleNet to schedule multiple vehicles in large-scale, practical, real-world applications.

A APPENDIX

A.1 DETAILS OF MDP TRANSITION AND GRAPH FORMULATION

Event-based MDP transition
The formulated semi-MDP for ScheduleNet is event-based. Thus, whenever all workers are assigned to cities, the environment advances in time until any of the workers arrives at its city (i.e., completes the task). The arrival of a worker at a city is the event trigger; meanwhile, the other assigned workers are still on their way to their correspondingly assigned cities. We assume that each worker moves towards its assigned city with unit speed in the 2D Euclidean space, i.e., the distance travelled by each worker equals the time elapsed between two consecutive MDP events. A minimal sketch of this transition is given below.
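The following minimal sketch illustrates the event-based transition under the unit-speed assumption; the data layout (plain coordinate tuples and per-worker targets) and function name are our own illustration, not the paper's code.

```python
import math

def advance_to_next_event(workers, targets):
    """Move every assigned worker until the first one reaches its city.

    `workers` maps worker id -> current (x, y); `targets` maps worker
    id -> assigned city (x, y). Returns the elapsed time dt and the id
    of the arriving worker (the event trigger); with unit speed, all
    other workers move dt units along the straight line to their own
    targets.
    """
    dists = {w: math.dist(p, targets[w]) for w, p in workers.items()}
    arriving = min(dists, key=dists.get)
    dt = dists[arriving]
    for w, (x, y) in workers.items():
        tx, ty = targets[w]
        d = dists[w]
        frac = 1.0 if d == 0 else min(dt / d, 1.0)
        workers[w] = (x + frac * (tx - x), y + frac * (ty - y))
    return dt, arriving

workers = {0: (0.0, 0.0), 1: (1.0, 1.0)}
targets = {0: (3.0, 0.0), 1: (1.0, 2.0)}
dt, who = advance_to_next_event(workers, targets)
print(dt, who, workers)  # worker 1 arrives after 1.0 time unit
```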
Graph formulation
In total, our graph formulation includes seven mutually exclusive node types: (1) assigned-worker, (2) unassigned-worker, (3) inactive-worker, (4) assigned-city, (5) unassigned-city, (6) inactive-city, and (7) depot. Here, the set of active workers (cities) is defined as the union of assigned and unassigned workers (cities). An inactive-city node refers to a city that has already been visited, while an inactive-worker node refers to a worker that has finished its route and returned to the depot.

A.2 DETAILS OF IMPLEMENTATION

2Phase mTSP heuristics
The 2Phase heuristics for mTSP are an extension of well-known TSP heuristics to the $m > 1$ case (a sketch is given at the end of this appendix section). First, we perform K-means spatial clustering of the cities in the mTSP instance, with $K = m$. Next, we apply a TSP insertion heuristic (Nearest Insertion, Farthest Insertion, Random Insertion, or Nearest Neighbour Insertion) to each cluster of cities. It should be noted that the performance of the 2Phase heuristics depends heavily on the spatial distribution of the cities on the map. Thus, the 2Phase heuristics perform particularly well on uniformly distributed random instances, where K-means clustering obtains clusters with approximately the same number of cities per cluster.

Proximal Policy Optimization
Our implementation of PPO closely follows the standard implementation of PPO2 from stable-baselines (Hill et al., 2018) with default hyperparameters, with modifications to allow for distributed training with a Parameter Server.

A.3 COMPUTATION TIME

Figure 6 shows the computation time curves as a function of the number of cities (left) and the number of workers (right). Overall, ScheduleNet is faster than OR-Tools, and the difference in computation speed only increases with the problem size. Additionally, ScheduleNet's computation time depends only on the problem size ($N + m$), whereas the computation time of OR-Tools depends on both the size of the problem and the topology of the underlying mTSP instance. In other words, the number of solutions searched by OR-Tools varies depending on the underlying problem. Another computational and practical advantage of ScheduleNet is its scalability in the number of workers: the computational complexity of ScheduleNet increases only linearly with the number of workers. On the other hand, the search space of meta-heuristic algorithms increases drastically with the number of workers, possibly due to the exponentially increasing number of Subtour Elimination Constraints (SEC). In particular, we observed that OR-Tools reduces its search space by deactivating some of the workers, i.e., not utilizing all possible partial solutions (subtours). As a result, Figure 6 shows that the computation time of OR-Tools actually decreases due to this deactivation of workers, at the expense of decreased solution quality.
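As referenced in Appendix A.2, here is a minimal sketch of the 2Phase idea: K-means clustering (phase 1) followed by a per-cluster subtour (phase 2). For brevity we show only the Nearest Neighbour variant; the insertion variants differ only in the per-cluster TSP heuristic. The array layout and function name are our own illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def two_phase_nn(depot, cities, m, seed=0):
    """2Phase heuristic sketch: cluster the cities into m groups, then
    build a nearest-neighbour subtour per cluster starting from the
    depot (each subtour implicitly returns to the depot afterwards)."""
    labels = KMeans(n_clusters=m, n_init=10, random_state=seed).fit_predict(cities)
    subtours = []
    for k in range(m):
        remaining = list(np.where(labels == k)[0])
        pos, tour = depot, []
        while remaining:  # greedily visit the closest unvisited city
            nxt = min(remaining, key=lambda i: np.linalg.norm(cities[i] - pos))
            tour.append(nxt)
            pos = cities[nxt]
            remaining.remove(nxt)
        subtours.append(tour)
    return subtours

rng = np.random.default_rng(0)
cities = rng.random((20, 2))
print(two_phase_nn(np.array([0.5, 0.5]), cities, m=3))
```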
A.4 DETAILS OF TYPE-AWARE GRAPH ATTENTION EMBEDDING

In this section, we describe the type-aware graph attention embedding procedure in detail. As in the main body, we overload notation for simplicity, denoting the input node and edge features by $h_i$ and $h_{ij}$, and the embedded node and edge features by $h'_i$ and $h'_{ij}$, respectively. The proposed graph embedding step consists of three phases: (1) type-aware edge update, (2) type-aware message aggregation, and (3) type-aware node update.

Type-aware edge update
The edge update scheme is designed to reflect the complex type relationships among the entities while updating edge features. First, the context embedding $c_{ij}$ of edge $e_{ij}$ is computed from the source node type $k_j$:

$c_{ij} = \mathrm{MLP}_{\text{etype}}(k_j)$  (11)

where $\mathrm{MLP}_{\text{etype}}$ is the edge type encoder, which embeds the source node type into the context embedding $c_{ij}$. Next, the type-aware edge encoding $u_{ij}$ is computed using the Multiplicative Interaction (MI) layer (Jayakumar et al., 2019):

$u_{ij} = \mathrm{MI}_{\text{edge}}([h_i; h_j; h_{ij}],\, c_{ij})$  (12)

where $\mathrm{MI}_{\text{edge}}$ is the edge MI layer. We utilize the MI layer, which dynamically generates its parameters depending on the context $c_{ij}$, to produce the type-aware edge encoding $u_{ij}$ and thereby effectively model the complex type relationships among the nodes. The encoding $u_{ij}$ can be seen as a dynamic edge feature that varies with the source node type. Then, the updated edge embedding $h'_{ij}$ and its attention logit $z_{ij}$ are obtained as:

$h'_{ij} = \mathrm{MLP}_{\text{edge}}(u_{ij})$  (13)
$z_{ij} = \mathrm{MLP}_{\text{attn}}(u_{ij})$  (14)

where $\mathrm{MLP}_{\text{edge}}$ and $\mathrm{MLP}_{\text{attn}}$ are the edge updater and the logit function, respectively, which produce the updated edge embedding and logit from the type-aware edge encoding. The computation steps of equations 11, 12, and 13 constitute $\mathrm{TGA}_E$; similarly, the computation steps of equations 11, 12, and 14 constitute $\mathrm{TGA}_A$.

Message aggregation
First, we define the type-$k$ neighborhood of node $v_i$ as $N_k(i) = \{v_l \mid k_l = k,\ \forall v_l \in N(i)\}$, where $N(i)$ is the in-neighborhood set of node $i$; i.e., the type-$k$ neighborhood collects the incoming edges of node $i$ whose source nodes have type $k$. The proposed type-aware message aggregation procedure computes the attention score $\alpha_{ij}$ for the edge $e_{ij}$, which starts from node $j$ and heads to node $i$, as:

$\alpha_{ij} = \dfrac{\exp(z_{ij})}{\sum_{l \in N_{k_j}(i)} \exp(z_{il})}$  (15)

Intuitively, the proposed attention scheme normalizes the attention logits of incoming edges within each type, so the attention scores sum to 1 over each type-$k$ neighborhood. Next, the type-$k$ neighborhood message $m^k_i$ for node $v_i$ is computed as:

$m^k_i = \sum_{j \in N_k(i)} \alpha_{ij} h'_{ij}$  (16)

In this aggregation step, the incoming messages of node $i$ are aggregated per type. Finally, all per-type neighborhood messages are concatenated to produce the (inter-type) aggregated message $m_i$ for node $v_i$:

$m_i = \mathrm{concat}(\{m^k_i \mid k \in K\})$  (17)

Node update
Similar to the edge update phase, the context embedding $c_i$ is first computed for each node $v_i$:

$c_i = \mathrm{MLP}_{\text{ntype}}(k_i)$  (18)

where $\mathrm{MLP}_{\text{ntype}}$ is the node type encoder. Then, the updated hidden node embedding $h'_i$ is computed as:

$h'_i = \mathrm{MLP}_{\text{node}}(h_i, u_i)$  (19)

where $u_i = \mathrm{MI}_{\text{node}}(m_i, c_i)$ is the type-aware node encoding produced by the $\mathrm{MI}_{\text{node}}$ layer from the aggregated message $m_i$ and the context embedding $c_i$. The computation steps of equations 18 and 19 constitute $\mathrm{TGA}_V$. The overall TGA computation procedure is illustrated in Figure 7.
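As a rough illustration of the context-conditioned transform in equations (11)-(14), the following sketch shows one simple form of a multiplicative-interaction layer, in the spirit of Jayakumar et al. (2019): the context vector (here, a type embedding) generates the weight matrix and bias applied to the input. The class name, dimensions, and initialization are our own assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MILayer(nn.Module):
    """Minimal multiplicative-interaction layer: the context c generates
    the affine transform (weights and bias) applied to the input x."""

    def __init__(self, x_dim, c_dim, out_dim):
        super().__init__()
        self.w_gen = nn.Linear(c_dim, out_dim * x_dim)  # c -> W(c)
        self.b_gen = nn.Linear(c_dim, out_dim)          # c -> b(c)
        self.out_dim, self.x_dim = out_dim, x_dim

    def forward(self, x, c):
        W = self.w_gen(c).view(-1, self.out_dim, self.x_dim)
        b = self.b_gen(c)
        return torch.bmm(W, x.unsqueeze(-1)).squeeze(-1) + b

# Edge update usage sketch (eqs. 11-14): the type embedding conditions
# the transform of the concatenated node/edge features.
n_edges, d = 4, 8
mi_edge = MILayer(x_dim=3 * d, c_dim=32, out_dim=64)
feats = torch.randn(n_edges, 3 * d)   # [h_i; h_j; h_ij]
c_ij = torch.randn(n_edges, 32)       # stand-in for MLP_etype(k_j)
u_ij = mi_edge(feats, c_ij)           # type-aware edge encoding
print(u_ij.shape)  # torch.Size([4, 64])
```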
A.5 DETAILS OF SCHEDULENET TRAINING

A.5.1 TRAINING PSEUDOCODE

In this section, we present the pseudocode for training ScheduleNet.

Algorithm 1: ScheduleNet Training
Input: training policy πθ
Output: smoothed policy πφ
  Initialize the smoothed policy with parameters φ ← θ.
  for each update step do
    Generate a random mTSP instance I.
    for each episode do
      Construct the mTSP MDP from instance I.
      πb ← argmax(πθ)  (greedy baseline)
      Collect samples with πθ and πb from the mTSP MDP.
    πθold ← πθ
    for K inner updates do
      θ ← θ + α∇θL(θ)
    φ ← βφ + (1 − β)θ

A.5.2 HYPERPARAMETERS

In this section, we fully specify the hyperparameters of ScheduleNet.

Network architecture
We use the same hyperparameters for the raw-2-hid TGA layer and the hid-2-hid TGA layer. MLP_etype and MLP_ntype each have one hidden layer with 32 neurons, and their output dimensions are both 32. Both MI layers have 64-dimensional outputs. MLP_edge, MLP_attn, and MLP_node have 2 hidden layers with 32 neurons each. MLP_actor has 2 hidden layers with 128 neurons each. We use ReLU activations for all hidden layers. The number of hidden graph embedding steps H is two.

Training
We use a discount factor γ of 0.7. We use Adam (Kingma & Ba, 2014) with a learning rate of 0.001. We set the clipping parameter ε to 0.2. We sample 40 independent mTSP trajectories per gradient update. We clip the gradient whenever its norm exceeds 0.5. The number of inner update steps K is three. The smoothing parameter β is 0.95.

A.6 TRANSFERABILITY TEST ON TSP (m = 1)

The trained ScheduleNet has been employed to solve random TSP instances. Because ScheduleNet can schedule any number m of workers, setting m = 1 allows it to solve TSP instances without further training. Table 3 shows the results of these transferability experiments: the trained ScheduleNet solves random TSP instances reasonably well, even though it has never been exposed to such instances during training. Note that as the size of the TSP increases, the gap between ScheduleNet and the other models becomes smaller. If ScheduleNet were trained on TSP instances with m = 1, its performance could be improved further; however, we deliberately did not run that experiment, in order to test transferability across routing problems with different objectives.

A.7 EXTENDED MTSPLIB RESULTS

Makespans on mTSPLib instances (SN denotes ScheduleNet; OR denotes OR-Tools; SOM, ACO, and EA are meta-heuristics; the 2Phase variants are heuristics):

| Instance | m | CPLEX | OR | SN | SOM | ACO | EA | 2Phase NI | 2Phase FI | 2Phase RI | 2Phase NN |
|---|---|---|---|---|---|---|---|---|---|---|---|
| eil51 | 2 | 222.73 | 243.02 | 259.67 | 278.44 | 248.76 | 276.62 | 271.25 | 311.26 | 265.85 | 387.20 |
| eil51 | 3 | 159.57 | 170.05 | 172.16 | 210.25 | 180.59 | 208.16 | 202.85 | 218.71 | 195.89 | 222.13 |
| eil51 | 5 | 123.96 | 127.5 | 118.94 | 157.68 | 135.09 | 151.21 | 183.53 | 180.21 | 150.49 | 210.75 |
| eil51 | 7 | 112.07 | 112.07 | 112.42 | 136.84 | 119.96 | 123.88 | 129.65 | 144.11 | 127.72 | 147.81 |
| berlin52 | 2 | 4110.21 | 4665.47 | 4816.3 | 5350.83 | 4388.99 | 5038.33 | 5941.03 | 6605.05 | 5785.00 | 6519.51 |
| berlin52 | 3 | 3244.37 | 3311.31 | 3372.14 | 4197.61 | 3468.9 | 3865.45 | 3811.49 | 4037.10 | 4133.85 | 3581.21 |
| berlin52 | 5 | 2441.39 | 2482.57 | 2615.57 | 3461.93 | 2733.56 | 2853.63 | 2972.57 | 4037.10 | 4108.58 | 3581.21 |
| berlin52 | 7 | 2440.92 | 2440.92 | 2576.04 | 3125.21 | 2510.09 | 2543.73 | 2972.57 | 3033.00 | 2998.20 | 3198.18 |
| eil76 | 2 | 280.85 | 318 | 334.1 | 364.02 | 308.53 | 365.72 | 363.21 | 403.56 | 395.15 | 373.75 |
| eil76 | 3 | 197.34 | 212.41 | 226.54 | 278.63 | 224.56 | 285.43 | 302.1 | 279.33 | 276.58 | 357.77 |
| eil76 | 5 | 150.3 | 143.38 | 168.03 | 210.69 | 163.93 | 211.91 | 191.41 | 204.16 | 185.77 | 197.54 |
| eil76 | 7 | 139.62 | 128.31 | 151.31 | 183.09 | 146.88 | 177.83 | 173.81 | 172.94 | 155.54 | 161.36 |
| rat99 | 2 | 728.75 | 762.19 | 789.98 | 927.36 | 767.15 | 896.72 | 916.55 | 965.94 | 890.86 | 929.82 |
| rat99 | 3 | 587.17 | 552.09 | 579.28 | 756.08 | 620.45 | 739.43 | 802.84 | 802.88 | 843.03 | 809.90 |
| rat99 | 5 | 469.25 | 473.66 | 502.49 | 624.38 | 525.54 | 596.87 | 668.6 | 645.91 | 675.39 | 641.89 |
| rat99 | 7 | 443.91 | 442.47 | 471.67 | 564.14 | 492.13 | 534.91 | 554.19 | 577.00 | 565.12 | 504.71 |
| gap | | 1 | 1.03 | 1.08 | 1.31 | 1.09 | 1.24 | 1.30 | 1.38 | 1.31 | 1.40 |
1. What is the focus of the paper on reinforcement learning?
2. What are the strengths and weaknesses of the proposed RL framework, particularly in its application to the MinMax multiple traveling salesman problem?
3. Do you have any concerns regarding the novelty of the proposed method, especially compared to existing works such as OR-Tools?
4. How can the authors improve their research to better demonstrate the effectiveness and uniqueness of their approach?
5. Are there any questions or areas that require further investigation in the context of this paper?
Review
The authors propose an RL framework called ScheduleNet, trained with clipped REINFORCE, for the MinMax multiple traveling salesman problem (MinMax mTSP); it uses a clipping idea to stabilize the learning process, as PPO does. The authors empirically show the feasibility of the proposed framework. Unfortunately, the proposed method performs worse than existing works, in particular OR-Tools, which decreases its merit significantly. In addition, it is hard to find a contribution in proposing a new RL method, since the stabilizing effect of clipped REINFORCE is shown only in a limited environment (only MinMax mTSP). Table 2 is incomplete. The nature of the "MinMax" objective should be more clearly represented and exploited: currently, the proposed RL framework seems to work for other formulations of mTSP as well, and it does not seem to exploit the MinMax nature. To show the novelty of the proposed method, it might be useful to devise and investigate actor-critic methods sharing the main idea; in the submission, only footnote 1 briefly mentions the difficulty of learning a value function. The behavior of ScheduleNet needs to be studied further, e.g., when the algorithm works well and when it does not.
We also verified that scheduleNet can provide an efficient routing service to customers. 2 RELATED WORK Construction RL approaches A seminal body of work focused on the construction approach in the RL setting for solving CO problems (Bello et al., 2016; Nazari et al., 2018; Kool et al., 2018; Khalil et al., 2017). These approaches utilize encoder-decoder architecture, that encodes the problem structure into a hidden embedding first, and then autoregressively decodes the complete solution. Bello et al. (2016) utilized LSTM (Hochreiter & Schmidhuber, 1997) based encoder and decode the complete solution (tour) using Pointer Network (Vinyals et al., 2015) scheme. Since the routing tasks are often represented as graphs, Nazari et al. (2018) proposed an attention based encoder, while using LSTM decoder. Recently, Kool et al. (2018) proposed to use Transformer-like architecture (Vaswani et al., 2017) to solve several variants of TSP and single-vehicle CVRP. On the contrary, Khalil et al. (2017) do not use encoder-decoder architecture, but a single graph embedding model, structure2vec (Dai et al., 2016), that embeds a partial solution of the TSP and outputs the next city in the (sub)tour. (Kang et al., 2019) has extended structure2vec to random graph and employed this random graph embedding to solve identical parallel machine scheduling problems, the problem seeking to minimize the makespan by scheduling multiple machines. Learned mTSP solvers The machine learning approaches for solving mTSP date back to Hopfield & Tank (1985). However, these approaches require per problem instance training. (Hopfield & Tank, 1985; Wacholder et al., 1989; Somhom et al., 1999). Among the recent learning methods, Kaempfer & Wolf (2018) encodes MinSum mTSP with a set-specialized variant of Transformer architecture that uses permutation invariant pooling layers. To obtain the feasible solution, they use a combination of the softassign method Gold & Rangarajan (1996) and a beam search. Their model is trained in a supervised setting using mTSP solutions obtained by Integer Linear Programming (ILP) solver. Hu et al. (2020) utilizes a GNN encoder and self-attention Vaswani et al. (2017) policy outputs a probability of assignment to each salesman per city. Once cities are assigned to specific salesmen, they use existing TSP solver, OR-Tools (Perron & Furnon), to obtain each worker’s subroutes. Their method shows impressive scalability in terms of the number of cities, as they present results for mTSP instances with 1000 cities and ten workers. However, the trained model is not scalable in terms of the number of workers and can only solve mTSP problems with a pre-specified, fixed number of workers. 3 PROBLEM FORMULATION We define the set of m salesmen indexed by VT = {1, 2, ...,m}, and the set of N cities indexed by VC = {m + 1, 2, ...,m + N}. Following mTSP conventions, we define the first city as the depot. We also define the 2D-coordinates of entities (salesmen, cities, and the depot) as pi. The objective of MinMax mTSP is to minimize the length of the longest subtour of salesmen, while subtours covers all cities and all subtours of salesmen end at the depot. For the clarity of explanation, we will refer to salesman as a workers, and cities as a tasks. 3.1 MDP FORMULATION FOR MINMAX MTSP In this paper, the objective is to construct an optimal solution with a construction RL approach. Thus, we cast the solution construction process of MinMax mTSP as a Markov decision process (MDP). 
The components of the proposed MDP are as follows. Transition The proposed MDP transits based on events. We define an event as the the case where any worker reaches its assigned city. We enumerate the event with the index τ for avoiding confusion from the elapsed time of the mTSP problem. t(τ) is a function that returns the time of event τ . In the proposed event-based transition setup, the state transitions coincide with the sequential expansion of the partial scheduling solution. State Each entity i has its own state siτ = ( piτ ,1 active τ ,1 assigned τ ) at the τ -th event. the coordinates piτ is time-dependent for workers and static for tasks and the depot. Indicator 1activeτ describes whether the entity is active or inactive In case of tasks, inactive indicates that the task is already visited; in case of worker, inactive means that worker returned to the depot. Similarly, 1assignedτ indicates whether worker is assigned to a task or not. We also define the environment state senvτ that contains the current time of the environment, and the sequence of tasks visited by each worker, i.e., partial solution of the mTSP. The state sτ of the MDP at the τ -th event becomes sτ = ( {siτ}m+Ni=1 , senvτ ) . The first state s0 corresponds to the empty solution of the given problem instance, i.e., no cities have been visited, and all salesmen are in the depot. The terminal state sT corresponds to a complete solution of the given mTSP instance, i.e., when every task has been visited, and every worker returned to the depot (See Figure 1). Action A scheduling action aτ is defined as the worker-to-task assignment, i.e. salesman has to visit the assigned city. Reward. We formulate the problem in a delayed reward setting. Specifically, the sparse reward function is defined as r(sτ ) = 0 for all non-terminal events, and r(sT) = t(T), where T is the index of the terminal state. In other words, a single reward signal, which is obtained only for the terminal state, is equals to the makespan of the problem instance. 4 SCHEDULENET Given the MDP formulation for MinMax mTSP, we propose ScheduleNet that can recommend a scheduling action aτ given the current state Gτ represented as a graph, i.e., πθ(aτ |Gτ ). The SchedulNet first presents a state (partial solution of mTSP) as a set of graphs, each of which captures specific relationships among workers, tasks, and a depot. Then ScheduleNet employs type-aware graph attention (TGA) to compute the node embeddings and use the computed node embeddings to determine the next assignment action (See figure 2). 4.1 WORKER-TASK GRAPH REPRESENTATION Whenever an event occurs and the global state sτ of the MDP is updated at τ , ScheduleNet constructs a directed complete graph Gτ = (V,E) out of sτ , where V = VT ∪ VC is the set of nodes and E is the set of edges. We drop the time iterator τ to simplify the notations since the following operations only for the given time step. The nodes and edges and their associated features are defined as: • vi denotes the node corresponding entity i in mTSP problem. The node feature xi for vi is equal to the state siτ of entity i. In addition, ki denote the type of node vi. For instance, if the entity i is worker and its 1activeτ = 1, then the ki becomes active-worker type. • eij denotes the edge between between source node vj and destination node vi, representing the relationships between the two. The edge feature wij is equal to the Euclidean distance between the two nodes. 
4.2 TYPE-AWARE GRAPH ATTENTION EMBEDDING In this section, we describe a type-aware graph attention (TGA) embedding procedure. We denote hi and hij as the node and the edge embedding, respectively, at a given time step, and h′i and h ′ ij as the updated embedding by TGA embedding. A single iteration of TGA embedding consists of three phases: (1) edge update, (2) message aggregation, and (3) node update. Type-aware Edge update Given the node embeddings hi for vi ∈ V and the edge embeddings hij for eij ∈ E, ScheduleNet computes the updated edge embedding h′ij and the attention logit zij as: h′ij = TGAE([hi, hj , hij ], kj) zij = TGAA([hi, hj , hij ], kj) (1) where TGAE and TGAA are, respectively, the type-aware edge update function and the type-aware attention function, which are defined for the specific type kj of the source node vj . The updated edge feature h′ij can be thought of as the message from the source node vj to the destination node vi, and the attention logit zij will be used to compute the importance of this message. In computing the updated edge feature (message), TGAE and TGAA first compute the “type-aware” edge encoding uij , which can be seen as a dynamic edge feature varying depending on the source node type, to effectively model the complex type-aware relationships among the nodes. Using the computed “type-aware” edge encoding uij , these two functions then compute the updated edge feature and attention logit using a multiplicative interaction (MI) layer (Jayakumar et al., 2019). The use of MI layer significantly reduces the number of parameters to learn without discarding the expressibility of the embedding procedure. The detailed architecture for TGAE and TGAA are provided in Appendix A.4. Type-aware Message aggregation The distribution of the node types in the mTSP graphs is highly imbalanced, i.e., the number of task-specific node types is much larger than the worker specific ones. This imbalance is problematic, specifically, during the message aggregation of GNN, since permutation invariant aggregation functions are akin to ignore messages from few-but-important nodes in the graph. To alleviate such an issue, we propose the following type-aware message aggregation scheme. We first define the type k neighborhood of node vi as the set of the k typed source nodes that are connected to the destination node vi, i.e., Nk(i) = {vl|kl = k, ∀vl ∈ N (i)}, where N (i) is the in-neighborhood set of node vi containing the nodes that are connected to node vi with incomingedges. The node vi aggregates separately messages from the same type of source nodes. For example, the aggregated message mki from k-type source nodes is computed as: mki = ∑ j∈Nk(i) αijh ′ ij (2) where αij is the attention score computed using the attention logits computed before as: αij = exp(zij)∑ j∈Nk(i) exp(zij) (3) Finally, all aggregated messages per type are concatenated to produce the total aggregated message mi for node vi as mi = concat({mki |k ∈ K}) (4) Type-aware Node update The aggregated message mi for node vi is then used to compute the updated node embedding h ′ i using the type-aware graph node update function TGAV as: h′i = TGAV([hi,mi], ki) (5) 4.3 ASSIGNMENT PROBABILITY COMPUTATION ScheduleNet model consists of two type-aware graph embedding layers that utilize the embedding procedure explained in the section above. 
The first embedding layer raw-2-hid is used to encode initial node and edge features xi and wij of the (full) graph Gτ , to obtain initial hidden node and edge features h(0)i and h (0) ij , respectively. We define the target subgraph Gsτ as the subset of nodes and edges from the original (full) graph Gτ that only includes a target-worker (unassigned-worker) node and all unassigned-city nodes. The second embedding layer hid-2-hid embeds the target subgraph Gsτ , H times. In other words, a hidden node and edge embeddings h(0)i and h (0) ij are iteratively updated H times to obtain final hidden embeddings h(H)i and h (H) ij , respectively. The final hidden embeddings are then used to make decision regarding the worker-to-task assignment. Specifically, probability of assigning target worker i to task j is computed as yij = MLPactor(h (H) i ;h (H) j ;h (H) ij ) pij = softmax({yij}j∈ A(Gτ )) (6) where the h(H)i , and h (H) ij is the final hidden node, edge embeddings, respectively. In addition, A(Gτ ) denote the set of feasible actions defined as {vj |kj = “Unassigned-task”∀j ∈ V}. 5 TRAINING SCHEDULENET In this section, we describe the training scheme of the ScheduleNet. Firstly, we explain reward normalization scheme which is used to reduce the variance of the reward. Secondly, we introduce a stable RL training scheme which significantly stabilizes the training process. Makespan normalization As mentioned in Section 3.1, we use the makespan of mTSP as the only reward signal for training RL agent. We denote the makespan of given policy π as M(π). We observe that, the makespan M(π) is a highly volatile depending on the problem size (number of cities and salesmen), the topology of the map, and the policy. To reduce the variance of the reward, we propose the following normalization scheme: m(π, πb) = M(πb)−M(π) M(πb) (7) where π and πb is the evaluation and baseline policy, respectively. The normalized makespan m(π, πb) is similar to (Kool et al., 2018), but we additionally divide the performance difference by the makespan of the baseline policy, which further reduces the variance that is induced by the size of the mTSP instance. From the normalized terminal reward m(π, πb), we compute the normalized return as follows: Gτ (π, πb) := γ T−τm(π, πb) (8) where T is the index of the terminal state, and γ is the discount factor. The normalized return Gτ (π, πb) becomes smaller and converges to (near) zero as τ decreases. From the perspective of the RL agent, it allows to the agent to acknowledge neutrality of current policy compared to the baseline policy for the early phase of the MDP trajectory. It is natural since knowing the relative goodness of the policy is hard from the early phase of the MDP. Stable RL training It is well known that the solution quality of CO problems, including the makespan of mTSP, is extremely sensitive to the action selection, and it thus prevents the stable policy learning. To address this problem, we propose the clipped REINFORCE, a variant PPO without the learned value function. We empirically found that it is hard to train the value function1, thus we use normalized returns Gτ(πθ, πb) directly. Then, the objective of the clipped REINFORCE is given as follows: L(θ) = E πθ [ T∑ τ=0 [min(clip(ρτ , 1− , 1 + )Gτ(πθ, πb), ρτGτ(πθ, πb))] ] (9) 1Note that the value function is trained to predict the makespan of the state to serve as an advantage estimator. 
Due to the combinatorial nature of the mTSP, the target of value function, makespan, is highly volatile, which makes training value function hard. We further discuss this in the experiment section. where ρτ = πθ(aτ |Gτ ) πθold(aτ |Gτ ) (10) and (Gτ , aτ ) ∼ πθ is the state-action marginal following πθ, and πθold is the old policy. Training detail We used the greedy version of current policy as the baseline policy πb. After updating the policy πθ, we smooth the parameters of policy πθ with the Polyak average (Polyak & Juditsky, 1992) to further stabilize policy training. The pseudo code of training and network architecture is given in Appedix A.5.1. 6 EXPERIMENTS We train the ScheduleNet using mTSP instances whose number m of workers and the number N of tasks are sampled from m ∼ U(2, 4) and N ∼ U(10, 20), respectively. This trained ScheduleNet policy is then evaluated on the various dataset, including randomly generated uniform mTSP datasets, mTSPLib (mTS), and randomly generated uniform TSP dataset, TSPLib, and TSP (dai). See Appendix for further training details. 6.1 MTSP RESULTS Random mTSP results We firstly investigate the generalization performance of ScheduleNet on the randomly generated uniform maps with varying numbers of tasks and workers. We report the results of OR-Tools and 2Phase heuristics; 2Phase Nearest Insertion (NI), 2Phase Farthest Insertion (FI), 2Phase Random Insertion (RI), and 2Phase Nearest Neighbor (NN). The 2Phase heuristics construct sub-tours by (1) clustering cities with clustering algorithm, and (2) applying the TSP heuristics within the cluster. The details of implementation are provided in the appendix. Table 1 shows that ScheduleNet in overall produces a slightly longer makespan than OR-Tools even for the large-sized mTSP instances. As the complexity of the target mTSP instance increases, the gap between ScheduleNet and OR-Tools decreases, even showing the cases where ScheduleNet outperforms OR-Tools. To further clarify, ScheduleNet has potentials for winning the OR-Tools on small and large cases as shown in the figure 3. This result empirically proves that ScheduleNet, even trained with small-sized mTSP instances, can solve large scale problems well. Notably, on the large scale maps, 2-Phase heuristics show their general effectiveness due to the uniformity of the city positions. It naturally invokes us to consider more realistic problems as discussed in the following section. mTSPLib results The trained ScheduleNet is employed to solve the benchmark problems in mTSPLib, without additional training, to validate the generalization capability of ScheduleNet on unseen mTSP instances, where the problem structure can be completely different from the instances used during training. Table 2 compares the performance of the ScheduleNet to other baseline models, including CPLEX (optimal solution), OR-Tools, and other meta-heuristics (Lupoaie et al., 2019); self-organization Map (SOM), ant-colony Optimization (ACO), and evolutionary algorithm (EA). We report the best known upper-bound for CPLEX results whenever the optimal solution is not known. OR-Tools generally shows promising results. Interestingly, OR-Tools also discovers the solution even better than the known upper-bounds. (e.g., eil76-m=5,7, rat99-m=5) That is possible for the large cases the search space of the exact method, CLPEX, becomes easily prohibitively large. Our method shows the second-best performance following OR-tools. 
The winning heuristic methods, 2Phase-NI/RI, shows drastic performance degradation on mTSPLib maps. It is noteworthy that our method, even in the zero-shot setting, performs better than the meta-heuristic methods, which perform optimization to solve each benchmark problem. Computational times The mixed-integer linear programming (MILP) formulated mTSP problem becomes quickly intractable due to the exponential growth of search space, namely subtour elim- ination constraint (SEC), as the number of workers increases. The computational gain of (Meta) heuristics, including the proposed method and OR-Tools, originates from the effective heuristics that trims out possible tours. The computational times of ScheduleNet linearly increase as the number of worker m increases for the number for the fixed number of task N due to the MDP formulation of mTSP. On the contrary, it is found that the computation times of OR-Tools depend on m and N , and also graph topology. As a result, the ScheduleNet becomes faster than OR-Tools for large instances as shown in figure 6. 6.2 EFFECTIVENESS OF THE PROPOSED TRAINING SCHEME Figure 5 compares the training curves of ScheduleNet and its variants. We firstly show the effectiveness of the proposed sparse reward compared to the dense reward functions; distance reward and distance-utilization reward. The distance reward is defined as the negative distance between the current worker position and the assigned city. This reward function is often used for solving TSP(Dai et al., 2016). The distance-utilization is defined as distance reward over the number of active workers. This reward function aims to minimize the (sub) tour distances while maximizing the utilization of the workers. The proposed sparse reward is the only reward function that can train ScheduleNet stable and achieves the minimal gaps, also as shown in 5 [Left]. We also validate the effectiveness of Clipped REINFORCE compared to the actor-critic counterpart, PPO. We use the same network architecture of the Clipped REINFORCE model for the actor and critic of the PPO model. Counter to the common belief, The actor-critic method (PPO) is not superior to the actor-only method (Clipped REINFORCE) as shown in 5 [Right]. We hypothesize this phenomenon is because the training target of the critic (sampled makepsan) is highly volatile and multimodal as visualized in Figure 4 and the value prediction error would deteriorate the policy due to the bellman error propagation in actor-critic setup as discussed in Fujimoto et al. (2018). 7 CONCLUSION We proposed ScheduleNet for solving MinMax mTSP, the problem seeking to minimize the total completion time for multiple workers to complete the geographically distributed tasks. The use of type-aware graphs and the specially designed TGA graph node embedding allows the trained ScheduleNet policy to induce the coordinated strate- gic subroutes of the workers and to be well transferred to unseen mTSP with any numbers of workers and tasks. We have empirically shown that the proposed method achieves the performance comparable to Google OR-Tools, a highly optimized meta-heuristic baseline. All in all, this study has shown the potential that the proposed ScheduleNet can be effectively used to schedule multiple vehicles for solving large-scale, practical, real-world applications. A APPENDIX A.1 DETAILS OF MDP TRANSITION AND GRAPH FORMULATION Event based MDP transition The formulated semi-MDP for ScheduleNet is event-based. 
Thus, whenever all workers are assigned to cities, the environment transits in time, until any of the workers arrives to the city (i.e. completes the task). Arrival of the worker to the city is the event trigger, meanwhile the other assigned workers are still on the way to their correspondingly assigned cities. We assume that each worker transits towards the assigned city with unit speed in the 2D Euclidean space, i.e. the distance travelled by each worker equals the time past between two consecutive MDP events. Graph formulation In total our graph formulation includes seven mutually exclusive node type: (1) assigned-worker, (2) unassigned-worker, (3) inactive-worker, (4) assigned-city, (5) unassignedcity, (6) inactive-city, and (7) depot. Here, the set of active workers (cities) is defined by the union of assigned and unassigned workers (cities). Inactive-city node refers to the city that has been already visited, while the inactive-worker node refers to the worker that has finished its route and returned to the depot. A.2 DETAILS OF IMPLEMENTATION 2phase mTSP heuristics 2phase heuristics for mTSP is an extension of well-known TSP heuristics to the m > 1 cases. First, we perform K-means spatial clustering of cities in the mTSP instance, where K = m. Next, we apply TSP insertion heuristics (Nearest Insertion, Farthest Insertion, Random Insertion, and Nearest Neighbour Insertion) for each cluster of cities. It should be noted that, performance of the 2phase heuristics is highly depended on the spatial distribution of the cities on the map. Thus 2phase heuristics perform particularly well on uniformly distributed random instances, where K-means clustering can obtain clusters with approximately same number of cities per cluster. Proximal Policy Optimization Our implementation of PPO closely the standard implementation of PPO2 from stable-baselines (Hill et al., 2018) with default hyperparameters, with modifications to allow for distributed training with Parameter Server. A.3 COMPUTATION TIME Figure 6 shows the computation time curves as the function of number of cities (left), and number of workers (right). Overall, ScheduleNet is faster than OR-Tools, and the difference in computation speed only increases with the problem size. Additionally, ScheduleNet’s computation time depends only on the problem size (N + m), whereas the computation time of OR-Tools on both the size of the problem and the topology of the underlying mTSP instance. In other words, the number of solutions searched by OR-Tools vary depending on the underlying problem. Another computational and practical advantage of the ScheduleNet is its invariance to the number of workers. Computational complexity of ScheduleNet increases linearly with the number workers. On the other hand, the search space of meta-heuristic algorithms drastically increase with the number of workers, possibly, due to the exponentially increasing number Subtour Elimination Constraints (SEC). Particularly, we investigated that OR-Tools decreases the search space, by deactivating part of the workers, i.e. not utilizing all possible partial solutions (subtours). As a result, Figure 6 shows that computation time of the OR-Tools actually decrease due to deactivation part of workers, at the expense of the decreasing solution quality. A.4 DETAILS OF TYPE-AWARE GRAPH ATTENTION EMBEDDING In this section, we thoroughly describe a type-aware graph embedding procedure. 
A.2 DETAILS OF IMPLEMENTATION 2phase mTSP heuristics The 2phase heuristics for mTSP are an extension of well-known TSP heuristics to the m > 1 case. First, we perform K-means spatial clustering of the cities in the mTSP instance, with K = m. Next, we apply a TSP insertion heuristic (Nearest Insertion, Farthest Insertion, Random Insertion, or Nearest Neighbour Insertion) to each cluster of cities. It should be noted that the performance of the 2phase heuristics depends highly on the spatial distribution of the cities on the map. Thus, the 2phase heuristics perform particularly well on uniformly distributed random instances, where K-means clustering can obtain clusters with approximately the same number of cities per cluster. Proximal Policy Optimization Our implementation of PPO closely follows the standard implementation of PPO2 from stable-baselines (Hill et al., 2018) with default hyperparameters, modified to allow for distributed training with a parameter server. A.3 COMPUTATION TIME Figure 6 shows the computation time curves as a function of the number of cities (left) and the number of workers (right). Overall, ScheduleNet is faster than OR-Tools, and the difference in computation speed only increases with the problem size. Additionally, ScheduleNet's computation time depends only on the problem size (N + m), whereas the computation time of OR-Tools depends on both the size of the problem and the topology of the underlying mTSP instance. In other words, the number of solutions searched by OR-Tools varies depending on the underlying problem. Another computational and practical advantage of ScheduleNet is its invariance to the number of workers: its computational complexity increases linearly with the number of workers. On the other hand, the search space of meta-heuristic algorithms increases drastically with the number of workers, possibly due to the exponentially increasing number of Subtour Elimination Constraints (SEC). In particular, we observed that OR-Tools reduces the search space by deactivating some of the workers, i.e., not utilizing all possible partial solutions (subtours). As a result, Figure 6 shows that the computation time of OR-Tools actually decreases due to the deactivation of workers, at the expense of decreased solution quality. A.4 DETAILS OF TYPE-AWARE GRAPH ATTENTION EMBEDDING In this section, we thoroughly describe the type-aware graph embedding procedure. Similar to the main body, we overload notation and denote the input node and edge features as h_i and h_ij, and the embedded node and edge features as h'_i and h'_ij, respectively. The proposed graph embedding step consists of three phases: (1) type-aware edge update, (2) type-aware message aggregation, and (3) type-aware node update. Type-aware Edge update The edge update scheme is designed to reflect the complex type relationships among the entities while updating edge features. First, the context embedding c_ij of edge e_ij is computed using the source node type k_j such that: c_ij = MLP_etype(k_j) (11) where MLP_etype is the edge type encoder, which embeds the source node type into the context embedding c_ij. Next, the type-aware edge encoding u_ij is computed using the Multiplicative Interaction (MI) layer (Jayakumar et al., 2019) as follows: u_ij = MI_edge([h_i; h_j; h_ij], c_ij) (12) where MI_edge is the edge MI layer. We utilize the MI layer, which dynamically generates its parameters depending on the context c_ij, to effectively model the complex type relationships among the nodes; the resulting "type-aware" edge encoding u_ij can be seen as a dynamic edge feature that varies depending on the source node type. Then, the updated edge embedding h'_ij and its attention logit z_ij are obtained as: h'_ij = MLP_edge(u_ij) (13) z_ij = MLP_attn(u_ij) (14) where MLP_edge and MLP_attn are the edge updater and the logit function, respectively, which produce the updated edge embedding and the logit from the "type-aware" edge encoding. The computation steps of equations 11, 12, and 13 are defined as TGA_E. Similarly, the computation steps of equations 11, 12, and 14 are defined as TGA_A. Message aggregation First, we define the type-k neighborhood of node v_i as N_k(i) = {v_l ∈ N(i) | k_l = k}, where N(i) is the in-neighborhood set of node i, i.e., the type-k neighborhood collects the edges heading to node i whose source nodes have type k. The proposed type-aware message aggregation procedure computes the attention score α_ij for the edge e_ij, which starts from node j and heads to node i, such that: α_ij = exp(z_ij) / Σ_{l∈N_{k_j}(i)} exp(z_il) (15) Intuitively speaking, the proposed attention scheme normalizes the attention logits of the incoming edges over the types; therefore, the attention scores sum up to 1 over each type-k neighborhood. Next, the type-k neighborhood message m^k_i for node v_i is computed as: m^k_i = Σ_{j∈N_k(i)} α_ij h'_ij (16) In this aggregation step, the incoming messages of node i are aggregated type-wisely. Finally, all incoming type-neighborhood messages are concatenated to produce the (inter-type) aggregated message m_i for node v_i, such that: m_i = concat({m^k_i | k ∈ K}) (17) Node update Similar to the edge update phase, first, the context embedding c_i is computed for each node v_i: c_i = MLP_ntype(k_i) (18) where MLP_ntype is the node type encoder. Then, the updated hidden node embedding h'_i is computed as: h'_i = MLP_node(h_i, u_i) (19) where u_i = MI_node(m_i, c_i) is the type-aware node embedding produced by the MI_node layer from the aggregated message m_i and the context embedding c_i. The computation steps of equations 18 and 19 are defined as TGA_V. The overall computation procedure of TGA is illustrated in Figure 7.
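To make the per-type normalization of equations (15)-(17) concrete, here is a minimal Python sketch (our own naming, not the reference implementation) that aggregates the incoming messages of a single node:

import numpy as np

def type_aware_aggregate(edge_msgs, logits, src_types, all_types):
    # edge_msgs: (n_in, d) updated edge embeddings h'_ij of node i's incoming
    # edges; logits: (n_in,) attention logits z_ij; src_types: (n_in,) array of
    # source node types; all_types: the ordered list of node types K.
    per_type = []
    for k in all_types:
        mask = src_types == k
        if not mask.any():
            per_type.append(np.zeros(edge_msgs.shape[1]))  # no type-k neighbors
            continue
        z = logits[mask]
        alpha = np.exp(z - z.max())
        alpha /= alpha.sum()              # softmax over the type-k neighborhood, Eq. (15)
        per_type.append(alpha @ edge_msgs[mask])  # m_i^k, Eq. (16)
    return np.concatenate(per_type)       # m_i = concat over types, Eq. (17)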
A.5 DETAILS OF SCHEDULENET TRAINING A.5.1 TRAINING PSEUDO CODE In this section, we present the pseudocode for training ScheduleNet.
Algorithm 1: ScheduleNet Training
Input: Training policy π_θ
Output: Smoothed policy π_φ
1: Initialize the smoothed policy with parameters φ ← θ
2: for each update step do
3:   Generate a random mTSP instance I
4:   for each episode do
5:     Construct the mTSP MDP from the instance I
6:     π_b ← argmax(π_θ)
7:     Collect samples with π_θ and π_b from the mTSP MDP
8:   π_θold ← π_θ
9:   for K inner updates do
10:    θ ← θ + α ∇_θ L(θ)
11:    φ ← β φ + (1 − β) θ
A.5.2 HYPERPARAMETERS In this section, we fully specify the hyperparameters of ScheduleNet. Network Architecture We use the same hyperparameters for the raw-2-hid TGA layer and the hid-2-hid TGA layer. MLP_etype and MLP_ntype have one hidden layer with 32 neurons, and their output dimensions are both 32. Both MI layers have 64-dimensional outputs. MLP_edge, MLP_attn, and MLP_node have 2 hidden layers with 32 neurons. MLP_actor has 2 hidden layers with 128 neurons each. We use ReLU activation functions for all hidden layers. The number of hidden graph embedding steps H is two. Training We use a discount factor γ of 0.7. We use Adam (Kingma & Ba, 2014) with a learning rate of 0.001. We set the clipping parameter ε to 0.2. We sample 40 independent mTSP trajectories per gradient update. We clip the gradient whenever its norm is larger than 0.5. The number of inner update steps K is three. The smoothing parameter β is 0.95. A.6 TRANSFERABILITY TEST ON TSP (m = 1) The trained ScheduleNet has been employed to solve random TSP instances. Because ScheduleNet can schedule any number m of workers, setting m = 1 lets it solve TSP instances without further training. Table 3 shows the results of this transferability experiment: the trained ScheduleNet solves random TSP instances reasonably well, although it has never been exposed to such instances. Note that as the size of the TSP increases, the gap between ScheduleNet and the other models becomes smaller. If ScheduleNet were trained on TSP instances with m = 1, its performance could be improved further; however, we deliberately did not run that experiment, in order to check transferability over different types of routing problems with different objectives. A.7 EXTENDED MTSPLIB RESULTS The extended mTSPLib results (makespan; lower is better) are given below, with CPLEX as the exact baseline, OR-Tools (OR), ScheduleNet (SN), SOM, ACO, and EA as meta-heuristics, and the 2phase insertion methods as heuristics:

instance | m | CPLEX   | OR      | SN      | SOM     | ACO     | EA      | 2phase-NI | 2phase-FI | 2phase-RI | 2phase-NN
eil51    | 2 |  222.73 |  243.02 |  259.67 |  278.44 |  248.76 |  276.62 |  271.25   |  311.26   |  265.85   |  387.20
eil51    | 3 |  159.57 |  170.05 |  172.16 |  210.25 |  180.59 |  208.16 |  202.85   |  218.71   |  195.89   |  222.13
eil51    | 5 |  123.96 |  127.50 |  118.94 |  157.68 |  135.09 |  151.21 |  183.53   |  180.21   |  150.49   |  210.75
eil51    | 7 |  112.07 |  112.07 |  112.42 |  136.84 |  119.96 |  123.88 |  129.65   |  144.11   |  127.72   |  147.81
berlin   | 2 | 4110.21 | 4665.47 | 4816.30 | 5350.83 | 4388.99 | 5038.33 | 5941.03   | 6605.05   | 5785.00   | 6519.51
berlin   | 3 | 3244.37 | 3311.31 | 3372.14 | 4197.61 | 3468.90 | 3865.45 | 3811.49   | 4037.10   | 4133.85   | 3581.21
berlin   | 5 | 2441.39 | 2482.57 | 2615.57 | 3461.93 | 2733.56 | 2853.63 | 2972.57   | 4037.10   | 4108.58   | 3581.21
berlin   | 7 | 2440.92 | 2440.92 | 2576.04 | 3125.21 | 2510.09 | 2543.73 | 2972.57   | 3033.00   | 2998.20   | 3198.18
eil76    | 2 |  280.85 |  318.00 |  334.10 |  364.02 |  308.53 |  365.72 |  363.21   |  403.56   |  395.15   |  373.75
eil76    | 3 |  197.34 |  212.41 |  226.54 |  278.63 |  224.56 |  285.43 |  302.10   |  279.33   |  276.58   |  357.77
eil76    | 5 |  150.30 |  143.38 |  168.03 |  210.69 |  163.93 |  211.91 |  191.41   |  204.16   |  185.77   |  197.54
eil76    | 7 |  139.62 |  128.31 |  151.31 |  183.09 |  146.88 |  177.83 |  173.81   |  172.94   |  155.54   |  161.36
rat99    | 2 |  728.75 |  762.19 |  789.98 |  927.36 |  767.15 |  896.72 |  916.55   |  965.94   |  890.86   |  929.82
rat99    | 3 |  587.17 |  552.09 |  579.28 |  756.08 |  620.45 |  739.43 |  802.84   |  802.88   |  843.03   |  809.90
rat99    | 5 |  469.25 |  473.66 |  502.49 |  624.38 |  525.54 |  596.87 |  668.60   |  645.91   |  675.39   |  641.89
rat99    | 7 |  443.91 |  442.47 |  471.67 |  564.14 |  492.13 |  534.91 |  554.19   |  577.00   |  565.12   |  504.71
gap      |   |    1.00 |    1.03 |    1.08 |    1.31 |    1.09 |    1.24 |    1.30   |    1.38   |    1.31   |    1.40
1. What is the main contribution of the paper regarding deep reinforcement learning for solving the minimum-makespan multiple Traveling Salesman Problem?
2. What are the strengths and weaknesses of the proposed approach, particularly in its engineering, training procedure, experimentation, motivation, and relation to other variants of TSP?
3. How many instances are considered for each (N, m) pair in Table 1, and what are the standard deviations for the reported values?
4. Why was CPLEX not run on the MTSP Uniform instances as was done in Table 2, and how can a direct comparison be made with Hu et al.'s results?
5. Can you provide more implementation details for SOM, ACO, and EA, and cite the respective papers?
6. How can OR-Tools' parameters be tuned on the same training set of instances that ScheduleNet is trained on, and what kind of running time results can be expected for ScheduleNet and OR-Tools?
7. Were ScheduleNet's hyperparameters tuned, and if so, how?
8. How does ScheduleNet compare to VRP learning approaches from the literature, such as Nazari et al.'s method?
9. Are there any minor issues or typos in the submission that should be addressed before resubmission?
Review
Review
Summary of the paper: This paper proposes a deep reinforcement learning (DRL) approach for learning a solution strategy for the minimum-makespan multiple Traveling Salesman Problem (mTSP). The makespan mTSP is a challenging combinatorial optimization problem in which we are given the 2-dimensional locations of a set of customers that must be visited by a (much smaller) set of trucks. The trucks depart from the same depot, and must return to it after their tours. The minimum makespan version of mTSP asks for a set of such tours such that the length of the longest tour is minimized. This work is part of a recent interest in using machine learning to design algorithms for hard discrete optimization problems. A number of such methods have been proposed for the standard TSP problem, but the minimum-makespan mTSP brings a number of challenges: the makespan is a sparse reward signal in RL terms, in that it is realized only after a full solution has been constructed (at the end of the episode); unlike the TSP, the mTSP has multiple trucks to be managed at every iteration of a sequential constructive algorithm. The authors make two main contributions towards establishing a DRL approach to min-makespan mTSP:
1- They propose a specialized graph neural network architecture which combines known ingredients in a way that is suitable to the structure of the mTSP;
2- They modify the RL training algorithm to take into account the intricate discrete structure of the makespan objective, which stabilizes the training process.
Experimentally, the proposed ScheduleNet method is trained on a single set of random instances with a small number of customers and trucks, then tested on similar and larger random instances, as well as some benchmark mTSP instances from the literature. Compared to some other learned and non-learned algorithms, ScheduleNet seems to be competitive.
Strengths:
1- Interesting engineering of the graph network model and of the RL training procedure to take into account mTSP and makespan structure;
2- Generalization from very tiny instances to much larger ones (though not too large in an absolute sense).
Weaknesses:
1- Experimental evaluation leaves many questions unanswered;
2- Motivation for tackling yet another variant of the TSP is not very strong, in that it is unclear that practitioners solving mTSP in practice would be interested in using the proposed method. No discussion of how the ideas presented here could extend to other variants of TSP or significantly improve performance on some class of instances that are of great interest to the community or an application domain.
3- Submission seems to have been rushed, with missing citations and weird captions in a couple places.
Recommendation: Overall, I have to recommend a rejection, but I do think that the authors are on a good path towards a paper if they strengthen the motivation and experiments. I don't know if that will be possible within the ICLR rebuttal.
Questions to the authors:
1- Table 1: how many instances are considered for each (N, m) pair here? Please provide standard deviations for the values provided here, without which it's hard to tell how stable the reported average makespan is.
2- Table 1: Why was CPLEX not run on the MTSP Uniform instances as was done in Table 2? This way you can compute the exact approximation ratio.
3- Table 1: Are the results reported for Hu et al. copied from that paper? If so, are you using the exact same set of graphs? If not, a direct comparison such as that claimed in Table 1 is not possible.
4- Table 2: The caption is hard to parse. The following does not make sense: "CPLEX results are reported as the average of the upper and lower bound." Instead, you should report the best solution found by CPLEX (i.e., the best upper bound at termination).
5- Table 2: Please define SOM, ACO and EA, and cite the respective papers as well as any additional implementation details.
6- OR-Tools: You should tune the parameters of OR-Tools (e.g., https://developers.google.com/optimization/routing/routing_options) on the same training set of instances that you train your model on. The tuning can be performed using some kind of grid search or more sophisticated tuning tools such as SMAC (https://github.com/automl/SMAC3).
7- Running time results: There is no mention whatsoever of the running time of ScheduleNet (in training, but more importantly at test time), and how it compares to OR-Tools and the other heuristics of Table 1-2.
8- ScheduleNet hyperparameters: You list the values but no mention of if/how they were tuned.
9- Relation to VRP: mTSP is a special case of VRP in which there are no worker (truck) capacities. Have you considered comparing ScheduleNet to VRP learning approaches from the literature, such as Nazari et al. (which you cite)?
Minor:
"In this study, we formulate (MinMax mTSP as a Markov" --> remove "("
Section 3.1, "Transition": I find this paragraph hard to parse.
"is equals to the makespan" --> "is equal to the makespan"
"since the following operations only for the given time step" --> "since the following operations only apply to the given time step"
"instances whose number m of tasks and the number N of workers are sampled" --> shouldn't this be the opposite?
"benchmark problems in mTSPLib (cite)," --> please add the appropriate citation: https://profs.info.uaic.ro/~mtsplib/
Appendix: "and produces and produces “type-aware”"
Appendix: "The computation steps of equation 11, 12,and ??"
Appendix: "hyperparameters of SchduleNet" --> "hyperparameters of ScheduleNet"
ICLR
Title ScheduleNet: Learn to Solve MinMax mTSP Using Reinforcement Learning with Delayed Reward Abstract There has been continuous effort to learn to solve famous CO problems such as the Traveling Salesman Problem (TSP) and the Vehicle Routing Problem (VRP) using reinforcement learning (RL). Although these approaches have shown good optimality and computational efficiency, they have been limited to scheduling a single vehicle. MinMax mTSP, the focus of this study, is the problem seeking to minimize the total completion time for multiple workers to complete geographically distributed tasks. Solving MinMax mTSP using RL raises significant challenges because one needs to train a distributed scheduling policy that induces cooperative strategic routings using only a single delayed and sparse reward signal (the makespan). In this study, we propose ScheduleNet, which can solve mTSP with any number of salesmen and cities. ScheduleNet represents a state (a partial solution to mTSP) as a set of graphs and employs type-aware graph node embeddings for deriving a cooperative and transferable scheduling policy. Additionally, to effectively train ScheduleNet with the sparse and delayed reward (makespan), we propose an RL training scheme, Clipped REINFORCE with a "target net," which significantly stabilizes the training and improves the generalization performance. We have empirically shown that the proposed method achieves performance comparable to Google OR-Tools, a highly optimized meta-heuristic baseline. 1 INTRODUCTION There have been numerous approaches to solving combinatorial optimization (CO) problems using machine learning. Bengio et al. (2020) have categorized these approaches into demonstration and experience. In the demonstration setting, supervised learning has been employed to mimic the behavior of an existing expert (e.g., exact solvers or heuristics). On the other hand, in the experience setting, reinforcement learning (RL) has typically been employed to learn a parameterized policy that can solve newly given target problems without direct supervision. While a demonstration policy cannot outperform its guiding expert, an RL-based policy can outperform the expert because it improves its policy using a reward signal. Concurrently, Mazyavkina et al. (2020) have further categorized the RL approaches into improvement and construction heuristics. Improvement heuristics start from an arbitrary (complete) solution of the CO problem and iteratively improve it with the learned policy until the improvement stops (Chen & Tian, 2019; Ahn et al., 2019). On the other hand, construction heuristics start from the empty solution and incrementally extend the partial solution using a learned sequential decision-making policy until it becomes complete. There has been continuous effort to learn to solve famous CO problems such as TSP and VRP using RL-based construction heuristics (Bello et al., 2016; Kool et al., 2018; Khalil et al., 2017; Nazari et al., 2018). Although they have shown good optimality and computational efficiency, these approaches have been limited to scheduling only a single vehicle. The multi-agent extensions of these routing problems, such as multiple TSP and multiple VRP, are underrepresented in the deep learning research community, even though they capture a broader set of real-world problems and pose a more significant scientific challenge.
The multiple traveling salesmen problem (mTSP) aims to determine a set of subroutes, one for each salesman, given m salesmen, N cities that need to be visited by one of the salesmen, and a depot where the salesmen are initially located and to which they return. The objective of mTSP is either minimizing the sum of subroute lengths (MinSum) or minimizing the length of the longest subroute (MinMax). In general, the MinMax objective is more practical, as one seeks to visit all cities as soon as possible (i.e., total completion time minimization). In contrast, the MinSum formulation generally leads to highly imbalanced solutions where one of the salesmen visits most of the cities, which results in a longer total completion time (Lupoaie et al., 2019). In this study, we propose a learning-based, decentralized, sequential decision-making algorithm for solving the MinMax mTSP problem; the trained policy, which is a construction heuristic, can be employed to solve mTSP instances with any number of salesmen and cities. Learning a transferable mTSP solver in a construction heuristic framework is significantly more challenging than for its single-agent variants (TSP and CVRP) because (1) we need a state representation flexible enough to represent any arbitrary number of salesmen and cities, (2) we need to introduce coordination among multiple agents to complete the geographically distributed tasks as quickly as possible using a sequential and decentralized decision-making strategy, and (3) we need to learn such a decentralized cooperative policy using only a delayed and sparse reward signal, the makespan, which is revealed only at the end of the episode. To tackle such a challenging task, we formulate mTSP as a semi-MDP and derive a decentralized decision-making policy in a multi-agent reinforcement learning framework using only a sparse and delayed episodic reward signal. The major components of the proposed method and their importance are summarized as follows:
• Decentralized cooperative decision-making strategy: Decentralization of the scheduling policy is essential to ensure that the learned policy can be employed to schedule mTSP problems of any size in a scalable manner; a decentralized policy maps the local observation of each idle salesman to one of its feasible individual actions, while a joint policy maps the global state to joint scheduling actions.
• State representation using type-aware graph attention (TGA): the proposed method represents a state (a partial solution to mTSP) as a set of graphs, each of which captures specific relationships among workers, cities, and a depot. The proposed method then employs TGA to compute the node embeddings for all nodes (salesmen and cities), which are used to sequentially assign idle salesmen to unvisited cities.
• Training a decentralized policy using a single delayed shared reward signal: Training a decentralized cooperative strategy using a single sparse and delayed reward is extremely difficult in that we need to distribute the credit of a single scalar reward (the makespan) over time and agents. To resolve this, we propose a stable MARL training scheme that significantly stabilizes the training and improves the generalization performance.
We have empirically shown that the proposed method achieves performance comparable to Google OR-Tools, a highly optimized meta-heuristic baseline. The proposed approach outperforms OR-Tools in many cases on in-training and out-of-training problem distributions, as well as on real-world problem instances.
We also verified that ScheduleNet can provide an efficient routing service to customers. 2 RELATED WORK Construction RL approaches A seminal body of work focused on the construction approach in the RL setting for solving CO problems (Bello et al., 2016; Nazari et al., 2018; Kool et al., 2018; Khalil et al., 2017). These approaches utilize an encoder-decoder architecture that first encodes the problem structure into a hidden embedding and then autoregressively decodes the complete solution. Bello et al. (2016) utilized an LSTM-based (Hochreiter & Schmidhuber, 1997) encoder and decoded the complete solution (tour) using the Pointer Network (Vinyals et al., 2015) scheme. Since routing tasks are often represented as graphs, Nazari et al. (2018) proposed an attention-based encoder while using an LSTM decoder. Recently, Kool et al. (2018) proposed a Transformer-like architecture (Vaswani et al., 2017) to solve several variants of TSP and single-vehicle CVRP. In contrast, Khalil et al. (2017) do not use an encoder-decoder architecture but a single graph embedding model, structure2vec (Dai et al., 2016), which embeds a partial solution of the TSP and outputs the next city in the (sub)tour. Kang et al. (2019) extended structure2vec to random graphs and employed this random graph embedding to solve identical parallel machine scheduling problems, which seek to minimize the makespan by scheduling multiple machines. Learned mTSP solvers Machine learning approaches for solving mTSP date back to Hopfield & Tank (1985). However, these early approaches require per-instance training (Hopfield & Tank, 1985; Wacholder et al., 1989; Somhom et al., 1999). Among the recent learning methods, Kaempfer & Wolf (2018) encode MinSum mTSP with a set-specialized variant of the Transformer architecture that uses permutation-invariant pooling layers. To obtain a feasible solution, they use a combination of the softassign method (Gold & Rangarajan, 1996) and a beam search. Their model is trained in a supervised setting using mTSP solutions obtained by an Integer Linear Programming (ILP) solver. Hu et al. (2020) utilize a GNN encoder and a self-attention (Vaswani et al., 2017) policy that outputs an assignment probability of each city to each salesman. Once cities are assigned to specific salesmen, they use an existing TSP solver, OR-Tools (Perron & Furnon), to obtain each worker's subroute. Their method shows impressive scalability in terms of the number of cities, as they present results for mTSP instances with 1000 cities and ten workers. However, the trained model is not scalable in terms of the number of workers and can only solve mTSP problems with a pre-specified, fixed number of workers. 3 PROBLEM FORMULATION We define the set of m salesmen indexed by V_T = {1, 2, ..., m}, and the set of N cities indexed by V_C = {m + 1, m + 2, ..., m + N}. Following mTSP conventions, we define the first city as the depot. We also define the 2D coordinates of entity i (salesman, city, or the depot) as p_i. The objective of MinMax mTSP is to minimize the length of the longest subtour among the salesmen, while the subtours cover all cities and every subtour ends at the depot. For clarity of explanation, we will refer to salesmen as workers and to cities as tasks. 3.1 MDP FORMULATION FOR MINMAX MTSP In this paper, the objective is to construct an optimal solution with a construction RL approach. Thus, we cast the solution construction process of MinMax mTSP as a Markov decision process (MDP).
The components of the proposed MDP are as follows. Transition The proposed MDP transits based on events. We define an event as the case where any worker reaches its assigned city. We enumerate events with the index τ to avoid confusion with the elapsed time of the mTSP problem; t(τ) is a function that returns the time of event τ. In the proposed event-based transition setup, the state transitions coincide with the sequential expansion of the partial scheduling solution. State Each entity i has its own state s^i_τ = (p^i_τ, 1^active_τ, 1^assigned_τ) at the τ-th event. The coordinates p^i_τ are time-dependent for workers and static for tasks and the depot. The indicator 1^active_τ describes whether the entity is active or inactive. For tasks, inactive indicates that the task has already been visited; for workers, inactive means that the worker has returned to the depot. Similarly, 1^assigned_τ indicates whether a worker is assigned to a task or not. We also define the environment state s^env_τ, which contains the current time of the environment and the sequence of tasks visited by each worker, i.e., the partial solution of the mTSP. The state s_τ of the MDP at the τ-th event becomes s_τ = ({s^i_τ}_{i=1}^{m+N}, s^env_τ). The first state s_0 corresponds to the empty solution of the given problem instance, i.e., no cities have been visited and all salesmen are at the depot. The terminal state s_T corresponds to a complete solution of the given mTSP instance, i.e., when every task has been visited and every worker has returned to the depot (see Figure 1). Action A scheduling action a_τ is defined as a worker-to-task assignment, i.e., the salesman has to visit the assigned city. Reward We formulate the problem in a delayed reward setting. Specifically, the sparse reward function is defined as r(s_τ) = 0 for all non-terminal events, and r(s_T) = t(T), where T is the index of the terminal state. In other words, the single reward signal, which is obtained only at the terminal state, is equal to the makespan of the problem instance. 4 SCHEDULENET Given the MDP formulation for MinMax mTSP, we propose ScheduleNet, which recommends a scheduling action a_τ given the current state G_τ represented as a graph, i.e., π_θ(a_τ|G_τ). ScheduleNet first represents a state (a partial solution of mTSP) as a set of graphs, each of which captures specific relationships among workers, tasks, and a depot. Then ScheduleNet employs type-aware graph attention (TGA) to compute the node embeddings and uses the computed node embeddings to determine the next assignment action (see Figure 2). 4.1 WORKER-TASK GRAPH REPRESENTATION Whenever an event occurs and the global state s_τ of the MDP is updated at τ, ScheduleNet constructs a directed complete graph G_τ = (V, E) out of s_τ, where V = V_T ∪ V_C is the set of nodes and E is the set of edges. We drop the time iterator τ to simplify the notation, since the following operations apply only to the given time step. The nodes and edges and their associated features are defined as follows (a construction sketch is given after this list):
• v_i denotes the node corresponding to entity i in the mTSP problem. The node feature x_i of v_i is equal to the state s^i_τ of entity i. In addition, k_i denotes the type of node v_i. For instance, if entity i is a worker and its 1^active_τ = 1, then k_i is the active-worker type.
• e_ij denotes the edge between source node v_j and destination node v_i, representing the relationship between the two. The edge feature w_ij is equal to the Euclidean distance between the two nodes.
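As an illustration, the following minimal Python sketch (our own naming; the depot is treated as a city and status handling is simplified) builds the typed nodes and distance edges from the entity states:

import numpy as np

def build_graph(coords, is_worker, active, assigned):
    # coords: (m+N, 2) entity coordinates; the other arguments are boolean
    # masks over entities, mirroring the indicators in the entity states.
    node_types = []
    for i in range(len(coords)):
        role = "worker" if is_worker[i] else "city"
        status = ("assigned" if assigned[i] else "unassigned") if active[i] else "inactive"
        node_types.append(f"{status}-{role}")
    # Directed complete graph: w_ij = ||p_i - p_j|| for every ordered pair.
    edges = {(i, j): float(np.linalg.norm(coords[i] - coords[j]))
             for i in range(len(coords)) for j in range(len(coords)) if i != j}
    return node_types, edges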
4.2 TYPE-AWARE GRAPH ATTENTION EMBEDDING In this section, we describe the type-aware graph attention (TGA) embedding procedure. We denote h_i and h_ij as the node and edge embeddings, respectively, at a given time step, and h'_i and h'_ij as the embeddings updated by TGA. A single iteration of TGA embedding consists of three phases: (1) edge update, (2) message aggregation, and (3) node update. Type-aware Edge update Given the node embeddings h_i for v_i ∈ V and the edge embeddings h_ij for e_ij ∈ E, ScheduleNet computes the updated edge embedding h'_ij and the attention logit z_ij as: h'_ij = TGA_E([h_i, h_j, h_ij], k_j), z_ij = TGA_A([h_i, h_j, h_ij], k_j) (1) where TGA_E and TGA_A are, respectively, the type-aware edge update function and the type-aware attention function, which are defined for the specific type k_j of the source node v_j. The updated edge feature h'_ij can be thought of as the message from the source node v_j to the destination node v_i, and the attention logit z_ij will be used to compute the importance of this message. In computing the updated edge feature (message), TGA_E and TGA_A first compute the "type-aware" edge encoding u_ij, which can be seen as a dynamic edge feature varying with the source node type, to effectively model the complex type-aware relationships among the nodes. Using the computed encoding u_ij, these two functions then compute the updated edge feature and the attention logit using a multiplicative interaction (MI) layer (Jayakumar et al., 2019). The use of the MI layer significantly reduces the number of parameters to learn without sacrificing the expressiveness of the embedding procedure; a code sketch of this edge update follows this subsection. The detailed architectures of TGA_E and TGA_A are provided in Appendix A.4. Type-aware Message aggregation The distribution of node types in the mTSP graphs is highly imbalanced, i.e., the number of task-type nodes is much larger than the number of worker-type nodes. This imbalance is problematic during the message aggregation of a GNN, since permutation-invariant aggregation functions tend to ignore messages from few-but-important nodes in the graph. To alleviate this issue, we propose the following type-aware message aggregation scheme. We first define the type-k neighborhood of node v_i as the set of k-typed source nodes that are connected to the destination node v_i, i.e., N_k(i) = {v_l ∈ N(i) | k_l = k}, where N(i) is the in-neighborhood set of node v_i containing the nodes connected to v_i with incoming edges. Node v_i aggregates messages from source nodes of the same type separately. For example, the aggregated message m^k_i from the k-type source nodes is computed as: m^k_i = Σ_{j∈N_k(i)} α_ij h'_ij (2) where α_ij is the attention score computed from the attention logits as: α_ij = exp(z_ij) / Σ_{l∈N_k(i)} exp(z_il) (3) Finally, all per-type aggregated messages are concatenated to produce the total aggregated message m_i for node v_i as m_i = concat({m^k_i | k ∈ K}) (4) Type-aware Node update The aggregated message m_i for node v_i is then used to compute the updated node embedding h'_i using the type-aware graph node update function TGA_V as: h'_i = TGA_V([h_i, m_i], k_i) (5)
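To clarify how the MI layer makes the edge update type-aware, here is a minimal PyTorch sketch under our own naming; the actual MI layer of Jayakumar et al. (2019) is richer than the plain bilinear form used here. The context derived from the source-node type generates the weights that transform the concatenated features:

import torch
import torch.nn as nn

class TypeAwareEdgeUpdate(nn.Module):
    def __init__(self, in_dim, ctx_dim, out_dim, num_types):
        super().__init__()
        self.type_enc = nn.Embedding(num_types, ctx_dim)         # stands in for MLP_etype
        self.weight_gen = nn.Linear(ctx_dim, out_dim * in_dim)   # MI: context -> weights
        self.edge_head = nn.Linear(out_dim, out_dim)             # stands in for MLP_edge
        self.attn_head = nn.Linear(out_dim, 1)                   # stands in for MLP_attn
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, h_i, h_j, h_ij, k_j):
        # k_j: integer type ids of the source nodes, one per edge.
        c = self.type_enc(k_j)                                   # context c_ij
        W = self.weight_gen(c).view(-1, self.out_dim, self.in_dim)
        x = torch.cat([h_i, h_j, h_ij], dim=-1).unsqueeze(-1)    # [h_i; h_j; h_ij]
        u = torch.bmm(W, x).squeeze(-1)                          # type-aware encoding u_ij
        return self.edge_head(u), self.attn_head(u).squeeze(-1)  # h'_ij and z_ij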
4.3 ASSIGNMENT PROBABILITY COMPUTATION The ScheduleNet model consists of two type-aware graph embedding layers that utilize the embedding procedure explained in the section above. The first embedding layer, raw-2-hid, encodes the initial node and edge features x_i and w_ij of the (full) graph G_τ to obtain the initial hidden node and edge features h^(0)_i and h^(0)_ij, respectively. We define the target subgraph G^s_τ as the subset of nodes and edges of the original (full) graph G_τ that only includes the target-worker (unassigned-worker) node and all unassigned-city nodes. The second embedding layer, hid-2-hid, embeds the target subgraph G^s_τ H times. In other words, the hidden node and edge embeddings h^(0)_i and h^(0)_ij are iteratively updated H times to obtain the final hidden embeddings h^(H)_i and h^(H)_ij, respectively. The final hidden embeddings are then used to make the decision regarding the worker-to-task assignment. Specifically, the probability of assigning target worker i to task j is computed as y_ij = MLP_actor([h^(H)_i; h^(H)_j; h^(H)_ij]), p_ij = softmax({y_ij}_{j∈A(G_τ)}) (6) where h^(H)_i and h^(H)_ij are the final hidden node and edge embeddings, respectively. In addition, A(G_τ) denotes the set of feasible actions, defined as {v_j | k_j = unassigned-task, v_j ∈ V}. 5 TRAINING SCHEDULENET In this section, we describe the training scheme of ScheduleNet. First, we explain the reward normalization scheme used to reduce the variance of the reward. Second, we introduce a stable RL training scheme that significantly stabilizes the training process. Makespan normalization As mentioned in Section 3.1, we use the makespan of mTSP as the only reward signal for training the RL agent. We denote the makespan of a given policy π as M(π). We observe that the makespan M(π) is highly volatile, depending on the problem size (the numbers of cities and salesmen), the topology of the map, and the policy. To reduce the variance of the reward, we propose the following normalization scheme: m(π, π_b) = (M(π_b) − M(π)) / M(π_b) (7) where π and π_b are the evaluation and baseline policies, respectively. The normalized makespan m(π, π_b) is similar to that of Kool et al. (2018), but we additionally divide the performance difference by the makespan of the baseline policy, which further reduces the variance induced by the size of the mTSP instance. From the normalized terminal reward m(π, π_b), we compute the normalized return as follows: G_τ(π, π_b) := γ^(T−τ) m(π, π_b) (8) where T is the index of the terminal state and γ is the discount factor. The normalized return G_τ(π, π_b) becomes smaller and converges to (near) zero as τ decreases. From the perspective of the RL agent, this allows the agent to regard the current policy as neutral relative to the baseline policy during the early phase of the MDP trajectory, which is natural since the relative goodness of a policy is hard to assess early in the MDP. Stable RL training It is well known that the solution quality of CO problems, including the makespan of mTSP, is extremely sensitive to action selection, which hinders stable policy learning. To address this problem, we propose Clipped REINFORCE, a variant of PPO without a learned value function. We empirically found that the value function is hard to train (see the footnote below), so we use the normalized returns G_τ(π_θ, π_b) directly. The objective of Clipped REINFORCE is then given as follows: L(θ) = E_{π_θ}[Σ_{τ=0}^{T} min(clip(ρ_τ, 1−ε, 1+ε) G_τ(π_θ, π_b), ρ_τ G_τ(π_θ, π_b))] (9) where ρ_τ = π_θ(a_τ|G_τ) / π_θold(a_τ|G_τ) (10) and (G_τ, a_τ) ∼ π_θ is the state-action marginal following π_θ, and π_θold is the old policy. (Footnote 1: the value function would be trained to predict the makespan of a state so as to serve as an advantage estimator. Due to the combinatorial nature of mTSP, this target is highly volatile, which makes the value function hard to train. We discuss this further in the experiment section.)
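A minimal PyTorch sketch of the surrogate objective in equations (9)-(10) follows (our own naming; the normalized returns are treated as constants):

import torch

def clipped_reinforce_loss(logp, logp_old, returns, eps=0.2):
    # logp, logp_old: (batch, T) log-probabilities log pi_theta(a_tau|G_tau)
    # and log pi_theta_old(a_tau|G_tau); returns: (batch, T) normalized
    # returns G_tau(pi_theta, pi_b), detached from the computation graph.
    ratio = torch.exp(logp - logp_old.detach())        # rho_tau, Eq. (10)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    objective = torch.min(ratio * returns, clipped * returns).sum(dim=1)
    return -objective.mean()  # ascend L(theta) by descending its negation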
Training detail We use the greedy version of the current policy as the baseline policy π_b. After updating the policy π_θ, we smooth its parameters with Polyak averaging (Polyak & Juditsky, 1992) to further stabilize policy training. The pseudocode of the training procedure and the network architecture are given in Appendix A.5.1. 6 EXPERIMENTS We train ScheduleNet using mTSP instances whose number m of workers and number N of tasks are sampled from m ∼ U(2, 4) and N ∼ U(10, 20), respectively. This trained ScheduleNet policy is then evaluated on various datasets, including randomly generated uniform mTSP datasets, mTSPLib, randomly generated uniform TSP datasets, and TSPLib. See the Appendix for further training details. 6.1 MTSP RESULTS Random mTSP results We first investigate the generalization performance of ScheduleNet on randomly generated uniform maps with varying numbers of tasks and workers. We report the results of OR-Tools and the 2Phase heuristics: 2Phase Nearest Insertion (NI), 2Phase Farthest Insertion (FI), 2Phase Random Insertion (RI), and 2Phase Nearest Neighbor (NN). The 2Phase heuristics construct subtours by (1) clustering cities with a clustering algorithm and (2) applying a TSP heuristic within each cluster. The implementation details are provided in the appendix. Table 1 shows that ScheduleNet overall produces a slightly longer makespan than OR-Tools, even for the large-sized mTSP instances. As the complexity of the target mTSP instance increases, the gap between ScheduleNet and OR-Tools decreases, even showing cases where ScheduleNet outperforms OR-Tools. To further clarify, ScheduleNet has the potential to beat OR-Tools on both small and large cases, as shown in Figure 3. This result empirically shows that ScheduleNet, even though trained with small-sized mTSP instances, can solve large-scale problems well. Notably, on the large-scale maps, the 2Phase heuristics show their general effectiveness due to the uniformity of the city positions. This naturally motivates us to consider more realistic problems, as discussed in the following section. mTSPLib results The trained ScheduleNet is employed to solve the benchmark problems in mTSPLib, without additional training, to validate its generalization capability on unseen mTSP instances, whose problem structure can be completely different from the instances used during training. Table 2 compares the performance of ScheduleNet to other baseline models, including CPLEX (optimal solution), OR-Tools, and other meta-heuristics (Lupoaie et al., 2019): self-organizing map (SOM), ant colony optimization (ACO), and an evolutionary algorithm (EA). We report the best known upper bound for the CPLEX results whenever the optimal solution is not known. OR-Tools generally shows promising results. Interestingly, OR-Tools even discovers solutions better than the known upper bounds (e.g., eil76 with m = 5, 7 and rat99 with m = 5). This is possible because, for large cases, the search space of the exact method, CPLEX, easily becomes prohibitively large. Our method shows the second-best performance, following OR-Tools.
The winning heuristic methods, 2Phase-NI/RI, show drastic performance degradation on the mTSPLib maps. It is noteworthy that our method, even in the zero-shot setting, performs better than the meta-heuristic methods, which perform optimization to solve each benchmark problem. Computational times The mixed-integer linear programming (MILP) formulation of mTSP quickly becomes intractable as the number of workers increases, due to the exponential growth of the search space induced by the subtour elimination constraints (SEC). The computational gain of (meta-)heuristics, including the proposed method and OR-Tools, originates from effective heuristics that trim out possible tours. The computation time of ScheduleNet increases linearly with the number of workers m for a fixed number of tasks N, owing to the MDP formulation of mTSP. In contrast, the computation time of OR-Tools depends on m and N, as well as on the graph topology. As a result, ScheduleNet becomes faster than OR-Tools for large instances, as shown in Figure 6. 6.2 EFFECTIVENESS OF THE PROPOSED TRAINING SCHEME Figure 5 compares the training curves of ScheduleNet and its variants. We first show the effectiveness of the proposed sparse reward compared to two dense reward functions: the distance reward and the distance-utilization reward. The distance reward is defined as the negative distance between the current worker position and the assigned city. This reward function is often used for solving TSP (Dai et al., 2016). The distance-utilization reward is defined as the distance reward divided by the number of active workers; it aims to minimize the (sub)tour distances while maximizing the utilization of the workers. The proposed sparse reward is the only reward function that trains ScheduleNet stably and achieves the minimal gaps, as shown in Figure 5 [Left]. We also validate the effectiveness of Clipped REINFORCE compared to its actor-critic counterpart, PPO. We use the same network architecture as the Clipped REINFORCE model for the actor and the critic of the PPO model. Counter to common belief, the actor-critic method (PPO) is not superior to the actor-only method (Clipped REINFORCE), as shown in Figure 5 [Right]. We hypothesize that this is because the training target of the critic (the sampled makespan) is highly volatile and multimodal, as visualized in Figure 4, and the value prediction error would deteriorate the policy due to Bellman error propagation in the actor-critic setup, as discussed in Fujimoto et al. (2018). 7 CONCLUSION We proposed ScheduleNet for solving MinMax mTSP, the problem seeking to minimize the total completion time for multiple workers to complete geographically distributed tasks. The use of type-aware graphs and the specially designed TGA graph node embedding allows the trained ScheduleNet policy to induce coordinated strategic subroutes of the workers and to transfer well to unseen mTSP instances with any number of workers and tasks. We have empirically shown that the proposed method achieves performance comparable to Google OR-Tools, a highly optimized meta-heuristic baseline. All in all, this study has shown the potential of ScheduleNet to schedule multiple vehicles in large-scale, practical, real-world applications. A APPENDIX A.1 DETAILS OF MDP TRANSITION AND GRAPH FORMULATION Event-based MDP transition The formulated semi-MDP for ScheduleNet is event-based.
Thus, whenever all workers are assigned to cities, the environment transits in time until any of the workers arrives at its city (i.e., completes the task). The arrival of a worker at a city is the event trigger; meanwhile, the other assigned workers are still on their way to their correspondingly assigned cities. We assume that each worker travels towards its assigned city with unit speed in the 2D Euclidean space, i.e., the distance travelled by each worker equals the time elapsed between two consecutive MDP events. Graph formulation In total, our graph formulation includes seven mutually exclusive node types: (1) assigned-worker, (2) unassigned-worker, (3) inactive-worker, (4) assigned-city, (5) unassigned-city, (6) inactive-city, and (7) depot. Here, the set of active workers (cities) is defined as the union of assigned and unassigned workers (cities). An inactive-city node refers to a city that has already been visited, while an inactive-worker node refers to a worker that has finished its route and returned to the depot. A.2 DETAILS OF IMPLEMENTATION 2phase mTSP heuristics The 2phase heuristics for mTSP are an extension of well-known TSP heuristics to the m > 1 case. First, we perform K-means spatial clustering of the cities in the mTSP instance, with K = m. Next, we apply a TSP insertion heuristic (Nearest Insertion, Farthest Insertion, Random Insertion, or Nearest Neighbour Insertion) to each cluster of cities; a sketch of this two-phase procedure is given below. It should be noted that the performance of the 2phase heuristics depends highly on the spatial distribution of the cities on the map. Thus, the 2phase heuristics perform particularly well on uniformly distributed random instances, where K-means clustering can obtain clusters with approximately the same number of cities per cluster.
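A minimal Python sketch of the two-phase baseline follows, assuming scikit-learn's KMeans; all names are ours, and only the nearest-insertion variant is shown:

import numpy as np
from sklearn.cluster import KMeans

def two_phase_nearest_insertion(cities, depot, m):
    # cities: (N, 2) array; depot: (2,) array; m: number of workers.
    labels = KMeans(n_clusters=m, n_init=10).fit_predict(cities)
    tours = []
    for k in range(m):
        pts = np.vstack([depot[None, :], cities[labels == k]])
        tour = [0]                         # each subtour starts at the depot
        rest = set(range(1, len(pts)))
        while rest:
            # Pick the unvisited city nearest to the current tour.
            j = min(rest, key=lambda r: min(np.linalg.norm(pts[r] - pts[t]) for t in tour))
            # Insert it where it causes the smallest detour on the closed tour.
            cyc = tour + [0]
            pos = min(range(1, len(cyc)),
                      key=lambda p: np.linalg.norm(pts[cyc[p - 1]] - pts[j])
                                    + np.linalg.norm(pts[j] - pts[cyc[p]])
                                    - np.linalg.norm(pts[cyc[p - 1]] - pts[cyc[p]]))
            tour.insert(pos, j)
            rest.remove(j)
        tours.append(tour + [0])           # indices into pts; 0 is the depot
    return tours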
Proximal Policy Optimization Our implementation of PPO closely follows the standard implementation of PPO2 from stable-baselines (Hill et al., 2018) with default hyperparameters, modified to allow for distributed training with a parameter server. A.3 COMPUTATION TIME Figure 6 shows the computation time curves as a function of the number of cities (left) and the number of workers (right). Overall, ScheduleNet is faster than OR-Tools, and the difference in computation speed only increases with the problem size. Additionally, ScheduleNet's computation time depends only on the problem size (N + m), whereas the computation time of OR-Tools depends on both the size of the problem and the topology of the underlying mTSP instance. In other words, the number of solutions searched by OR-Tools varies depending on the underlying problem. Another computational and practical advantage of ScheduleNet is its invariance to the number of workers: its computational complexity increases linearly with the number of workers. On the other hand, the search space of meta-heuristic algorithms increases drastically with the number of workers, possibly due to the exponentially increasing number of Subtour Elimination Constraints (SEC). In particular, we observed that OR-Tools reduces the search space by deactivating some of the workers, i.e., not utilizing all possible partial solutions (subtours). As a result, Figure 6 shows that the computation time of OR-Tools actually decreases due to the deactivation of workers, at the expense of decreased solution quality. A.4 DETAILS OF TYPE-AWARE GRAPH ATTENTION EMBEDDING In this section, we thoroughly describe the type-aware graph embedding procedure. Similar to the main body, we overload notation and denote the input node and edge features as h_i and h_ij, and the embedded node and edge features as h'_i and h'_ij, respectively. The proposed graph embedding step consists of three phases: (1) type-aware edge update, (2) type-aware message aggregation, and (3) type-aware node update. Type-aware Edge update The edge update scheme is designed to reflect the complex type relationships among the entities while updating edge features. First, the context embedding c_ij of edge e_ij is computed using the source node type k_j such that: c_ij = MLP_etype(k_j) (11) where MLP_etype is the edge type encoder, which embeds the source node type into the context embedding c_ij. Next, the type-aware edge encoding u_ij is computed using the Multiplicative Interaction (MI) layer (Jayakumar et al., 2019) as follows: u_ij = MI_edge([h_i; h_j; h_ij], c_ij) (12) where MI_edge is the edge MI layer. We utilize the MI layer, which dynamically generates its parameters depending on the context c_ij, to effectively model the complex type relationships among the nodes; the resulting "type-aware" edge encoding u_ij can be seen as a dynamic edge feature that varies depending on the source node type. Then, the updated edge embedding h'_ij and its attention logit z_ij are obtained as: h'_ij = MLP_edge(u_ij) (13) z_ij = MLP_attn(u_ij) (14) where MLP_edge and MLP_attn are the edge updater and the logit function, respectively, which produce the updated edge embedding and the logit from the "type-aware" edge encoding. The computation steps of equations 11, 12, and 13 are defined as TGA_E. Similarly, the computation steps of equations 11, 12, and 14 are defined as TGA_A. Message aggregation First, we define the type-k neighborhood of node v_i as N_k(i) = {v_l ∈ N(i) | k_l = k}, where N(i) is the in-neighborhood set of node i, i.e., the type-k neighborhood collects the edges heading to node i whose source nodes have type k. The proposed type-aware message aggregation procedure computes the attention score α_ij for the edge e_ij, which starts from node j and heads to node i, such that: α_ij = exp(z_ij) / Σ_{l∈N_{k_j}(i)} exp(z_il) (15) Intuitively speaking, the proposed attention scheme normalizes the attention logits of the incoming edges over the types; therefore, the attention scores sum up to 1 over each type-k neighborhood. Next, the type-k neighborhood message m^k_i for node v_i is computed as: m^k_i = Σ_{j∈N_k(i)} α_ij h'_ij (16) In this aggregation step, the incoming messages of node i are aggregated type-wisely. Finally, all incoming type-neighborhood messages are concatenated to produce the (inter-type) aggregated message m_i for node v_i, such that: m_i = concat({m^k_i | k ∈ K}) (17) Node update Similar to the edge update phase, first, the context embedding c_i is computed for each node v_i: c_i = MLP_ntype(k_i) (18) where MLP_ntype is the node type encoder. Then, the updated hidden node embedding h'_i is computed as: h'_i = MLP_node(h_i, u_i) (19) where u_i = MI_node(m_i, c_i) is the type-aware node embedding produced by the MI_node layer from the aggregated message m_i and the context embedding c_i. The computation steps of equations 18 and 19 are defined as TGA_V. The overall computation procedure of TGA is illustrated in Figure 7. A.5 DETAILS OF SCHEDULENET TRAINING A.5.1 TRAINING PSEUDO CODE In this section, we present the pseudocode for training ScheduleNet.
Algorithm 1: ScheduleNet Training
Input: Training policy π_θ
Output: Smoothed policy π_φ
1: Initialize the smoothed policy with parameters φ ← θ
2: for each update step do
3:   Generate a random mTSP instance I
4:   for each episode do
5:     Construct the mTSP MDP from the instance I
6:     π_b ← argmax(π_θ)
7:     Collect samples with π_θ and π_b from the mTSP MDP
8:   π_θold ← π_θ
9:   for K inner updates do
10:    θ ← θ + α ∇_θ L(θ)
11:    φ ← β φ + (1 − β) θ
A.5.2 HYPERPARAMETERS In this section, we fully specify the hyperparameters of ScheduleNet. Network Architecture We use the same hyperparameters for the raw-2-hid TGA layer and the hid-2-hid TGA layer. MLP_etype and MLP_ntype have one hidden layer with 32 neurons, and their output dimensions are both 32. Both MI layers have 64-dimensional outputs. MLP_edge, MLP_attn, and MLP_node have 2 hidden layers with 32 neurons. MLP_actor has 2 hidden layers with 128 neurons each. We use ReLU activation functions for all hidden layers. The number of hidden graph embedding steps H is two. Training We use a discount factor γ of 0.7. We use Adam (Kingma & Ba, 2014) with a learning rate of 0.001. We set the clipping parameter ε to 0.2. We sample 40 independent mTSP trajectories per gradient update. We clip the gradient whenever its norm is larger than 0.5. The number of inner update steps K is three. The smoothing parameter β is 0.95. A.6 TRANSFERABILITY TEST ON TSP (m = 1) The trained ScheduleNet has been employed to solve random TSP instances. Because ScheduleNet can schedule any number m of workers, setting m = 1 lets it solve TSP instances without further training. Table 3 shows the results of this transferability experiment: the trained ScheduleNet solves random TSP instances reasonably well, although it has never been exposed to such instances. Note that as the size of the TSP increases, the gap between ScheduleNet and the other models becomes smaller. If ScheduleNet were trained on TSP instances with m = 1, its performance could be improved further; however, we deliberately did not run that experiment, in order to check transferability over different types of routing problems with different objectives. A.7 EXTENDED MTSPLIB RESULTS The extended mTSPLib results (makespan; lower is better) are given below, with CPLEX as the exact baseline, OR-Tools (OR), ScheduleNet (SN), SOM, ACO, and EA as meta-heuristics, and the 2phase insertion methods as heuristics:

instance | m | CPLEX   | OR      | SN      | SOM     | ACO     | EA      | 2phase-NI | 2phase-FI | 2phase-RI | 2phase-NN
eil51    | 2 |  222.73 |  243.02 |  259.67 |  278.44 |  248.76 |  276.62 |  271.25   |  311.26   |  265.85   |  387.20
eil51    | 3 |  159.57 |  170.05 |  172.16 |  210.25 |  180.59 |  208.16 |  202.85   |  218.71   |  195.89   |  222.13
eil51    | 5 |  123.96 |  127.50 |  118.94 |  157.68 |  135.09 |  151.21 |  183.53   |  180.21   |  150.49   |  210.75
eil51    | 7 |  112.07 |  112.07 |  112.42 |  136.84 |  119.96 |  123.88 |  129.65   |  144.11   |  127.72   |  147.81
berlin   | 2 | 4110.21 | 4665.47 | 4816.30 | 5350.83 | 4388.99 | 5038.33 | 5941.03   | 6605.05   | 5785.00   | 6519.51
berlin   | 3 | 3244.37 | 3311.31 | 3372.14 | 4197.61 | 3468.90 | 3865.45 | 3811.49   | 4037.10   | 4133.85   | 3581.21
berlin   | 5 | 2441.39 | 2482.57 | 2615.57 | 3461.93 | 2733.56 | 2853.63 | 2972.57   | 4037.10   | 4108.58   | 3581.21
berlin   | 7 | 2440.92 | 2440.92 | 2576.04 | 3125.21 | 2510.09 | 2543.73 | 2972.57   | 3033.00   | 2998.20   | 3198.18
eil76    | 2 |  280.85 |  318.00 |  334.10 |  364.02 |  308.53 |  365.72 |  363.21   |  403.56   |  395.15   |  373.75
eil76    | 3 |  197.34 |  212.41 |  226.54 |  278.63 |  224.56 |  285.43 |  302.10   |  279.33   |  276.58   |  357.77
eil76    | 5 |  150.30 |  143.38 |  168.03 |  210.69 |  163.93 |  211.91 |  191.41   |  204.16   |  185.77   |  197.54
eil76    | 7 |  139.62 |  128.31 |  151.31 |  183.09 |  146.88 |  177.83 |  173.81   |  172.94   |  155.54   |  161.36
rat99    | 2 |  728.75 |  762.19 |  789.98 |  927.36 |  767.15 |  896.72 |  916.55   |  965.94   |  890.86   |  929.82
rat99    | 3 |  587.17 |  552.09 |  579.28 |  756.08 |  620.45 |  739.43 |  802.84   |  802.88   |  843.03   |  809.90
rat99    | 5 |  469.25 |  473.66 |  502.49 |  624.38 |  525.54 |  596.87 |  668.60   |  645.91   |  675.39   |  641.89
rat99    | 7 |  443.91 |  442.47 |  471.67 |  564.14 |  492.13 |  534.91 |  554.19   |  577.00   |  565.12   |  504.71
gap      |   |    1.00 |    1.03 |    1.08 |    1.31 |    1.09 |    1.24 |    1.30   |    1.38   |    1.31   |    1.40
1. What is the main contribution of the paper regarding the min-max multiple TSP problem?
2. What are the strengths and weaknesses of the proposed approach, particularly in its numerical experiments and comparisons with other methods?
3. How does the reviewer assess the novelty and applicability of the method in solving real-world problems?
4. Are there any suggestions or recommendations for improving the paper's content or experimental design?
5. What are some minor questions or points that the reviewer raises regarding the paper's formulation, notation, or explanations?
Review
Review
Summary The paper proposes a reinforcement learning approach to solve the min-max multiple TSP, where there are multiple salesmen and the goal is to minimize the longest subtour while every city is visited by one salesman. The authors propose an architecture, ScheduleNet, that encodes a partial solution or state and outputs a policy, i.e. a probability distribution over the actions. They train the model using a variant of the REINFORCE algorithm. The approach is validated on randomly generated mTSP instances as well as the standard literature benchmark TSPlib.
Strong points
The addressed problem, TSP with multiple salesmen, is an important combinatorial problem that is more challenging than the standard TSP because of the multi-agent cooperation that it involves. It is true that although there is a lot of literature on learning-based approaches that solve the TSP, only a few very recent papers deal with the multiple agent setting.
The MDP formulation with the notion of events is sound and clearly explained. It is more sophisticated than the standard MDPs used in the "one agent" setting.
The type-aware embeddings are interesting here to differentiate the interactions between the different types of nodes.
Weak points
The numerical experiments do not convince me that the approach would be useful in practice.
To be informative, results in Table 1 should be the average over a number of random instances for each characteristic. Maybe it's already the case but it is not mentioned. Moreover, the random instances should also follow different distributions to be varied and really helpful to evaluate the method.
Table 1 and 2: I found the reported gap (fraction of objectives) not so clear to get a precise sense of the performance. In Table 2, using the standard (approximate) optimality gap would be better (obj_heuristic – obj_cplex)/obj_cplex. It would be informative to report CPLEX results for the randomly generated instances as well.
Although the TSP is a natural special case of mTSP, the performance of the approach on randomly generated TSP instances (cf Table 3) is significantly poorer than that of other learned heuristics. It would be useful to report the results for TSPlib as well.
The authors claim that they propose a new approach for training "Clipped REINFORCE, a variant of clipped PPO without the learned value function". It would be useful to give more explanations for this choice.
In equation (9), I believe there is a missing sum over \tau. This does not help in understanding.
Recommendation
I would vote for reject. In summary, the proposed approach is an adaptation of known techniques, to a specific interesting problem, that does not lead to a clear gain in performance.
Arguments for recommendation
The MDP framework and type-aware GNNs are interesting and new in this context but not novel.
To me, the numerical experiments are very limited and do not demonstrate the added value of this method, see weak points above.
Questions to authors
Sec 4.1: It is said that you consider the complete graph and that the edge features are the Euclidian distance which is symmetric. So what is the point of using a directed graph?
Sec 4.1: "v_i denotes the node corresponding entity i in mTSP problem". It sounds like v_i is a node of the graph. But if at \tau a worker is in between two cities, what would be v_i?
Sec 4.1: what are the types exactly? You give an example "active-worker" but it would be useful to list them all.
Sec 4: There is a confusion between source and destination indices. “eij denotes the edge between between source node vi and destination node vj” but then for the edge embedding “the specific type kj of the source node vj” and “the message from the source node vj to the destination node vi”. Similarly, equation (2), it is confusing to use j as a source index.
Sec 4.2: the definition of Nk(i) = {vj |kj = k, ∀l ∈ N (i)} does not make sense. What is the correct one? Because the graph is complete, is N(i) different from the entire V?
Sec 5: "\pi_b is the evaluation and baseline policy". What baseline did you use?
Sec 5: equation 8, can you explain the choice of the exponent of gamma?
Sec 6: "m ∼ U(2, 4) and N ∼ U(10, 20)". Are m and N switched here? Otherwise there would be more workers than cities.
Table 2: what are SOM, ACO and EA? These baselines should be described (at least named) in the text.
Feedback to help improve the paper
"For the clarity of explanation, we will refer to salesman as a workers, and cities as a tasks." I actually found it more confusing. Especially because task is standardly used to refer to the entire problem that the RL algorithm is addressing.
"We define the set of m salesmen VT = {1, 2, ..., m}, and the set of N cities VC = {m+1, 2, ..., m+ N}" -> set of m salesmen indexed by VT = {1, 2, ..., m}, and the set of N cities indexed by VC = {m+1, 2, ..., m+ N}
"CPLEX results are reported as the average of the upper and lower bound". It would make more sense to report the upper bound, i.e. the value of the best feasible solution found by the solver within the time limit.
To be able to better generalize to the TSP instances, why not include instances with N=1 during training.
ICLR
Title
AQUILA: Communication Efficient Federated Learning with Adaptive Quantization of Lazily-Aggregated Gradients

Abstract
The development and deployment of federated learning (FL) have been bottlenecked by the heavy communication overheads of high-dimensional models between the distributed device nodes and the central server. To achieve better error-communication trade-offs, recent efforts have been made to either adaptively reduce the communication frequency by skipping unimportant updates, e.g., lazy aggregation, or adjust the quantization bits for each communication. In this paper, we propose a unifying communication-efficient framework for FL based on adaptive quantization of lazily-aggregated gradients (AQUILA), which adaptively balances two mutually dependent factors: the communication frequency and the quantization level. Specifically, we start with a careful investigation of the classical lazy aggregation scheme and formulate AQUILA as an optimization problem in which the optimal quantization level is selected by minimizing the model deviation caused by update skipping. Furthermore, we devise a new lazy aggregation strategy to better fit the novel quantization criterion and retain the communication frequency at an appropriate level. The effectiveness and convergence of the proposed AQUILA framework are theoretically verified. The experimental results demonstrate that AQUILA can reduce around 60% of the overall transmitted bits compared to existing methods while achieving identical model performance in a number of non-homogeneous FL scenarios, including Non-IID data and heterogeneous model architectures.

1 INTRODUCTION
With the deployment of ubiquitous sensing and computing devices, the Internet of Things (IoT), as well as many other distributed systems, has gradually grown from concept to reality, bringing dramatic convenience to people's daily life (Du et al., 2020; Liu et al., 2020; Hard et al., 2018). To fully utilize such distributed computing resources, distributed learning provides a promising framework that can achieve comparable performance with the traditional centralized learning scheme. However, the privacy and security of sensitive data during the updating and transmission processes in distributed learning have been a growing concern. In this context, federated learning (FL) (McMahan et al., 2017) has been developed, allowing distributed devices to collaboratively learn a global model without privacy leakage by keeping private data isolated and masking transmitted information with secure approaches. On account of its privacy-preserving property and great potential in distributed but privacy-sensitive fields such as finance and health, FL has attracted tremendous attention from both academia and industry in recent years. Unfortunately, in many FL applications, such as image classification and object recognition, the trained model tends to be high-dimensional, resulting in significant communication costs. Hence, communication efficiency has become one of the key bottlenecks of FL. To this end, Sun et al. (2020) proposes the lazily-aggregated quantization (LAQ) method to skip unnecessary parameter uploads by estimating the value of the gradient innovation, i.e., the difference between the current unquantized gradient and the previously quantized gradient. Moreover, Mao et al. (2021) devises an adaptive quantized gradient (AQG) strategy based on LAQ to dynamically select the quantization level from some artificially given numbers during the training process.
Nevertheless, the AQG is still not sufficiently adaptive because the pre-determined quantization levels are difficult to choose in complicated FL environments. In another separate line of work, Jhunjhunwala et al. (2021) introduces an adaptive quantization rule for FL (AdaQuantFL), which searches a given range for an optimal quantization level and achieves a better error-communication trade-off. Most previous research has investigated optimizing the communication frequency or adjusting the quantization level in a highly adaptive manner, but not both. Intuitively, we ask a question: can we adaptively adjust the quantization level in the lazy aggregation fashion to simultaneously reduce the transmitted amounts and the communication frequency? In this paper, we select the optimal quantization level for every participating device by optimizing the model deviation caused by skipping quantized gradient updates (i.e., lazy aggregation), which yields a novel quantization criterion that cooperates with a newly proposed lazy aggregation strategy to further reduce overall communication costs while still offering a convergence guarantee.

The contributions of this paper are threefold.
• We propose an innovative FL procedure with adaptive quantization of lazily-aggregated gradients, termed AQUILA, which simultaneously adjusts the communication frequency and the quantization level in a synergistic fashion.
• Instead of naively combining LAQ and AdaQuantFL, AQUILA uses a completely different device selection method and quantization level calculation method. Specifically, we derive an adaptive quantization strategy from a new perspective that minimizes the model deviation introduced by lazy aggregation. Subsequently, we present a new lazy aggregation criterion that is more precise and saves more device storage. Furthermore, we provide a convergence analysis of AQUILA for the generally non-convex case and under the Polyak-Łojasiewicz condition.
• Beyond normal FL settings, such as the independent and identically distributed (IID) data environment, we experimentally evaluate the performance of AQUILA in a number of non-homogeneous FL settings, such as non-independent and non-identically distributed (Non-IID) local datasets and various heterogeneous model aggregations. The evaluation results reveal that AQUILA considerably mitigates the communication overhead compared to a variety of state-of-the-art algorithms.

2 BACKGROUND AND RELATED WORK
Consider an FL system with one central parameter server and a device set $\mathcal{M}$ of $M = |\mathcal{M}|$ distributed devices that collaboratively train a global model parameterized by $\theta \in \mathbb{R}^d$. Each device $m \in \mathcal{M}$ has a private local dataset $\mathcal{D}_m = \{(x_1^{(m)}, y_1^{(m)}), \dots, (x_{n_m}^{(m)}, y_{n_m}^{(m)})\}$ of $n_m$ samples. The federated training process is typically performed by solving the following optimization problem:

$$\min_{\theta\in\mathbb{R}^d} f(\theta) = \frac{1}{M}\sum_{m=1}^{M} f_m(\theta) \quad \text{with} \quad f_m(\theta) = \mathbb{E}_{(x,y)\sim\mathcal{D}_m}\left[\ell\left(h_\theta(x), y\right)\right], \qquad (1)$$

where $f: \mathbb{R}^d \to \mathbb{R}$ denotes the empirical risk, $f_m: \mathbb{R}^d \to \mathbb{R}$ denotes the local objective based on the private data $\mathcal{D}_m$ of device $m$, $\ell$ denotes the local loss function, and $h_\theta$ denotes the local model. The FL training process is conducted by iteratively performing local updates and global aggregation, as proposed in (McMahan et al., 2017). First, at communication round $k$, each device $m$ receives the global model $\theta^k$ from the parameter server and trains it with its local data $\mathcal{D}_m$.
Subsequently, it sends the local gradient $\nabla f_m(\theta^k)$ to the central server, and the server updates the global model with learning rate $\alpha$ by

$$\theta^{k+1} := \theta^k - \frac{\alpha}{M}\sum_{m\in\mathcal{M}} \nabla f_m(\theta^k). \qquad (2)$$

Definition 2.1 (Quantized gradient innovation). For more efficiency, each device only uploads the quantized deflection between the full gradient $\nabla f_m(\theta^k)$ and the last quantized value $q_m^{k-1}$, utilizing a quantization operator $Q: \mathbb{R}^d \to \mathbb{R}^d$, i.e.,

$$\Delta q_m^k = Q\left(\nabla f_m(\theta^k) - q_m^{k-1}\right). \qquad (3)$$

For communication frequency reduction, the lazy aggregation strategy allows device $m \in \mathcal{M}$ to upload its newly quantized gradient innovation at epoch $k$ only when the change in the local gradient is sufficiently larger than a threshold. Hence, the quantization of the local gradient $q_m^k$ of device $m$ at epoch $k$ is

$$q_m^k := \begin{cases} q_m^{k-1}, & \text{if } \left\|Q\left(\nabla f_m(\theta^k) - q_m^{k-1}\right)\right\|_2^2 \leq \text{Threshold}, \\ q_m^{k-1} + \Delta q_m^k, & \text{otherwise}. \end{cases} \qquad (4)$$

If device $m$ skips the upload of $\Delta q_m^k$, the central server reuses the last gradient $q_m^{k-1}$ for aggregation. Therefore, the global aggregation rule changes from (2) to

$$\theta^{k+1} = \theta^k - \frac{\alpha}{M}\sum_{m\in\mathcal{M}} q_m^k = \theta^k - \frac{\alpha}{M}\sum_{m\in\mathcal{M}^k}\left(q_m^{k-1} + \Delta q_m^k\right) - \frac{\alpha}{M}\sum_{m\in\mathcal{M}_c^k} q_m^{k-1}, \qquad (5)$$

where $\mathcal{M}^k$ denotes the subset of devices that upload their quantized gradient innovation, and $\mathcal{M}_c^k = \mathcal{M} \setminus \mathcal{M}^k$ denotes the subset of devices that skip the gradient update and reuse the old quantized gradient at epoch $k$.

AdaQuantFL, in turn, is proposed to achieve a better error-communication trade-off by adaptively adjusting the quantization levels during the FL training process. Specifically, AdaQuantFL computes the optimal quantization level $(b^k)^*$ by $(b^k)^* = \lfloor\sqrt{f(\theta^0)/f(\theta^k)} \cdot b^0\rfloor$, where $f(\theta^0)$ and $f(\theta^k)$ are the global objective losses defined in (1). However, AdaQuantFL transmits quantized gradients at every communication round. In order to jointly skip unnecessary communication rounds and adaptively adjust the quantization level for each communication, a naive approach is to quantize lazily aggregated gradients with AdaQuantFL. Nevertheless, it fails to achieve efficient communication for several reasons. First, given the descending trend of the training loss, AdaQuantFL's criterion may lead to a high quantization bit number, even exceeding 32 bits, during the training process (assuming a floating point is represented by 32 bits in our case), which is too large for cases where global convergence is already approaching and makes the quantization meaningless. Second, a higher quantization level results in a smaller quantization error, leading to a lower communication threshold in the lazy aggregation criterion (4) and thus a higher transmission frequency. Consequently, it is desirable to develop a more efficient adaptive quantization method in the lazily-aggregated setting to improve communication efficiency in FL systematically.
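To make the mechanics of (3)-(5) concrete, the following is a minimal NumPy sketch of one lazily-aggregated server round. The placeholder `quantize` operator, the fixed `threshold`, and all variable names are illustrative assumptions for this sketch, not the exact operators used in the paper.

```python
import numpy as np

def lazy_aggregation_round(theta, grads, q_prev, quantize, threshold, alpha):
    """One server round with lazy aggregation, following Eqs. (3)-(5)."""
    q_new = []
    for g_m, q_m in zip(grads, q_prev):
        delta = quantize(g_m - q_m)          # quantized gradient innovation, Eq. (3)
        if np.sum(delta ** 2) <= threshold:  # skip rule, Eq. (4): reuse the old gradient
            q_new.append(q_m)
        else:                                # upload: refresh the stored quantized gradient
            q_new.append(q_m + delta)
    # global aggregation, Eq. (5): average over fresh and reused quantized gradients
    return theta - alpha * np.mean(q_new, axis=0), q_new

# Toy usage with an identity "quantizer" and an arbitrary threshold, for illustration only.
d, M = 4, 3
theta = np.zeros(d)
grads = [np.random.randn(d) for _ in range(M)]
q_prev = [np.zeros(d) for _ in range(M)]
theta, q_prev = lazy_aggregation_round(theta, grads, q_prev, lambda v: v, 0.5, alpha=0.1)
```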
3 ADAPTIVE QUANTIZATION OF LAZILY-AGGREGATED GRADIENTS
Given the above limitations of the naive joint use of the existing adaptive quantization criterion and lazy aggregation strategy, this paper aims to design a unifying procedure for communication efficiency optimization in which the quantization level and the communication frequency are considered synergistically and interactively.

3.1 OPTIMAL QUANTIZATION LEVEL
First, we introduce the definitions of a deterministic rounding quantizer and a fully-aggregated model.

Definition 3.1 (Deterministic mid-tread quantizer). Every element of the gradient innovation of device $m$ at epoch $k$ is mapped to an integer $[\psi_m^k]_i$ as

$$\left[\psi_m^k\right]_i = \left\lfloor \frac{\left[\nabla f_m(\theta^k)\right]_i - \left[q_m^{k-1}\right]_i + R_m^k}{2\tau_m^k R_m^k} + \frac{1}{2} \right\rfloor, \quad \forall i \in \{1, 2, \dots, d\}, \qquad (6)$$

where $\nabla f_m(\theta^k)$ denotes the current unquantized gradient, $R_m^k = \|\nabla f_m(\theta^k) - q_m^{k-1}\|_\infty$ denotes the quantization range, $b_m^k$ denotes the quantization level, and $\tau_m^k := 1/(2^{b_m^k} - 1)$ denotes the quantization granularity. More explanations of this quantizer are given in Appendix A.2.

Definition 3.2 (Fully-aggregated model). The fully-aggregated model $\tilde{\theta}$ without lazy aggregation at epoch $k$ is computed by

$$\tilde{\theta}^{k+1} = \theta^k - \frac{\alpha}{M}\sum_{m\in\mathcal{M}}\left(q_m^{k-1} + \Delta q_m^k\right). \qquad (7)$$

Lemma 3.1. The influence of lazy aggregation at communication round $k$ can be bounded by

$$\left\|\tilde{\theta}^k - \theta^k\right\|_2^2 \leq \frac{4\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m\in\mathcal{M}_c^k}\left(\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 - \left\|\tau_m^k R_m^k \mathbf{1}\right\|_2\right)^2 + 4(R_m^k)^2 d + \frac{d}{2}\right). \qquad (8)$$

Corresponding to Lemma 3.1, since $R_m^k$ is independent of $\tau_m^k$, we can formulate an optimization problem that minimizes the upper bound of this model deviation caused by update skipping for each device $m$:

$$\underset{0 < \tau_m^k \leq 1}{\text{minimize}} \left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 - \left\|\tau_m^k R_m^k \mathbf{1}\right\|_2\right)^2 \quad \text{subject to} \quad \tau_m^k = \frac{1}{2^{b_m^k} - 1}. \qquad (9)$$

Solving this optimization problem gives AQUILA an adaptive strategy (10) that selects the optimal quantization level based on the quantization range $R_m^k$, the dimension $d$ of the local model, the current gradient $\nabla f_m(\theta^k)$, and the last uploaded quantized gradient $q_m^{k-1}$:

$$(b_m^k)^* = \left\lfloor \log_2\left(\frac{R_m^k\sqrt{d}}{\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2} + 1\right)\right\rfloor. \qquad (10)$$

The superiority of (10) comes from the following three aspects. First, since $R_m^k \geq [\nabla f_m(\theta^k)]_i - [q_m^{k-1}]_i \geq -R_m^k$, the optimal quantization level $(b_m^k)^*$ must be greater than or equal to 1. Second, AQUILA can personalize an optimal quantization level for each device corresponding to its own gradient, whereas in AdaQuantFL each device merely utilizes an identical quantization level determined by the global loss. Third, the gradient innovation and the quantization range $R_m^k$ tend to fluctuate along the training process instead of continually descending, which prevents the quantization level from increasing tremendously, in contrast to AdaQuantFL.

3.2 PRECISE LAZY AGGREGATION CRITERION
Definition 3.3 (Quantization error). The global quantization error $\varepsilon^k$ is defined as the difference between the current unquantized gradient $\nabla f(\theta^k)$ and its quantized value $q^{k-1} + \Delta q^k$, i.e.,

$$\varepsilon^k = \nabla f(\theta^k) - q^{k-1} - \Delta q^k, \qquad (11)$$

where $\nabla f(\theta^k) = \frac{1}{M}\sum_{m\in\mathcal{M}} \nabla f_m(\theta^k)$, $q^{k-1} = \frac{1}{M}\sum_{m\in\mathcal{M}} q_m^{k-1}$, and $\Delta q^k = \frac{1}{M}\sum_{m\in\mathcal{M}} \Delta q_m^k$.

To better fit the larger quantization errors induced by the fewer quantization bits in (10), AQUILA possesses a new communication criterion that avoids a potential expansion of the group of skipped devices:

$$\left\|\Delta q_m^k\right\|_2^2 + \left\|\varepsilon_m^k\right\|_2^2 \leq \frac{\beta}{\alpha^2}\left\|\theta^k - \theta^{k-1}\right\|_2^2, \quad \forall m \in \mathcal{M}_c^k, \qquad (12)$$

where $\beta \geq 0$ is a tuning factor. Note that this skipping rule is employed at epoch $k$, in which each device $m$ calculates its quantized gradient innovation $\Delta q_m^k$ and quantization error $\varepsilon_m^k$, and then uses this rule to decide whether to upload $\Delta q_m^k$. A comparison of AQUILA's skip rule with LAQ's is given in Appendix A.2. Instead of storing a large number of previous model parameters as LAQ does, the strength of (12) is that AQUILA directly uses the global models of two adjacent rounds in the skip condition, which does not need to estimate the global gradient (and is thus more precise), requires fewer hyperparameters to adjust, and considerably reduces the storage pressure on local devices. This is especially important for small-capacity devices (e.g., sensors) in practical IoT scenarios.
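As a concrete illustration of the device-side logic, here is a minimal NumPy sketch combining the adaptive level selection (10), the mid-tread quantizer (6) with its decoding (32), and the skip rule (12). The function name, signature, and the degenerate-case guard are assumptions of this sketch rather than the authors' implementation.

```python
import numpy as np

def device_step(grad, q_prev, theta, theta_prev, alpha, beta):
    """AQUILA device-side step: adaptive level (10), quantizer (6)/(32), skip rule (12)."""
    innov = grad - q_prev                        # gradient innovation
    R = np.max(np.abs(innov))                    # quantization range R_m^k (infinity norm)
    if R == 0:                                   # degenerate case: nothing to transmit
        return False, np.zeros_like(grad), np.zeros_like(grad)
    d = grad.size
    # optimal quantization level, Eq. (10)
    b = int(np.floor(np.log2(R * np.sqrt(d) / np.linalg.norm(innov) + 1)))
    tau = 1.0 / (2 ** b - 1)                     # quantization granularity
    # deterministic mid-tread quantizer: encode via Eq. (6), decode via Eq. (32)
    psi = np.floor((innov + R) / (2 * tau * R) + 0.5)
    delta_q = 2 * tau * R * psi - R              # quantized gradient innovation
    eps = grad - (q_prev + delta_q)              # local quantization error, cf. Eq. (11)
    # lazy aggregation skip rule, Eq. (12): upload only when the change is large enough
    lhs = np.sum(delta_q ** 2) + np.sum(eps ** 2)
    rhs = beta / alpha ** 2 * np.sum((theta - theta_prev) ** 2)
    return lhs > rhs, delta_q, eps
```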
Algorithm 1 Communication Efficient FL with AQUILA
Input: the number of communication rounds K, the learning rate α.
Initialize: the initial global model parameter θ^0.
1: Server broadcasts θ^0 to all devices. ▷ For the initial round k = 0.
2: for each device m ∈ M in parallel do
3:   Calculate the local gradient ∇f_m(θ^0).
4:   Compute (b_m^0)^* by setting q_m^{-1} = 0 in (10) and the quantized gradient innovation Δq_m^0, and transmit it back to the server side.
5: end for
6: for k = 1, 2, ..., K do
7:   Server broadcasts θ^k to all devices.
8:   for each device m ∈ M in parallel do
9:     Calculate the local gradient ∇f_m(θ^k), the optimal local quantization level (b_m^k)^* by (10), and the quantized gradient innovation Δq_m^k.
10:    if (12) does not hold for device m then ▷ If it holds, skip uploading.
11:      Device m transmits Δq_m^k to the server.
12:    end if
13:  end for
14:  Server updates θ^{k+1} using the saved previous global quantized gradient q^{k-1} and the received quantized gradient innovations Δq_m^k: θ^{k+1} := θ^k − α(q^{k−1} + (1/M)∑_{m∈M^k} Δq_m^k).
15:  Server saves the average quantized gradient q^k for the next aggregation.
16: end for

The detailed process of AQUILA is summarized in Algorithm 1. At epoch $k = 0$, each device calculates $(b_m^0)^*$ by setting $q_m^{-1} = 0$ and uploads $\Delta q_m^0$ to the server, since (12) is not satisfied. At epoch $k \in \{1, 2, \dots, K\}$, the server first broadcasts the global model $\theta^k$ to all devices. Each device $m$ computes $\nabla f_m(\theta^k)$ with its local training data and then uses it to calculate an optimal quantization level by (10). Subsequently, each device computes its gradient innovation after quantization and determines whether or not to upload based on the communication criterion (12). Finally, the server updates the new global model $\theta^{k+1}$ with the up-to-date quantized gradients $q_m^{k-1} + \Delta q_m^k$ for those devices that transmit uploads at epoch $k$, while reusing the old quantized gradients $q_m^{k-1}$ for those that skip.

4 THEORETICAL DERIVATION AND ANALYSIS OF AQUILA
As aforementioned, we bound the model deviation caused by skipping updates with respect to the quantization bits. Specifically, if the communication criterion (12) holds for device $m$ at epoch $k$, device $m$ does not contribute to epoch $k$'s gradient; otherwise, the loss caused by device $m$ is minimized through the optimal quantization level selection criterion (10). In this section, the theoretical convergence derivation of AQUILA is based on the following standard assumptions.

Assumption 4.1 (L-smoothness). Each local objective function $f_m$ is $L_m$-smooth, i.e., there exists a constant $L_m > 0$ such that $\forall x, y \in \mathbb{R}^d$,

$$\|\nabla f_m(x) - \nabla f_m(y)\|_2 \leq L_m\|x - y\|_2, \qquad (13)$$

which implies that the global objective function $f$ is $L$-smooth with $L \leq \bar{L} = \frac{1}{M}\sum_{m=1}^{M} L_m$.

Assumption 4.2 (Uniform lower bound). For all $x \in \mathbb{R}^d$, there exists $f^* \in \mathbb{R}$ such that $f(x) \geq f^*$.

Lemma 4.1. Following the assumption that the function $f$ is $L$-smooth, we have

$$f(\theta^{k+1}) - f(\theta^k) \leq -\frac{\alpha}{2}\left\|\nabla f(\theta^k)\right\|_2^2 + \alpha\left(\Big\|\frac{1}{M}\sum_{m\in\mathcal{M}_c^k}\Delta q_m^k\Big\|_2^2 + \left\|\varepsilon^k\right\|_2^2\right) + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left\|\theta^{k+1} - \theta^k\right\|_2^2. \qquad (14)$$

4.1 CONVERGENCE ANALYSIS FOR THE GENERALLY NON-CONVEX CASE
Theorem 4.1. Suppose Assumptions 4.1, 4.2, and B.1 (29) are satisfied. If $\mathcal{M}_c^k \neq \emptyset$, the global objective function $f$ satisfies

$$f(\theta^{k+1}) - f(\theta^k) \leq -\frac{\alpha}{2}\left\|\nabla f(\theta^k)\right\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left\|\theta^{k+1} - \theta^k\right\|_2^2 + \frac{\beta\gamma}{\alpha}\left\|\theta^k - \theta^{k-1}\right\|_2^2. \qquad (15)$$

Corollary 4.1. Let all the assumptions of Theorem 4.1 hold and $\frac{L}{2} - \frac{1}{2\alpha} + \frac{\beta\gamma}{\alpha} \leq 0$. Then AQUILA requires

$$K = O\left(\frac{2\omega_1}{\alpha\epsilon^2}\right) \qquad (16)$$

communication rounds, with $\omega_1 = f(\theta^1) - f(\theta^*) + \frac{\beta\gamma}{\alpha}\|\theta^1 - \theta^0\|_2^2$, to achieve $\min_k \|\nabla f(\theta^k)\|_2^2 \leq \epsilon^2$.
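For intuition, the step-size condition of Corollary 4.1 and the resulting round count (16) can be checked numerically. The constants below (alpha, beta, gamma, L, omega1, eps) are illustrative assumptions chosen to satisfy the condition, not values taken from the paper's experiments.

```python
# Numeric check of Corollary 4.1's condition L/2 - 1/(2*alpha) + beta*gamma/alpha <= 0,
# using illustrative constants (assumptions of this sketch, not the paper's settings).
alpha, beta, gamma, L = 0.1, 0.05, 2.0, 2.5

lhs = L / 2 - 1 / (2 * alpha) + beta * gamma / alpha
assert lhs <= 0, "step-size condition of Corollary 4.1 violated"  # here lhs = -2.75

# Rounds needed to reach min_k ||grad f(theta^k)||^2 <= eps^2, per Eq. (16):
omega1 = 1.0   # assumed initial sub-optimality f(theta^1) - f(theta*) + (beta*gamma/alpha)||theta^1 - theta^0||^2
eps = 1e-2
K = 2 * omega1 / (alpha * eps ** 2)
print(f"K = O({K:.0f}) communication rounds")  # K = O(200000)
```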
Compared to LAG. Corresponding to Eq. (70) in Chen et al. (2018), LAG defines a Lyapunov function $V^k := f(\theta^k) - f(\theta^*) + \sum_{d=1}^{D}\beta_d\|\theta^{k+1-d} - \theta^{k-d}\|_2^2$ and claims that it satisfies

$$V^{k+1} - V^k \leq -\left(\frac{\alpha}{2} - \tilde{c}(\alpha, \beta_1)(1+\rho)\alpha^2\right)\left\|\nabla f(\theta^k)\right\|_2^2, \qquad (17)$$

where $\tilde{c}(\alpha, \beta_1) = L/2 - 1/(2\alpha) + \beta_1$, $\beta_1 = D\xi/(2\alpha\eta)$, $\xi < 1/D$, and $\rho > 0$. The above result (17) indicates that LAG requires

$$K_{\mathrm{LAG}} = O\left(\frac{2\omega_1}{\left(\alpha - 2\tilde{c}(\alpha, \beta_1)(1+\rho)\alpha^2\right)\epsilon^2}\right) \qquad (18)$$

communication rounds to converge. Since the term $\tilde{c}(\alpha, \beta_1)(1+\rho)\alpha^2$ is non-negative, we readily have $\alpha \geq \alpha - 2\tilde{c}(\alpha, \beta_1)(1+\rho)\alpha^2$, so the denominator in (16) is no smaller than that in (18), which demonstrates that AQUILA achieves a better convergence rate than LAG under an appropriate selection of $\alpha$.

4.2 CONVERGENCE ANALYSIS UNDER THE POLYAK-ŁOJASIEWICZ CONDITION
Assumption 4.3 (µ-PŁ condition). The function $f$ satisfies the PŁ condition with a constant $\mu > 0$, that is,

$$\left\|\nabla f(\theta^k)\right\|_2^2 \geq 2\mu\left(f(\theta^k) - f(\theta^*)\right). \qquad (19)$$

Theorem 4.2. Suppose Assumptions 4.1, 4.2, and 4.3 are satisfied and $\mathcal{M}_c^k \neq \emptyset$. If the hyperparameters satisfy $\frac{\beta\gamma}{\alpha} \leq (1 - \alpha\mu)\left(\frac{1}{2\alpha} - \frac{L}{2}\right)$, then the global objective function satisfies

$$f(\theta^{k+1}) - f(\theta^k) \leq -\alpha\mu\left(f(\theta^k) - f(\theta^*)\right) + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left\|\theta^{k+1} - \theta^k\right\|_2^2 + \frac{\beta\gamma}{\alpha}\left\|\theta^k - \theta^{k-1}\right\|_2^2, \qquad (20)$$

and AQUILA requires

$$K = O\left(-\frac{1}{\log(1 - \alpha\mu)}\log\frac{\omega_1}{\epsilon}\right) \qquad (21)$$

communication rounds, with $\omega_1 = f(\theta^1) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\|\theta^1 - \theta^0\|_2^2$, to achieve $f(\theta^{K+1}) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\|\theta^{K+1} - \theta^K\|_2^2 \leq \epsilon$.

Compared to LAG. According to Eq. (50) in Chen et al. (2018), we have

$$V_K \leq \left(1 - \alpha\mu + \alpha\mu\sqrt{D\xi}\right)^K V_0, \qquad (22)$$

where $\xi < 1/D$. Thus, LAG requires

$$K_{\mathrm{LAG}} = O\left(-\frac{1}{\log\left(1 - \alpha\mu + \alpha\mu\sqrt{D\xi}\right)}\log\frac{\omega_1}{\epsilon}\right) \qquad (23)$$

communication rounds to converge. Compared to Theorem 4.2, we can derive that $\log(1 - \alpha\mu) < \log(1 - \alpha\mu + \alpha\mu\sqrt{D\xi})$, which indicates that AQUILA converges faster than LAG under the PŁ condition.

Remark. We want to emphasize that LAQ introduces a Lyapunov function into its proof, making it extremely complicated. In addition, LAQ can only guarantee that the final objective function converges to a neighborhood of the optimal solution rather than to the exact optimum $f(\theta^*)$. Nevertheless, as discussed in Section 3.2, we utilize the precise model difference in AQUILA as a surrogate for the global gradient and thus simplify the proof.

5 EXPERIMENTS AND DISCUSSION
5.1 EXPERIMENT SETUP
In this paper, we evaluate AQUILA on the CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and WikiText-2 (Merity et al., 2016) datasets, considering the IID and Non-IID data scenarios and heterogeneous model architectures (which are also a crucial challenge in FL) simultaneously. The FL environment is simulated in Python 3.9 with a PyTorch 1.11 (Paszke et al., 2019) implementation. For the diversity of the neural network structures, we train ResNet-18 (He et al., 2016) on the CIFAR-10 dataset, MobileNet-v2 (Sandler et al., 2018) on the CIFAR-100 dataset, and a Transformer (Vaswani et al., 2017) on the WikiText-2 dataset. As for the FL system setting, in the majority of our experiments the system consists of M = 10 devices in total. However, considering the large-scale nature of FL, we also validate AQUILA on larger systems of M = 100/80 total devices for the CIFAR / WikiText-2 datasets. The hyperparameters and additional details of our experiments are given in Appendix A.3.
5.2 HOMOGENEOUS ENVIRONMENT
We first evaluate AQUILA in homogeneous settings where all local models share the same model architecture as the global model. To better demonstrate the effectiveness of AQUILA, its performance is compared with several state-of-the-art methods, including AdaQuantFL, LAQ with fixed levels, LENA (Ghadikolaei et al., 2021), MARINA (Gorbunov et al., 2021), and the naive combination of AdaQuantFL with LAQ. Note that in this homogeneous setting we conduct both IID and Non-IID evaluations on the CIFAR-10 and CIFAR-100 datasets, and an IID evaluation on WikiText-2. To simulate the Non-IID FL setting as in (Diao et al., 2020), each device is allocated at most two classes of data in CIFAR-10 and 10 classes of data in CIFAR-100, and the amount of data for each label is balanced. The experimental results are presented in Fig. 1, where 100% implies that all local models share the same structure as the global model (i.e., homogeneity), 100% (80 devices) denotes that the experiment is conducted on an 80-device system, and LAdaQ represents the naive combination of AdaQuantFL and LAQ. For better illustration, the results have been smoothed by their standard deviation: the solid lines represent values after smoothing, and transparent shades of the same colors around them represent the true values. Additionally, Table 2 shows the total number of bits transmitted by all devices throughout the FL training process. The comprehensive experimental results are given in Appendix A.4.

5.3 NON-HOMOGENEOUS SCENARIO
In this section, we also evaluate AQUILA with heterogeneous model structures as in HeteroFL (Diao et al., 2020), where the structures of the local models trained on the device side are heterogeneous. Suppose the global model at epoch $k$ is $\theta^k$ and its size is $d = w_g \times h_g$; then the local model of each device $m$ can be selected by $\theta_m^k = \theta^k[:w_m, :h_m]$, where $w_m = r_m w_g$ and $h_m = r_m h_g$, respectively. In this paper, we choose the model complexity level $r_m = 0.5$. Most of the symbols in Fig. 2 are identical to those in Fig. 1; 100%-50% is a newly introduced symbol indicating that half of the devices share the same structure as the global model while the other half have only 50% × 50% of the parameters of the global model.
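The sub-model selection rule above can be sketched in a few lines of NumPy; the stand-in weight matrix and the helper name are illustrative assumptions of this sketch, not the exact HeteroFL implementation.

```python
import numpy as np

def select_submodel(theta, r_m):
    """Slice a device-specific sub-model out of a global weight matrix,
    following theta_m^k = theta^k[:w_m, :h_m] with w_m = r_m * w_g, h_m = r_m * h_g."""
    w_g, h_g = theta.shape
    w_m, h_m = int(r_m * w_g), int(r_m * h_g)
    return theta[:w_m, :h_m]

theta = np.random.randn(64, 64)       # a stand-in global weight matrix
local = select_submodel(theta, 0.5)   # r_m = 0.5, as in the paper's experiments
print(local.shape)                    # (32, 32): 50% * 50% of the parameters
```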
Performance Analysis. First of all, AQUILA achieves a significant transmission reduction compared to the naive combination of LAQ and AdaQuantFL on all datasets, which demonstrates the superiority of AQUILA's efficiency. Specifically, Table 2 indicates that, compared to the naive combination, AQUILA saves 57.49% of the transmitted bits in the 80-device system on the WikiText-2 dataset and 23.08% of the transmitted bits in the 100-device system on the CIFAR-100 dataset. The other results in Table 3 also show an obvious reduction in the total transmitted bits required for convergence. Second, in Fig. 1 and Fig. 2, the trend of AQUILA's communication bits per round clearly verifies the necessity and effectiveness of our well-designed adaptive quantization level and skip criterion. In these two figures, the number of bits transmitted in each round of AQUILA fluctuates somewhat, indicating the effectiveness of AQUILA's selection rule, while the number of transmitted bits remains at quite a low level, suggesting that the adaptive quantization principle makes training more efficient. Moreover, the figures also show that the quantization level selected by AQUILA does not continuously increase during training, unlike AdaQuantFL's. In addition, based on these two figures, we can also conclude that AQUILA converges faster under the same communication costs. Finally, AQUILA is capable of adapting to a wide range of challenging FL circumstances. In the Non-IID scenario and with heterogeneous model structures, AQUILA still outperforms the other algorithms by significantly reducing the overall transmitted bits while maintaining the same convergence property and objective function value. In particular, AQUILA reduces overall communication costs by 60.4% compared to LENA and by 57.2% compared to MARINA on average. These experimental results in non-homogeneous FL settings prove that AQUILA can be stably employed in more general and complicated FL scenarios.

5.4 ABLATION STUDY ON THE IMPACT OF THE TUNING FACTOR β
One key contribution of AQUILA is the new lazy aggregation criterion (12) for reducing the communication frequency. In this part, we evaluate the loss performance under different values of the tuning factor β in Fig. 3. As β grows within a certain range, the convergence speed of the model slows down (due to lazy aggregation), but the model eventually converges to the same performance while considerably reducing the communication overhead. Nevertheless, increasing the value of β too far leads to a decrease in the final model performance, since it skips so many essential uploads that training becomes deficient. The accuracy (perplexity) comparison of AQUILA under various selections of the tuning factor β is shown in Fig. 10, which indicates the same trend. To sum up, we should choose the value of the factor β so as to maintain the model's performance while minimizing the total number of transmitted bits. Specifically, we select β = 0.1, 0.25, and 1.25 for the CIFAR-10, CIFAR-100, and WikiText-2 datasets in our evaluation, respectively.

6 CONCLUSIONS AND FUTURE WORK
This paper proposes a communication-efficient FL procedure that simultaneously adjusts two mutually dependent degrees of freedom: the communication frequency and the quantization level. With the close cooperation of the novel adaptive quantization and the adjusted lazy aggregation strategy derived in this paper, the proposed AQUILA has been proven capable of reducing transmission costs while maintaining the convergence guarantee and model performance of existing methods. The evaluation with Non-IID data distributions and various heterogeneous model architectures demonstrates that AQUILA is compatible with non-homogeneous FL environments.

REPRODUCIBILITY
We present the overall theorem statements and proofs for our main results in the Appendix, as well as the necessary experimental plotting figures. Furthermore, we submit the code of AQUILA in the supplementary material, including all the hyperparameters and a requirements file, to help the public reproduce our experimental results. Our algorithm is straightforward, well described, and easy to implement.

ETHICS STATEMENT
All evaluations of AQUILA are performed on publicly available datasets for reproducibility purposes. This paper empirically studies the performance of various state-of-the-art algorithms and therefore probably introduces no new ethical or cultural problems. This paper does not utilize any new dataset.

A APPENDIX
The appendix includes supplementary experimental results, mathematical proofs of the aforementioned theorems, and a detailed derivation of the novel adaptive quantization criterion and lazy aggregation strategy.
Compared to Fig. 1 and Fig. 2 in the main text, the result figures in the appendix show a more comprehensive evaluation of AQUILA, containing more detailed information, including but not limited to accuracy-vs-steps and training-loss-vs-steps curves.

A.1 OVERALL FRAMEWORK OF AQUILA
The cooperation of the novel adaptive quantization criterion (10) and the lazy aggregation strategy (12) is illustrated in Fig. 4a. Compared to the naive combination of AdaQuantFL and LAQ, in which the mutual influence between adaptive quantization and lazy aggregation is not considered (Fig. 4b), AQUILA adaptively optimizes the allocation of quantization bits throughout training to promote the convergence of lazy aggregation, and at the same time utilizes the lazy aggregation strategy to improve the efficiency of adaptive quantization by compressing the transmission with a lower quantization level.

A.2 EXPLANATION OF THE QUANTIZER AND THE SKIP RULE OF LAQ
The quantizer (6) is a deterministic quantizer that, at each dimension, maps the gradient innovation to the closest point of a one-dimensional grid. The range of the grid is $R_m^k$, and the granularity is determined by the quantization level via $\tau_m^k$. Each dimension of the gradient innovation is mapped to an integer in $\{0, 1, 2, 3, \dots, 2^b - 1\}$. More precisely, the $1/2$ ensures mapping to the closest integer instead of flooring to a smaller one, and the $R_m^k$ in the numerator ensures that the mapped integer is non-negative. As a result, when the gradient innovation is transmitted to the central server, 32 bits are used for the range and $b \cdot d$ bits for the mapped integers; thus $32 + b \cdot d$ bits are transmitted in total. The difference between (6) and (32) (Lemma B.2) is that (6) encodes the raw gradient innovation vector into an integer vector, whilst (32) decodes the integer vector into a quantized gradient innovation vector. Specifically, in the training process, each client utilizes (6) to encode the gradient innovation to an integer at each dimension, and afterwards the integer vector $\psi_m^k$ and $\tau_m^k$ are sent to the central server. After receiving them, the central server can decode the quantized gradient innovation as (32) states.

The skip rule of LAQ is measured by the summation of the accumulated model difference and the quantization errors:

$$\left\|\Delta q_m^k\right\|_2^2 \leq \frac{1}{\alpha^2 M^2}\sum_{d'=1}^{D} \xi_{d'}\left\|\theta^{k+1-d'} - \theta^{k-d'}\right\|_2^2 + 3\left(\left\|\varepsilon_m^k\right\|_2^2 + \left\|\hat{\varepsilon}_m^{k-1}\right\|_2^2\right), \qquad (24)$$

where $\xi_{d'}$ is a series of manually selected scalars and $D$ is also predetermined; $\varepsilon_m^k$ is the quantization error of client $m$ at epoch $k$, and $\hat{\varepsilon}_m^{k-1}$ is the quantization error of client $m$ at the last time it uploaded its gradient innovation. Please refer to Sun et al. (2020) for more details on (24). In order to compute the LAQ skip threshold, each client has to store a large amount of previous information.

The differences between the AQUILA skipping criterion and the LAQ skipping criterion are as follows. First, the AQUILA threshold is easier to compute for a local client: compared to the LAQ skipping criterion, the AQUILA criterion is more concise and thus requires less storage and computing power. Second, the AQUILA criterion is easier to tune because many fewer hyperparameters are introduced: in the LAQ criterion, $\alpha$, $D$, and $\{\xi_{d'}\}_{d'=1}^{D}$ are all manually selected, whilst only the two hyperparameters $\alpha$ and $\beta$ appear in the AQUILA criterion. Third, with the given threshold, AQUILA has good theoretical properties: its analysis is easier to follow, with no Lyapunov function introduced as in LAQ, and the results also show that AQUILA can achieve a better convergence rate in the non-convex case and under the PŁ condition.
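To make the encode/decode pair concrete, here is a small NumPy round trip of Eqs. (6) and (32), with the bit accounting described above; the array shape, bit width, and names are illustrative assumptions of this sketch.

```python
import numpy as np

def encode(innov, b):
    """Encode the gradient innovation into integers via the mid-tread quantizer, Eq. (6)."""
    R = np.max(np.abs(innov))                      # quantization range
    tau = 1.0 / (2 ** b - 1)                       # granularity
    psi = np.floor((innov + R) / (2 * tau * R) + 0.5).astype(np.int64)
    return psi, R, tau

def decode(psi, R, tau):
    """Decode the integers back into the quantized innovation, Eq. (32)."""
    return 2 * tau * R * psi - R

d, b = 8, 3
innov = np.random.randn(d)
psi, R, tau = encode(innov, b)
recon = decode(psi, R, tau)
print(np.max(np.abs(innov - recon)) <= tau * R)    # rounding error bounded by half a grid step
print(f"bits sent: {32 + b * d}")                  # 32 bits for the range, b bits per entry
```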
A.3 EXPERIMENT SETUP
In this section, we provide some extra hyperparameter settings for our evaluation. For LAQ, we set $D = 10$ and $\xi_1 = \xi_2 = \dots = \xi_D = 0.8/D$, the same as the setting in their paper. For LENA, we set $\beta_{\mathrm{LENA}} = 40$ in their trigger condition. For MARINA, we calculate the uploading probability of the Bernoulli distribution as $p = \xi_Q/d$, as announced in their paper. In addition, we choose the cross-entropy function as the objective function in the experiment part. Table 1 shows the hyperparameter details of our evaluation.

A.4 COMPREHENSIVE EXPERIMENT RESULTS
This section covers all the experimental results of our paper.

B BASIC FACTS AND SOME LEMMAS
Notations: Bold fonts denote vectors (e.g., $\theta$); normal fonts denote scalars (e.g., $\alpha$). The subscript $m$ is used for quantities of a local device $m$ (e.g., $f_m(\theta)$); a symbol without a subscript denotes an average over all devices (e.g., $f(\theta)$).

Frequently used norm inequalities. Suppose $n \in \mathbb{N}^+$ and $\|\cdot\|_2$ denotes the $\ell_2$-norm. For $p \in \mathbb{R}^+$ and $x_i, a, b \in \mathbb{R}^d$, there holds:
1. Norm summation inequality: $\left\|\sum_{i=1}^{n} x_i\right\|_2^2 \leq n\sum_{i=1}^{n}\|x_i\|_2^2.$ (25)
2. Inner-product identity: $\langle a, b\rangle = \frac{1}{2}\left(\|a\|_2^2 + \|b\|_2^2 - \|a - b\|_2^2\right).$ (26)
3. Young's inequality: $\|a + b\|_2^2 \leq (1+p)\|a\|_2^2 + (1+p^{-1})\|b\|_2^2.$ (27)
4. Minkowski's inequality: $\|a + b\|_2 \leq \|a\|_2 + \|b\|_2.$ (28)

Assumption B.1. All devices' quantization error $\varepsilon^k$ is constrained by the total error of the omitted devices, i.e., for all $k = 0, 1, \dots, K$, if $\mathcal{M}_c^k \neq \emptyset$, there exists $\gamma \geq 1$ such that

$$\left\|\varepsilon^k\right\|_2^2 = \left\|\frac{1}{M}\sum_{m\in\mathcal{M}}\varepsilon_m^k\right\|_2^2 \leq \frac{\gamma}{M^2}\left\|\sum_{m\in\mathcal{M}_c^k}\varepsilon_m^k\right\|_2^2, \qquad (29)$$

where $K$ denotes the termination time and $\varepsilon_m^k = \nabla f_m(\theta^k) - \left(q_m^{k-1} + \Delta q_m^k\right)$. This assumption is easy to verify when $\mathcal{M}_c^k \neq \emptyset$: a bounded variable (here $\varepsilon^k$) will always be bounded by a part of itself ($\frac{1}{M}\sum_{m\in\mathcal{M}_c^k}\varepsilon_m^k$) multiplied by a real number ($\gamma$). Note that there is another nontrivial scenario in which $\mathcal{M}_c^k \neq \emptyset$ but $\varepsilon_m^k = 0$ for all $m \in \mathcal{M}_c^k$, which implies that $\gamma = 0$ or does not exist and conflicts with our assumption. However, this situation only happens when all entries of $\varepsilon_m^k$ are zero, i.e., $[\nabla f_m(\theta^k)]_i = [q_m^{k-1}]_i$ for all $1 \leq i \leq d$.

Lemma B.1. The summation of the quantized gradient innovation and the quantization error is bounded by the global model difference:

$$\left\|\frac{1}{M}\sum_{m\in\mathcal{M}_c^k}\Delta q_m^k\right\|_2^2 + \left\|\varepsilon^k\right\|_2^2 \leq \frac{\beta\gamma}{\alpha^2}\left\|\theta^k - \theta^{k-1}\right\|_2^2. \qquad (30)$$

Proof.

$$\begin{aligned}
\Big\|\frac{1}{M}\sum_{m\in\mathcal{M}_c^k}\Delta q_m^k\Big\|_2^2 + \|\varepsilon^k\|_2^2
&\overset{(a)}{\leq} \Big\|\frac{1}{M}\sum_{m\in\mathcal{M}_c^k}\Delta q_m^k\Big\|_2^2 + \gamma\Big\|\frac{1}{M}\sum_{m\in\mathcal{M}_c^k}\varepsilon_m^k\Big\|_2^2 \\
&\overset{(25)}{\leq} \frac{|\mathcal{M}_c^k|}{M^2}\sum_{m\in\mathcal{M}_c^k}\big(\|\Delta q_m^k\|_2^2 + \gamma\|\varepsilon_m^k\|_2^2\big)
\overset{(b)}{\leq} \frac{\gamma|\mathcal{M}_c^k|}{M^2}\sum_{m\in\mathcal{M}_c^k}\big(\|\Delta q_m^k\|_2^2 + \|\varepsilon_m^k\|_2^2\big) \\
&\overset{(c)}{\leq} \frac{\beta\gamma|\mathcal{M}_c^k|^2}{\alpha^2 M^2}\|\theta^k - \theta^{k-1}\|_2^2 \leq \frac{\beta\gamma}{\alpha^2}\|\theta^k - \theta^{k-1}\|_2^2,
\end{aligned} \qquad (31)$$

where (a) follows from Assumption B.1, (b) follows from $\gamma \geq 1$ by definition, and (c) uses our novel trigger condition (12).

Lemma B.2. From Definition 3.1, we can derive the relationship between the quantized gradient innovation $\Delta q_m^k$ and its quantized representation $\psi_m^k$, which uses $b_m^k$ bits for each dimension:

$$\Delta q_m^k = 2\tau_m^k R_m^k \psi_m^k - R_m^k\mathbf{1}, \qquad (32)$$

where $\mathbf{1}\in\mathbb{R}^d$ denotes the all-ones vector. Remark: We can utilize (32) to calculate the quantized gradient innovation in the experimental implementation.
C MISSING PROOF OF LEMMA 3.1 AND THE DERIVATION OF $b_m^k$
With lazy aggregation, the actual aggregated model at epoch $k$ is

$$\theta^{k+1} = \theta^k - \frac{\alpha}{M}\sum_{m\in\mathcal{M}^k}\left(q_m^{k-1} + \Delta q_m^k\right) - \frac{\alpha}{M}\sum_{m\in\mathcal{M}_c^k} q_m^{k-1}. \qquad (33)$$

Suppose $\Delta_m^k$ denotes the rounding loss of device $m$ at epoch $k$ and $\psi_m^k$ denotes the quantized representation of the local gradient innovation as in Definition 3.1, i.e.,

$$\Delta_m^k = \psi_m^k - \frac{\nabla f_m(\theta^k) - q_m^{k-1} + R_m^k\mathbf{1}}{2\tau_m^k R_m^k} - \frac{1}{2}\mathbf{1}. \qquad (34)$$

With (7), (33), and (34), the model deviation $\|\tilde{\theta}^k - \theta^k\|_2^2$ caused by skipping gradients can be written as

$$\begin{aligned}
\big\|\tilde{\theta}^k - \theta^k\big\|_2^2 &= \Big\|\frac{\alpha}{M}\sum_{m\in\mathcal{M}_c^k}\Delta q_m^k\Big\|_2^2 = \Big\|\frac{\alpha}{M}\sum_{m\in\mathcal{M}_c^k}\big(2\tau_m^k R_m^k\psi_m^k - R_m^k\mathbf{1}\big)\Big\|_2^2 \\
&\overset{(25)}{\leq} \frac{\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m\in\mathcal{M}_c^k}\big\|2\tau_m^k R_m^k\psi_m^k - R_m^k\mathbf{1}\big\|_2^2 \\
&\overset{(34)}{\leq} \frac{2\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m\in\mathcal{M}_c^k}\Big(\big\|\nabla f_m(\theta^k) - q_m^{k-1} + \tau_m^k R_m^k\mathbf{1}\big\|_2^2 + \big\|\Delta_m^k\big\|_2^2\Big) \\
&\overset{(a)}{\leq} \frac{2\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m\in\mathcal{M}_c^k}\Big(\big\|\nabla f_m(\theta^k) - q_m^{k-1} + \tau_m^k R_m^k\mathbf{1}\big\|_2^2 + d\Big) \\
&\overset{(28)}{\leq} \frac{2\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m\in\mathcal{M}_c^k}\Big(\big(\|\nabla f_m(\theta^k) - q_m^{k-1}\|_2 + \|\tau_m^k R_m^k\mathbf{1}\|_2\big)^2 + d\Big) \\
&\leq \frac{4\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m\in\mathcal{M}_c^k}\Big(\big(\|\nabla f_m(\theta^k) - q_m^{k-1}\|_2 - \|\tau_m^k R_m^k\mathbf{1}\|_2\big)^2 + 4\|\tau_m^k R_m^k\mathbf{1}\|_2^2 + \frac{d}{2}\Big) \\
&\overset{(b)}{\leq} \frac{4\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m\in\mathcal{M}_c^k}\Big(\big(\|\nabla f_m(\theta^k) - q_m^{k-1}\|_2 - \|\tau_m^k R_m^k\mathbf{1}\|_2\big)^2 + 4(R_m^k)^2 d + \frac{d}{2}\Big),
\end{aligned} \qquad (35)$$

where $\mathbf{1}\in\mathbb{R}^d$ denotes the all-ones vector, (a) follows from $\Delta_m^k \in (-1, 0]^d$, and (b) from $R_m^k \geq \tau_m^k R_m^k \geq 0$. Since $R_m^k$ is independent of $\tau_m^k$, we can formulate an optimization problem about $\tau_m^k$ for device $m$ at communication round $k$ as follows:

$$\min_{0 < \tau_m^k \leq 1}\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 - \left\|\tau_m^k R_m^k\mathbf{1}\right\|_2\right)^2. \qquad (36)$$

The optimal solution of $\tau_m^k$ in (36) is

$$(\tau_m^k)^* = \frac{\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2}{R_m^k\sqrt{d}}. \qquad (37)$$

Then the optimal adaptive quantization level $(b_m^k)^*$ equals

$$(b_m^k)^* = \left\lfloor\log_2\left(\frac{1}{(\tau_m^k)^*} + 1\right)\right\rfloor = \left\lfloor\log_2\left(\frac{R_m^k\sqrt{d}}{\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2} + 1\right)\right\rfloor. \qquad (38)$$

Notice that $(b_m^k)^* \geq 1$ always holds since $(\tau_m^k)^* \leq 1$.
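The closed-form minimizer (37) and the resulting level (38) are easy to sanity-check numerically; the following sketch compares (37) against a brute-force grid search. The random innovation vector and tolerance are assumptions of this sketch.

```python
import numpy as np

# Numeric sanity check of the closed-form minimizer (37) of problem (36).
rng = np.random.default_rng(0)
innov = rng.standard_normal(100)        # stand-in for grad f_m(theta^k) - q_m^{k-1}

d = innov.size
R = np.max(np.abs(innov))               # quantization range (inf-norm)
obj = lambda tau: (np.linalg.norm(innov) - tau * R * np.sqrt(d)) ** 2

tau_star = np.linalg.norm(innov) / (R * np.sqrt(d))       # Eq. (37)
grid = np.linspace(1e-4, 1.0, 10_000)
tau_grid = grid[np.argmin([obj(t) for t in grid])]

print(abs(tau_star - tau_grid) < 1e-3)                    # closed form matches grid search
print(np.floor(np.log2(1 / tau_star + 1)) >= 1)           # hence (38) gives b* >= 1
```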
D MISSING PROOF OF LEMMA 4.1, THEOREM 4.1, AND COROLLARY 4.1
Proof. Suppose Assumptions 4.1, 4.2, and B.1 are satisfied and $\mathcal{M}_c^k \neq \emptyset$. For simplicity of the convergence proof, we denote $\Phi^k = \frac{1}{M}\sum_{m\in\mathcal{M}_c^k}\Delta q_m^k$. First, we prove Lemma 4.1:

$$\begin{aligned}
f(\theta^{k+1}) - f(\theta^k) &\leq \langle\nabla f(\theta^k), \theta^{k+1} - \theta^k\rangle + \frac{L}{2}\|\theta^{k+1} - \theta^k\|_2^2 \\
&= \langle\nabla f(\theta^k), -\alpha(\nabla f(\theta^k) - \varepsilon^k - \Phi^k)\rangle + \frac{L}{2}\|\theta^{k+1} - \theta^k\|_2^2 \\
&= -\alpha\|\nabla f(\theta^k)\|_2^2 + \alpha\langle\nabla f(\theta^k), \varepsilon^k + \Phi^k\rangle + \frac{L}{2}\|\theta^{k+1} - \theta^k\|_2^2 \\
&\overset{(26)}{=} -\alpha\|\nabla f(\theta^k)\|_2^2 + \frac{\alpha}{2}\Big(\|\nabla f(\theta^k)\|_2^2 + \|\varepsilon^k + \Phi^k\|_2^2 - \frac{1}{\alpha^2}\|\theta^{k+1} - \theta^k\|_2^2\Big) + \frac{L}{2}\|\theta^{k+1} - \theta^k\|_2^2 \\
&\leq -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \frac{\alpha}{2}\|\varepsilon^k + \Phi^k\|_2^2 + \Big(\frac{L}{2} - \frac{1}{2\alpha}\Big)\|\theta^{k+1} - \theta^k\|_2^2 \\
&\overset{(25)}{\leq} -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \alpha\|\varepsilon^k\|_2^2 + \alpha\|\Phi^k\|_2^2 + \Big(\frac{L}{2} - \frac{1}{2\alpha}\Big)\|\theta^{k+1} - \theta^k\|_2^2.
\end{aligned} \qquad (39)$$

Hence, applying Lemma B.1 (30), we have

$$f(\theta^{k+1}) - f(\theta^k) \overset{(30)}{\leq} -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \Big(\frac{L}{2} - \frac{1}{2\alpha}\Big)\|\theta^{k+1} - \theta^k\|_2^2 + \frac{\beta\gamma}{\alpha}\|\theta^k - \theta^{k-1}\|_2^2, \qquad (40)$$

which gives us Theorem 4.1. Summing up over $k = 1, 2, \dots, K$, we have

$$f(\theta^{K+1}) - f(\theta^1) \leq -\frac{\alpha}{2}\sum_{k=1}^{K}\|\nabla f(\theta^k)\|_2^2 + \Big(\frac{L}{2} - \frac{1}{2\alpha}\Big)\|\theta^{K+1} - \theta^K\|_2^2 + \sum_{k=1}^{K-1}\Big(\frac{L}{2} - \frac{1}{2\alpha} + \frac{\beta\gamma}{\alpha}\Big)\|\theta^{k+1} - \theta^k\|_2^2 + \frac{\beta\gamma}{\alpha}\|\theta^1 - \theta^0\|_2^2. \qquad (41)$$

Notice that inequality (41) holds for both $\mathcal{M}_c^k \neq \emptyset$ and $\mathcal{M}_c^k = \emptyset$. Therefore, for $\big(\frac{L}{2} - \frac{1}{2\alpha} + \frac{\beta\gamma}{\alpha}\big) \leq 0$ and properly chosen hyperparameters, considering the minimum of $\|\nabla f(\theta^k)\|_2^2$,

$$\min_{k=1,\dots,K}\|\nabla f(\theta^k)\|_2^2 \leq \frac{1}{K}\sum_{k=1}^{K}\|\nabla f(\theta^k)\|_2^2 \overset{(41)}{\leq} \frac{2}{\alpha K}\Big(f(\theta^1) - f(\theta^{K+1}) + \frac{\beta\gamma}{\alpha}\|\theta^1 - \theta^0\|_2^2\Big), \qquad (42)$$

and hence

$$\min_{k=1,\dots,K}\|\nabla f(\theta^k)\|_2^2 \leq \frac{2}{\alpha K}\Big(f(\theta^1) - f(\theta^*) + \frac{\beta\gamma}{\alpha}\|\theta^1 - \theta^0\|_2^2\Big) \leq \epsilon^2, \qquad (43)$$

which demonstrates that AQUILA requires $K = O\big(\frac{2\omega_1}{\alpha\epsilon^2}\big)$ communication rounds, with $\omega_1 = f(\theta^1) - f(\theta^*) + \frac{\beta\gamma}{\alpha}\|\theta^1 - \theta^0\|_2^2$, to achieve $\min_{k=1,\dots,K}\|\nabla f(\theta^k)\|_2^2 \leq \epsilon^2$.

E MISSING PROOF OF COROLLARY 4.1 WHEN $\mathcal{M}_c^k = \emptyset$
Proof. Since the skipping subset of devices is empty, from (5) we have

$$\theta^{k+1} - \theta^k = -\frac{\alpha}{M}\sum_{m\in\mathcal{M}}\big(q_m^{k-1} + \Delta q_m^k\big) \overset{(11)}{=} -\frac{\alpha}{M}\sum_{m\in\mathcal{M}}\big(\nabla f_m(\theta^k) - \varepsilon_m^k\big) = -\alpha\big(\nabla f(\theta^k) - \varepsilon^k\big). \qquad (44)$$

From (14), and since $\mathcal{M}_c^k = \emptyset$ implies $\Phi^k = 0$, we have

$$\begin{aligned}
f(\theta^{k+1}) - f(\theta^k) &\leq -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \Big(\frac{L}{2} - \frac{1}{2\alpha}\Big)\|\theta^{k+1} - \theta^k\|_2^2 + \alpha\|\varepsilon^k\|_2^2 \\
&\overset{(27),(44)}{\leq} -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \alpha^2\Big(\frac{L}{2} - \frac{1}{2\alpha}\Big)\Big((1+p)\|\nabla f(\theta^k)\|_2^2 + (1+p^{-1})\|\varepsilon^k\|_2^2\Big) + \alpha\|\varepsilon^k\|_2^2 \\
&= \frac{\alpha}{2}\big((\alpha L - 1)(1+p) - 1\big)\|\nabla f(\theta^k)\|_2^2 + \frac{\alpha}{2}\big((\alpha L - 1)(1+p^{-1}) + 2\big)\|\varepsilon^k\|_2^2. \qquad (45)
\end{aligned}$$

If the factor of $\|\varepsilon^k\|_2^2$ in (45) is non-positive, that is,

$$(\alpha L - 1)(1+p^{-1}) + 2 \leq 0, \qquad (46)$$

then the factor of $\|\nabla f(\theta^k)\|_2^2$ is less than $-\frac{\alpha}{2}$, which indicates that

$$f(\theta^{k+1}) - f(\theta^k) \leq -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2. \qquad (47)$$

Note that it is not difficult to show that (46) and $\frac{L}{2} - \frac{1}{2\alpha} + \frac{\beta\gamma}{\alpha} \leq 0$ can be satisfied at the same time; for instance, $p = 0.1$, $\alpha = 0.1$, $\beta = 0.05$, $\gamma = 2$, $L = 2.5$ satisfies both of them.

F MISSING PROOF OF THEOREM 4.2
Proof. Based on the intermediate result (40) of Theorem 4.1 and Assumption 4.3 (the µ-PŁ condition), we have

$$f(\theta^{k+1}) - f(\theta^k) \overset{(19)}{\leq} -\alpha\mu\big(f(\theta^k) - f(\theta^*)\big) + \Big(\frac{L}{2} - \frac{1}{2\alpha}\Big)\|\theta^{k+1} - \theta^k\|_2^2 + \frac{\beta\gamma}{\alpha}\|\theta^k - \theta^{k-1}\|_2^2, \qquad (48)$$

which is equivalent to

$$f(\theta^{k+1}) - f(\theta^*) \leq (1 - \alpha\mu)\big(f(\theta^k) - f(\theta^*)\big) + \Big(\frac{L}{2} - \frac{1}{2\alpha}\Big)\|\theta^{k+1} - \theta^k\|_2^2 + \frac{\beta\gamma}{\alpha}\|\theta^k - \theta^{k-1}\|_2^2. \qquad (49)$$

Suppose $\frac{\beta\gamma}{\alpha} \leq (1 - \alpha\mu)\big(\frac{1}{2\alpha} - \frac{L}{2}\big)$; then we can show that

$$f(\theta^{k+1}) - f(\theta^*) + \Big(\frac{1}{2\alpha} - \frac{L}{2}\Big)\|\theta^{k+1} - \theta^k\|_2^2 \leq (1 - \alpha\mu)\Big(f(\theta^k) - f(\theta^*) + \Big(\frac{1}{2\alpha} - \frac{L}{2}\Big)\|\theta^k - \theta^{k-1}\|_2^2\Big). \qquad (50)$$

Therefore, unrolling this recursion over $k = 1, 2, \dots, K$, we have

$$f(\theta^{K+1}) - f(\theta^*) + \Big(\frac{1}{2\alpha} - \frac{L}{2}\Big)\|\theta^{K+1} - \theta^K\|_2^2 \leq (1 - \alpha\mu)^K\Big(f(\theta^1) - f(\theta^*) + \Big(\frac{1}{2\alpha} - \frac{L}{2}\Big)\|\theta^1 - \theta^0\|_2^2\Big) \leq \epsilon, \qquad (51)$$

which demonstrates that AQUILA requires $K = O\big(-\frac{1}{\log(1-\alpha\mu)}\log\frac{\omega_1}{\epsilon}\big)$ communication rounds, with $\omega_1 = f(\theta^1) - f(\theta^*) + \big(\frac{1}{2\alpha} - \frac{L}{2}\big)\|\theta^1 - \theta^0\|_2^2$, to achieve $f(\theta^{K+1}) - f(\theta^*) + \big(\frac{1}{2\alpha} - \frac{L}{2}\big)\|\theta^{K+1} - \theta^K\|_2^2 \leq \epsilon$.
1. What is the main contribution of the proposed method in federated learning?
2. What are the strengths and weaknesses of the paper, particularly regarding its novelty and improvements over prior works?
3. Do you have any concerns about the assumptions made in the paper, such as Assumption 4.3?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the comparison between the proposed method and existing works, such as LAQ and LAG?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a framework for federated learning that adjusts the frequency of communication between agents and the server with an adaptive quantization scheme. Specifically, the authors combine two quantization schemes, namely, the adaptive quantization rule (AdaQuantFL) and lazily aggregated quantization (LAQ).

Strengths And Weaknesses
Weaknesses:
- My main concern is regarding the novelty of the proposed method and also its improvement over existing results. This paper seems to be significantly built on the prior work "Lazily Aggregated Quantized Gradient Innovation for Communication-Efficient Federated Learning". It is not clear how the performance of LAQ in the mentioned work is improved in theory. The authors only mention a comparison after Theorem 4.2: "Theorem 4.2 informs us that AQUILA can achieve a linear convergence rate as in LAG (Chen et al., 2018) and LAQ when certain assumptions are satisfied." I wonder what the motivation would be to use the proposed AQUILA method if it achieves the same performance as the already existing LAG and LAQ methods.
- The paper considers one specific quantizer (not a class of quantizers), which limits the scope of the theoretical and simulation results.
- Assumption 4.3 states that all devices' quantization errors are constrained by the total error of the omitted devices. The justification for this assumption is presented as follows: "This assumption is easy to verify when M_c ≠ ∅: a bounded variable will always be bounded by a part of itself multiplied by a real number." This statement is not accurate and should be: a bounded variable will always be bounded by a non-zero part of itself multiplied by a real number. So, how can we guarantee that the total error of the omitted devices is non-zero?

Clarity, Quality, Novelty And Reproducibility
The paper is marginally novel compared to the existing work on quantized federated learning in the following works:
- Jun Sun, Tianyi Chen, Georgios B. Giannakis, Qinmin Yang, and Zaiyue Yang. Lazily aggregated quantized gradient innovation for communication-efficient federated learning. IEEE Transactions on Pattern Analysis & Machine Intelligence, pp. 1-15, 2020.
- Tianyi Chen, Georgios B. Giannakis, Tao Sun, and Wotao Yin. LAG: Lazily aggregated gradient for communication-efficient distributed learning. In Proceedings of Advances in Neural Information Processing Systems, pp. 1-25, 2018.
ICLR
Title AQUILA: Communication Efficient Federated Learning with Adaptive Quantization of Lazily-Aggregated Gradients Abstract The development and deployment of federated learning (FL) have been bottlenecked by heavy communication overheads of high-dimensional models between the distributed device nodes and the central server. To achieve better errorcommunication trade-offs, recent efforts have been made to either adaptively reduce the communication frequency by skipping unimportant updates, e.g., lazy aggregation, or adjust the quantization bits for each communication. In this paper, we propose a unifying communication efficient framework for FL based on adaptive quantization of lazily-aggregated gradients (AQUILA), which adaptively balances two mutually-dependent factors, the communication frequency, and the quantization level. Specifically, we start with a careful investigation of the classical lazy aggregation scheme and formulate AQUILA as an optimization problem where the optimal quantization level is selected by minimizing the model deviation caused by update skipping. Furthermore, we devise a new lazy aggregation strategy to better fit the novel quantization criterion and retain the communication frequency at an appropriate level. The effectiveness and convergence of the proposed AQUILA framework are theoretically verified. The experimental results demonstrate that AQUILA can reduce around 60% of overall transmitted bits compared to existing methods while achieving identical model performance in a number of non-homogeneous FL scenarios, including Non-IID data and heterogeneous model architecture. 1 INTRODUCTION With the deployment of ubiquitous sensing and computing devices, the Internet of things (IoT), as well as many other distributed systems, have gradually grown from concept to reality, bringing dramatic convenience to people’s daily life (Du et al., 2020; Liu et al., 2020; Hard et al., 2018). To fully utilize such distributed computing resources, distributed learning provides a promising framework that can achieve comparable performance with the traditional centralized learning scheme. However, the privacy and security of sensitive data during the updating and transmission processes in distributed learning have been a growing concern. In this context, federated learning (FL) (McMahan et al., 2017) has been developed, allowing distributed devices to collaboratively learn a global model without privacy leakage by keeping private data isolated and masking transmitted information with secure approaches. On account of its privacy-preserving property and great potentiality in some distributed but privacy-sensitive fields such as finance and health, FL has attracted tremendous attention from both academia and industry in recent years. Unfortunately, in many FL applications, such as image classification and objective recognition, the trained model tends to be high-dimensional, resulting in significant communication costs. Hence, communication efficiency has become one of the key bottlenecks of FL. To this end, Sun et al. (2020) proposes the lazily-aggregated quantization (LAQ) method to skip unnecessary parameter uploads by estimating the value of gradient innovation — the difference between the current unquantized gradient and the previously quantized gradient. Moreover, Mao et al. (2021) devises an adaptive quantized gradient (AQG) strategy based on LAQ to dynamically select the quantization level within some artificially given numbers during the training process. 
Nevertheless, the AQG is still not sufficiently adaptive because the pre-determined quantization levels are difficult to choose in complicated FL environments. In another separate line of work, Jhunjhunwala et al. (2021) introduces an adaptive quantization rule for FL (AdaQuantFL), which searches in a given range for an optimal quantization level and achieves a better error-communication trade-off. Most previous research has investigated optimizing communication frequency or adjusting the quantization level in a highly adaptive manner, but not both. Intuitively, we ask a question, can we adaptively adjust the quantization level in the lazy aggregation fashion to simultaneously reduce transmitted amounts and communication frequency? In the paper, we intend to select the optimal quantization level for every participated device by optimizing the model deviation caused by skipping quantized gradient updates (i.e., lazy aggregation), which gives us a novel quantization criterion cooperated with a new proposed lazy aggregation strategy to reduce overall communication costs further while still offering a convergence guarantee. The contributions of this paper are trifold. • We propose an innovative FL procedure with adaptive quantization of lazily-aggregated gradients termed AQUILA, which simultaneously adjusts the communication frequency and quantization level in a synergistic fashion. • Instead of naively combining LAQ and AdaQuantFL, AQUILA owns a completely different device selection method and quantitative level calculation method. Specifically, we derive an adaptive quantization strategy from a new perspective that minimizes the model deviation introduced by lazy aggregation. Subsequently, we present a new lazy aggregation criterion that is more precise and saves more device storage. Furthermore, we provide a convergence analysis of AQUILA under the generally non-convex case and the Polyak-Łojasiewicz condition. • Except for normal FL settings, such as independent and identically distributed (IID) data environment, we experimentally evaluate the performance of AQUILA in a number of non-homogeneous FL settings, such as non-independent and non-identically distributed (Non-IID) local dataset and various heterogeneous model aggregations. The evaluation results reveal that AQUILA considerably mitigates the communication overhead compared to a variety of state-of-art algorithms. 2 BACKGROUND AND RELATED WORK Consider an FL system with one central parameter server and a device set M with M = |M| distributed devices to collaboratively train a global model parameterized by θ ∈ Rd. Each device m ∈ M has a private local dataset Dm = {(x(m)1 ,y (m) 1 ), · · · , (x (m) nm ,y (m) nm )} of nm samples. The federated training process is typically performed by solving the following optimization problem min θ∈Rd f(θ) = 1 M M∑ m=1 fm(θ) with fm(θ) = [l (hθ(x),y)](x,y)∼Dm , (1) where f : Rd → R denotes the empirical risk, fm : Rd → R denotes the local objective based on the private data Dm of the device m, l denotes the local loss function, and hθ denotes the local model. The FL training process is conducted by iteratively performing local updates and global aggregation as proposed in (McMahan et al., 2017). First, at communication round k, each device m receives the global model θk from the parameter server and trains it with its local data Dm. 
Subsequently, it sends the local gradient ∇fm(θk) to the central server, and the server will update the global model with learning rate α by θk+1 := θk − α M ∑ m∈M ∇fm(θk). (2) Definition 2.1 (Quantized gradient innovation). For more efficiency, each device only uploads the quantized deflection between the full gradient ∇fm(θk) and the last quantization value qk−1m utilizing a quantization operator Q : Rd → Rd, i.e., ∆qkm = Q(∇fm(θ k)− qk−1m ). (3) For communication frequency reduction, the lazy aggregation strategy allows the device m ∈ M to upload its newly-quantized gradient innovation at epoch k only when the change in local gradient is sufficiently larger than a threshold. Hence, the quantization of the local gradient qkm of device m at epoch k can be calculated by qkm := qk−1m , if ∥∥∥Q(∇fm (θk)− qk−1m )∥∥∥2 2 ⩽ Threshold qk−1m +∆q k m, otherwise . (4) If the device m skips the upload of ∆qkm, the central server will reuse the last gradient q k−1 m for aggregation. Therefore, the global aggregation rule can be changed from (2) to: θk+1 = θk − α M ∑ m∈M qkm = θ k − α M ∑ m∈Mk ( qk−1m +∆q k m ) − α M ∑ m∈Mkc qk−1m , (5) where Mk denotes the subset of devices that upload their quantized gradient innovation, and Mkc = M \ Mk denotes the subset of devices that skip the gradient update and reuse the old quantized gradient at epoch k. For AdaQuantFL, it is proposed to achieve a better error-communication trade-off by adaptively adjusting the quantization levels during the FL training process. Specifically, AdaQuantFL computes the optimal quantization level (bk)∗ by (bk)∗ = ⌊ √ f(θ0)/f(θk) · b0⌋, where f(θ0) and f(θk) are the global objective loss defined in (1). However, AdaQuantFL transmits quantized gradients at every communication round. In order to skip unnecessary communication rounds and adaptively adjust the quantization level for each communication jointly, a naive approach is to quantize lazily aggregated gradients with AdaQuantFL. Nevertheless, it fails to achieve efficient communication for several reasons. First, given the descending trend of training loss, AdaQuantFL’s criterion may lead to a high quantization bit number even exceeding 32 bits in the training process (assuming a floating point is represented by 32 bits in our case), which is too large for cases where the global convergence is already approaching and makes the quantization meaningless. Second, a higher quantization level results in a smaller quantization error, leading to a lower communication threshold in the lazy aggregation criterion (4) and thus a higher transmission frequency. Consequently, it is desirable to develop a more efficient adaptive quantization method in the lazilyaggregated setting to improve communication efficiency in FL systematically. 3 ADAPTIVE QUANTIZATION OF LAZILY-AGGREGATED GRADIENTS Given the above limitations of the naive joint use of the existing adaptive quantization criterion and lazy aggregation strategy, this paper aims to design a unifying procedure for communication efficiency optimization where the quantization level and communication frequency are considered synergistically and interactively. 3.1 OPTIMAL QUANTIZATION LEVEL First, we introduce the definition of a deterministic rounding quantizer and a fully-aggregated model. Definition 3.1. (Deterministic mid-tread quantizer.) 
Every element of the gradient innovation of device m at epoch k is mapped to an integer [ψkm]i as[ ψkm ] i = [ ∇fm(θk) ] i − [ qk−1m ] i +Rkm 2τkmR k m + 1 2 ,∀i ∈ {1, 2, ..., d}, (6) where ∇f(θkm) denotes the current unquantized gradient, Rkm = ∥∇fm(θ k) − qk−1m ∥∞ denotes the quantization range, bkm denotes the quantization level, and τ k m := 1/(2 bkm − 1) denotes the quantization granularity. More explanations on this quantizer are exhibited on Appendix A.2. Definition 3.2 (Fully-aggregated model). The fully-aggregated model θ̃ without lazy aggregation at epoch k is computed by θ̃ k+1 = θk − α M ∑ m∈M ( qk−1m +∆q k m ) . (7) Lemma 3.1. The influence of lazy aggregation at communication round k can be bounded by ∥∥∥θ̃k−θk∥∥∥2 2 ⩽ 4α2|Mkc | M2 ∑ m∈Mkc ((∥∥∥∇fm(θk)−qk−1m ∥∥∥ 2 − ∥∥τkmRkm1∥∥2)2+4(Rkm)2d+ d2 ) . (8) Corresponding to Lemma 3.1, since Rkm is independent of τ k m, we can formulate an optimization problem to minimize the upper bound of this model deviation caused by update skipping for each device m: minimize 0<τkm⩽1 (∥∥∥∇fm(θk)− qk−1m ∥∥∥ 2 − ∥∥τkmRkm1∥∥2)2 subject to τkm = 1( 2b k m − 1 ) . (9) Solving the below optimization problem gives AQUILA an adaptive strategy (10) that selects the optimal quantization level based on the quantization range Rkm, the dimension d of the local model, the current gradient ∇fm(θk), and the last uploaded quantized gradient qk−1m : (bkm) ∗ = log2 Rkm√d∥∥∥∇fm(θk)− qk−1m ∥∥∥ 2 + 1 . (10) The superiority of (10) comes from the following three aspects. First, since Rkm ⩾ [∇fm(θ k)]i − [qk−1m ]i ⩾ −Rkm, the optimal quantization level (bkm)∗ must be greater than or equal to 1. Second, AQUILA can personalize an optimal quantization level for each device corresponding to its own gradient, whereas, in AdaQuantFL, each device merely utilizes an identical quantization level according to the global loss. Third, the gradient innovation and quantization range Rkm tend to fluctuate along with the training process instead of keeping descending, and thus prevent the quantization level from increasing tremendously compared with AdaQuantFL. 3.2 PRECISE LAZY AGGREGATION CRITERION Definition 3.3 (Quantization error). The global quantization error εk is defined by the subtraction between the current unquantized gradient ∇f(θk) and its quantized value qk−1 +∆qk, i.e., εk = ∇f(θk)− qk−1 −∆qk, (11) where ∇f(θk) = ∑ m∈M ∇fm(θ k), qk−1 = ∑ m∈M q k−1 m ,∆q k = ∑ m∈M ∆q k m. To better fit the larger quantization errors induced by fewer quantization bits in (10), AQUILA possesses a new communication criterion to avoid the potential expansion of the devices group being skipped: ∥∥∆qkm∥∥22 + ∥∥εkm∥∥22 ⩽ βα2 ∥∥∥θk − θk−1∥∥∥22 ,∀m ∈ Mkc , (12) where β ⩾ 0 is a tuning factor. Note that this skipping rule is employed at epoch k, in which each device m calculates its quantized gradient innovation ∆qkm and quantization error ε k m, then utilizes this rule to decide whether uploads ∆qkm. The comparison of AQUILA’s skip rule and LAQ’s is also shown in Appendix A.2. Instead of storing a large number of previous model parameters as LAQ, the strength of (12) is that AQUILA directly utilizes the global model for two adjacent rounds as the skip condition, which does not need to estimate the global gradient (more precise), requires fewer hyperparameters to adjust, and considerably reduces the storage pressure of local devices. This is especially important for small-capacity devices (e.g., sensors) in practical IoT scenarios. 
Algorithm 1 Communication-Efficient FL with AQUILA
Input: the number of communication rounds $K$, the learning rate $\alpha$.
Initialize: the initial global model parameter $\theta^0$.
1: Server broadcasts $\theta^0$ to all devices. ▷ For the initial round $k = 0$.
2: for each device $m \in \mathcal{M}$ in parallel do
3:    Calculate the local gradient $\nabla f_m(\theta^0)$.
4:    Compute $(b_m^0)^*$ by setting $q_m^{-1} = \mathbf{0}$ in (10), compute the quantized gradient innovation $\Delta q_m^0$, and transmit it back to the server side.
5: end for
6: for $k = 1, 2, \dots, K$ do
7:    Server broadcasts $\theta^k$ to all devices.
8:    for each device $m \in \mathcal{M}$ in parallel do
9:        Calculate the local gradient $\nabla f_m(\theta^k)$, the optimal local quantization level $(b_m^k)^*$ by (10), and the quantized gradient innovation $\Delta q_m^k$.
10:       if (12) does not hold for device $m$ then ▷ If it holds, skip uploading.
11:           Device $m$ transmits $\Delta q_m^k$ to the server.
12:       end if
13:   end for
14:   Server updates $\theta^{k+1}$ from the saved previous global quantized gradient $q^{k-1}$ and the received quantized gradient innovations $\Delta q_m^k$: $\theta^{k+1} := \theta^k - \alpha\big(q^{k-1} + \frac{1}{M}\sum_{m\in\mathcal{M}^k} \Delta q_m^k\big)$.
15:   Server saves the average quantized gradient $q^k$ for the next aggregation.
16: end for

The detailed process of AQUILA is comprehensively summarized in Algorithm 1. At epoch $k = 0$, each device calculates $b_m^0$ by setting $q_m^{-1} = \mathbf{0}$ and uploads $\Delta q_m^0$ to the server, since (12) is not satisfied. At each epoch $k \in \{1, 2, \dots, K\}$, the server first broadcasts the global model $\theta^k$ to all devices. Each device $m$ computes $\nabla f_m(\theta^k)$ with its local training data and then utilizes it to calculate an optimal quantization level by (10). Subsequently, each device computes its gradient innovation after quantization and determines whether or not to upload it based on the communication criterion (12). Finally, the server updates the new global model $\theta^{k+1}$ with the up-to-date quantized gradients $q_m^{k-1} + \Delta q_m^k$ for those devices that transmit uploads at epoch $k$, while reusing the old quantized gradients $q_m^{k-1}$ for those that skip. A minimal sketch of one such round follows below.
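Putting the pieces together, here is a minimal sketch of one communication round following Algorithm 1. It reuses `optimal_bits`, `quantize`, and `should_skip` from the sketch in Section 3; the dictionary-based server and device state is our simplification, not the paper's implementation.

```python
# One AQUILA round per Algorithm 1 (sketch; reuses the helpers defined above).
# The server stores only theta^k, theta^{k-1}, and the running average q^{k-1}.
import numpy as np

def aquila_round(server: dict, devices: list, alpha: float, beta: float) -> dict:
    theta_k, theta_prev, q = server["theta"], server["theta_prev"], server["q"]
    M, uploads = len(devices), []
    for dev in devices:                              # lines 8-13
        innov = dev["grad_fn"](theta_k) - dev["q_last"]
        dq = quantize(innov, optimal_bits(innov))    # level by (10)
        eps = innov - dq
        if not should_skip(dq, eps, theta_k, theta_prev, alpha, beta):
            uploads.append(dq)                       # transmit (line 11)
            dev["q_last"] = dev["q_last"] + dq
    delta = sum(uploads) / M if uploads else 0.0
    theta_next = theta_k - alpha * (q + delta)       # line 14
    return {"theta": theta_next, "theta_prev": theta_k, "q": q + delta}  # line 15
```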
4 THEORETICAL DERIVATION AND ANALYSIS OF AQUILA

As aforementioned, we bound the model deviation caused by skipping updates with respect to the quantization bits. Specifically, if the communication criterion (12) holds for device $m$ at epoch $k$, the device does not contribute to epoch $k$'s gradient. Otherwise, the loss caused by device $m$ is minimized with the optimal quantization level selection criterion (10). In this section, the theoretical convergence derivation of AQUILA is based on the following standard assumptions.

Assumption 4.1 (L-smoothness). Each local objective function $f_m$ is $L_m$-smooth, i.e., there exists a constant $L_m > 0$ such that $\forall x, y \in \mathbb{R}^d$,

$$\|\nabla f_m(x) - \nabla f_m(y)\|_2 \leqslant L_m \|x - y\|_2, \quad (13)$$

which implies that the global objective function $f$ is $L$-smooth with $L \leqslant \bar{L} = \frac{1}{M}\sum_{m=1}^{M} L_m$.

Assumption 4.2 (Uniform lower bound). For all $x \in \mathbb{R}^d$, there exists $f^* \in \mathbb{R}$ such that $f(x) \geqslant f^*$.

Lemma 4.1. Following the assumption that the function $f$ is $L$-smooth, we have

$$f(\theta^{k+1}) - f(\theta^k) \leqslant -\frac{\alpha}{2}\left\|\nabla f(\theta^k)\right\|_2^2 + \alpha\left( \Big\|\frac{1}{M}\sum_{m\in\mathcal{M}_c^k} \Delta q_m^k\Big\|_2^2 + \left\|\varepsilon^k\right\|_2^2 \right) + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left\|\theta^{k+1} - \theta^k\right\|_2^2. \quad (14)$$

4.1 CONVERGENCE ANALYSIS FOR THE GENERALLY NON-CONVEX CASE

Theorem 4.1. Suppose Assumptions 4.1, 4.2, and B.1 (29) are satisfied. If $\mathcal{M}_c^k \neq \emptyset$, the global objective function $f$ satisfies

$$f(\theta^{k+1}) - f(\theta^k) \leqslant -\frac{\alpha}{2}\left\|\nabla f(\theta^k)\right\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left\|\theta^{k+1} - \theta^k\right\|_2^2 + \frac{\beta\gamma}{\alpha}\left\|\theta^k - \theta^{k-1}\right\|_2^2. \quad (15)$$

Corollary 4.1. Let all the assumptions of Theorem 4.1 hold and $\frac{L}{2} - \frac{1}{2\alpha} + \frac{\beta\gamma}{\alpha} \leqslant 0$. Then AQUILA requires

$$K = O\left(\frac{2\omega_1}{\alpha\epsilon^2}\right) \quad (16)$$

communication rounds, with $\omega_1 = f(\theta^1) - f(\theta^*) + \frac{\beta\gamma}{\alpha}\|\theta^1 - \theta^0\|_2^2$, to achieve $\min_k \|\nabla f(\theta^k)\|_2^2 \leqslant \epsilon^2$.

Compared to LAG. Corresponding to Eq. (70) in Chen et al. (2018), LAG defines a Lyapunov function $V_k := f(\theta^k) - f(\theta^*) + \sum_{d=1}^{D} \beta_d \|\theta^{k+1-d} - \theta^{k-d}\|_2^2$ and claims that it satisfies

$$V_{k+1} - V_k \leqslant -\left(\frac{\alpha}{2} - \tilde{c}(\alpha, \beta_1)(1+\rho)\alpha^2\right)\left\|\nabla f(\theta^k)\right\|_2^2, \quad (17)$$

where $\tilde{c}(\alpha, \beta_1) = L/2 - 1/(2\alpha) + \beta_1$, $\beta_1 = D\xi/(2\alpha\eta)$, $\xi < 1/D$, and $\rho > 0$. The above result (17) indicates that LAG requires

$$K_{\mathrm{LAG}} = O\left(\frac{2\omega_1}{\left(\alpha - 2\tilde{c}(\alpha, \beta_1)(1+\rho)\alpha^2\right)\epsilon^2}\right) \quad (18)$$

communication rounds to converge. Owing to the non-negativity of the term $\tilde{c}(\alpha, \beta_1)(1+\rho)\alpha^2$, we readily have $\alpha \geqslant \alpha - 2\tilde{c}(\alpha, \beta_1)(1+\rho)\alpha^2$, so the denominator in (16) dominates that in (18), which demonstrates that AQUILA achieves a better convergence rate than LAG with an appropriate selection of $\alpha$.

4.2 CONVERGENCE ANALYSIS UNDER THE POLYAK-ŁOJASIEWICZ CONDITION

Assumption 4.3 ($\mu$-PŁ condition). The function $f$ satisfies the PŁ condition with a constant $\mu > 0$, that is,

$$\left\|\nabla f(\theta^k)\right\|_2^2 \geqslant 2\mu\left(f(\theta^k) - f(\theta^*)\right). \quad (19)$$

Theorem 4.2. Suppose Assumptions 4.1, 4.2, and 4.3 are satisfied and $\mathcal{M}_c^k \neq \emptyset$. If the hyperparameters satisfy $\frac{\beta\gamma}{\alpha} \leqslant (1 - \alpha\mu)\left(\frac{1}{2\alpha} - \frac{L}{2}\right)$, then the global objective function satisfies

$$f(\theta^{k+1}) - f(\theta^k) \leqslant -\alpha\mu\left(f(\theta^k) - f(\theta^*)\right) + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left\|\theta^{k+1} - \theta^k\right\|_2^2 + \frac{\beta\gamma}{\alpha}\left\|\theta^k - \theta^{k-1}\right\|_2^2, \quad (20)$$

and AQUILA requires

$$K = O\left(-\frac{1}{\log(1 - \alpha\mu)}\log\frac{\omega_1}{\epsilon}\right) \quad (21)$$

communication rounds, with $\omega_1 = f(\theta^1) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\|\theta^1 - \theta^0\|_2^2$, to achieve $f(\theta^{K+1}) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\|\theta^{K+1} - \theta^K\|_2^2 \leqslant \epsilon$.

Compared to LAG. According to Eq. (50) in Chen et al. (2018), we have

$$V_K \leqslant \left(1 - \alpha\mu + \alpha\mu\sqrt{D\xi}\right)^K V_0, \quad (22)$$

where $\xi < 1/D$. Thus, LAG requires

$$K_{\mathrm{LAG}} = O\left(-\frac{1}{\log\left(1 - \alpha\mu + \alpha\mu\sqrt{D\xi}\right)}\log\frac{\omega_1}{\epsilon}\right) \quad (23)$$

communication rounds to converge. Compared to Theorem 4.2, we can derive that $\log(1 - \alpha\mu) < \log(1 - \alpha\mu + \alpha\mu\sqrt{D\xi})$, which indicates that AQUILA converges faster than LAG under the PŁ condition.

Remark. We want to emphasize that LAQ introduces a Lyapunov function into its proof, making it extremely complicated. In addition, LAQ can only guarantee that the final objective function converges to a neighborhood of the optimal solution rather than the exact optimum $f(\theta^*)$. Nevertheless, as discussed in Section 3.2, we utilize the precise model difference in AQUILA as a surrogate for the global gradient and thus simplify the proof.

5 EXPERIMENTS AND DISCUSSION

5.1 EXPERIMENT SETUP

In this paper, we evaluate AQUILA on the CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and WikiText-2 (Merity et al., 2016) datasets, considering the IID data scenario, the Non-IID data scenario, and heterogeneous model architectures (also a crucial challenge in FL) simultaneously. The FL environment is simulated in Python 3.9 with a PyTorch 1.11 (Paszke et al., 2019) implementation. For diversity of neural network structures, we train ResNet-18 (He et al., 2016) on the CIFAR-10 dataset, MobileNet-v2 (Sandler et al., 2018) on the CIFAR-100 dataset, and a Transformer (Vaswani et al., 2017) on the WikiText-2 dataset. As for the FL system setting, in the majority of our experiments the system contains $M = 10$ devices in total. However, considering the large-scale nature of FL, we also validate AQUILA on larger systems of $M = 100$ (CIFAR) and $M = 80$ (WikiText-2) devices. The hyperparameters and additional details of our experiments are given in Appendix A.3.

5.2 HOMOGENEOUS ENVIRONMENT

We first evaluate AQUILA in homogeneous settings where all local models share the same architecture as the global model. To better demonstrate the effectiveness of AQUILA, its performance is compared with several state-of-the-art methods, including AdaQuantFL, LAQ with fixed levels, LENA (Ghadikolaei et al., 2021), MARINA (Gorbunov et al., 2021), and the naive combination of AdaQuantFL with LAQ. Note that based on this homogeneous setting, we conduct both IID and Non-IID evaluations on the CIFAR-10 and CIFAR-100 datasets, and an IID evaluation on WikiText-2. To simulate the Non-IID FL setting as in (Diao et al., 2020), each device is allocated at most two classes of data in CIFAR-10 and ten classes of data in CIFAR-100, and the amount of data for each label is balanced; a partition sketch is given below.
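For reference, a minimal partition sketch of this label-limited Non-IID split: classes are dealt round-robin so that each device holds at most `classes_per_device` labels, and each class's samples are divided evenly among its holders. This is our own illustration, not the reference implementation of (Diao et al., 2020).

```python
# Sketch (assumed function name) of the label-limited Non-IID partition above.
# Assumes n_devices * classes_per_device >= number of classes.
import numpy as np

def noniid_partition(labels, n_devices, classes_per_device, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    holders = {int(c): [] for c in classes}
    for m in range(n_devices):                 # round-robin class assignment
        for j in range(classes_per_device):
            c = int(classes[(m * classes_per_device + j) % len(classes)])
            holders[c].append(m)
    parts = [[] for _ in range(n_devices)]
    for c, devs in holders.items():            # balance each label's samples
        idx = rng.permutation(np.where(labels == c)[0])
        for chunk, dev in zip(np.array_split(idx, len(devs)), devs):
            parts[dev].extend(chunk.tolist())
    return parts

# CIFAR-10 with 10 devices and 2 classes each: every class lands on exactly
# two devices, and its samples are split evenly between them.
```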
The experimental results are presented in Fig. 1, where 100% indicates that all local models share the same structure as the global model (i.e., homogeneity), 100% (80 devices) denotes that the experiment is conducted in an 80-device system, and LAdaQ represents the naive combination of AdaQuantFL and LAQ. For better illustration, the results have been smoothed by their standard deviation: the solid lines represent values after smoothing, and transparent shades of the same colors around them represent the true values. Additionally, Table 2 shows the total number of bits transmitted by all devices throughout the FL training process. The comprehensive experimental results are presented in Appendix A.4.

5.3 NON-HOMOGENEOUS SCENARIO

In this section, we also evaluate AQUILA with heterogeneous model structures as in HeteroFL (Diao et al., 2020), where the structures of the local models trained on the device side are heterogeneous. Suppose the global model at epoch $k$ is $\theta^k$ and its size is $d = w_g \times h_g$; then the local model of each device $m$ can be selected by $\theta_m^k = \theta^k[{:}w_m, {:}h_m]$, where $w_m = r_m w_g$ and $h_m = r_m h_g$, respectively. In this paper, we choose the model complexity level $r_m = 0.5$ (a slicing sketch is given below).

[Figure 2: panels (a)-(f).]

Most of the symbols in Fig. 2 are identical to those in Fig. 1. 100%-50% is a newly introduced symbol indicating that half of the devices share the same structure as the global model, while the other half have only $50\% \times 50\%$ of the global model's parameters.
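A minimal PyTorch sketch of this width-scaled sub-model selection: a device with complexity ratio $r_m$ takes the top-left block of each global weight tensor, mirroring $\theta_m^k = \theta^k[{:}w_m, {:}h_m]$. The helper name and dictionary-based state are our assumptions, not the HeteroFL reference code.

```python
# Sketch (assumed helper name) of HeteroFL-style sub-model slicing.
import torch

def slice_submodel(global_state: dict, r_m: float) -> dict:
    local_state = {}
    for name, w in global_state.items():
        if w.dim() >= 2:                         # weight matrices / kernels
            wm = max(1, int(r_m * w.shape[0]))
            hm = max(1, int(r_m * w.shape[1]))
            local_state[name] = w[:wm, :hm].clone()
        else:                                    # biases: scale dim 0 only
            local_state[name] = w[:max(1, int(r_m * w.shape[0]))].clone()
    return local_state

g = {"fc.weight": torch.randn(64, 32), "fc.bias": torch.randn(64)}
half = slice_submodel(g, r_m=0.5)   # 32 x 16 weight, 32-dim bias
```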
Performance Analysis. First of all, AQUILA achieves a significant transmission reduction compared to the naive combination of LAQ and AdaQuantFL on all datasets, which demonstrates the superiority of AQUILA's efficiency. Specifically, Table 2 indicates that AQUILA saves 57.49% of transmitted bits in the 80-device system on the WikiText-2 dataset and 23.08% of transmitted bits in the 100-device system on the CIFAR-100 dataset, compared to the naive combination. The other results in Table 3 also show an obvious reduction in the total transmitted bits required for convergence.

Second, in Fig. 1 and Fig. 2, the changing trend of AQUILA's communication bits per round clearly verifies the necessity and effectiveness of our well-designed adaptive quantization level and skip criterion. In these two figures, the number of bits transmitted in each round of AQUILA fluctuates somewhat, indicating the effectiveness of AQUILA's selection rule, while the number of transmitted bits remains at quite a low level, suggesting that the adaptive quantization principle makes training more efficient. Moreover, the figures also show that the quantization level selected by AQUILA does not continuously increase during training, unlike AdaQuantFL's. In addition, based on these two figures, we can also conclude that AQUILA converges faster under the same communication costs.

Finally, AQUILA is capable of adapting to a wide range of challenging FL circumstances. In the Non-IID scenario and with heterogeneous model structures, AQUILA still outperforms the other algorithms by significantly reducing the overall transmitted bits while maintaining the same convergence property and objective function value. In particular, AQUILA reduces the overall communication costs by 60.4% compared to LENA and by 57.2% compared to MARINA on average. These experimental results in non-homogeneous FL settings prove that AQUILA can be stably employed in more general and complicated FL scenarios.

5.4 ABLATION STUDY ON THE IMPACT OF THE TUNING FACTOR β

One key contribution of AQUILA is the new lazy aggregation criterion (12) for reducing communication frequency. In this part, we evaluate the effect of different values of the tuning factor $\beta$ on the loss in Fig. 3. As $\beta$ grows within a certain range, the convergence speed of the model slows down (due to lazy aggregation), but it eventually converges to the same model performance while considerably reducing the communication overhead. Nevertheless, increasing $\beta$ beyond that range leads to a decrease in the final model performance, since too many essential uploads are skipped and the training becomes deficient. The accuracy (perplexity) comparison of AQUILA with various selections of the tuning factor $\beta$ is shown in Fig. 10, which indicates the same trend. To sum up, the value of $\beta$ should be chosen to maintain the model's performance while minimizing the total number of transmitted bits. Specifically, we select $\beta = 0.1, 0.25, 1.25$ for the CIFAR-10, CIFAR-100, and WikiText-2 datasets, respectively.

[Figure 3: panels (a)-(f).]

6 CONCLUSIONS AND FUTURE WORK

This paper proposes a communication-efficient FL procedure that simultaneously adjusts two mutually-dependent degrees of freedom: communication frequency and quantization level. With the close cooperation of the novel adaptive quantization and the adjusted lazy aggregation strategy derived in this paper, the proposed AQUILA is proven capable of reducing transmission costs while maintaining the convergence guarantee and model performance of existing methods. The evaluation with Non-IID data distributions and various heterogeneous model architectures demonstrates that AQUILA is compatible with non-homogeneous FL environments.

REPRODUCIBILITY

We present the overall theorem statements and proofs for our main results in the Appendix, as well as the necessary experimental plotting figures. Furthermore, we submit the code of AQUILA in the supplementary material, including all the hyperparameters and a requirements file to help the public reproduce our experimental results. Our algorithm is straightforward, well-described, and easy to implement.

ETHICS STATEMENT

All evaluations of AQUILA are performed on publicly available datasets for reproducibility purposes. This paper empirically studies the performance of various state-of-the-art algorithms and therefore likely introduces no new ethical or cultural problems. This paper does not utilize any new dataset.

A APPENDIX

The appendix includes supplementary experimental results, mathematical proofs of the aforementioned theorems, and a detailed derivation of the novel adaptive quantization criterion and lazy aggregation strategy.
Compared to Fig. 1 and Fig. 2 in the main text, the result figures in the appendix show a more comprehensive evaluation of AQUILA, containing more detailed information, including but not limited to accuracy-versus-steps and training-loss-versus-steps curves.

A.1 OVERALL FRAMEWORK OF AQUILA

The cooperation of the novel adaptive quantization criterion (10) and the lazy aggregation strategy (12) is illustrated in Fig. 4a. Compared to the naive combination of AdaQuantFL and LAQ, in which the mutual influence between adaptive quantization and lazy aggregation is not considered (shown in Fig. 4b), AQUILA adaptively optimizes the allocation of quantization bits throughout training to promote the convergence of lazy aggregation, and at the same time utilizes the lazy aggregation strategy to improve the efficiency of adaptive quantization by compressing the transmission with a lower quantization level.

A.2 EXPLANATION OF THE QUANTIZER AND THE SKIP RULE OF LAQ

The quantizer (6) is a deterministic quantizer that, at each dimension, maps the gradient innovation to the closest point on a one-dimensional grid. The range of the grid is $R_m^k$, and the granularity is determined by the quantization level via $\tau_m^k$. Each dimension of the gradient innovation is mapped to an integer in $\{0, 1, 2, \dots, 2^b - 1\}$. More precisely, the $1/2$ ensures mapping to the closest integer instead of flooring to a smaller integer, and the $R_m^k$ in the numerator ensures that the mapped integer is non-negative. As a result, when the gradient innovation is transmitted to the central server, 32 bits are used for the range and $b \cdot d$ bits for the mapped integers; thus $32 + b \cdot d$ bits are transmitted in total.

The difference between (6) and (32) (Lemma B.2) is that (6) encodes the raw gradient innovation vector into an integer vector, whilst (32) decodes the integer vector back into a quantized gradient innovation vector. Specifically, in the training process each client utilizes (6) to encode the gradient innovation into an integer at each dimension, and afterwards the integer vector $\psi_m^k$ and $\tau_m^k$ are sent to the central server. After receiving them, the central server decodes the quantized gradient innovation as (32) states. A numeric round-trip check of this encode/decode pair is given at the end of this subsection.

The skip rule of LAQ is measured by the summation of the accumulated model difference and the quantization errors:

$$\|\Delta q_m^k\|_2^2 \leqslant \frac{1}{\alpha^2 M^2} \sum_{d'=1}^{D} \xi_{d'} \left\|\theta^{k+1-d'} - \theta^{k-d'}\right\|_2^2 + 3\left(\|\varepsilon_m^k\|_2^2 + \|\hat{\varepsilon}_m^{k-1}\|_2^2\right), \quad (24)$$

where $\{\xi_{d'}\}$ is a series of manually selected scalars and $D$ is also predetermined. $\varepsilon_m^k$ is the quantization error of client $m$ at epoch $k$, and $\hat{\varepsilon}_m^{k-1}$ is the quantization error of client $m$ at the last time it uploaded its gradient innovation. Please refer to Sun et al. (2020) for more details on (24). In order to compute the LAQ skip threshold, each client has to store a large amount of previous information.

The differences between the AQUILA skipping criterion and the LAQ skipping criterion are as follows. First, the AQUILA threshold is easier to compute for a local client: compared to the LAQ skipping criterion, the AQUILA criterion is more concise and thus requires less storage and computing power. Second, the AQUILA criterion is easier to tune because far fewer hyperparameters are introduced: in the LAQ criterion, $\alpha$, $D$, and $\{\xi_{d'}\}_{d'=1}^{D}$ are all manually selected, whilst only the two hyperparameters $\alpha$ and $\beta$ appear in the AQUILA criterion. Third, with the given threshold, AQUILA has good theoretical properties: the theoretical analysis of AQUILA is easier to follow, with no Lyapunov function introduced as in LAQ, and the analysis also shows that AQUILA can achieve a better convergence rate in the non-convex case and under the PŁ condition.
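The following quick NumPy check (ours) verifies the round trip numerically: encoding by (6) and then decoding by (32) reproduces the innovation to within half a grid step $\tau R$, at a cost of $32 + b \cdot d$ bits.

```python
# Numeric round-trip check (ours) for the encode (6) / decode (32) pair.
import numpy as np

rng = np.random.default_rng(1)
innov = 1e-3 * rng.standard_normal(10_000)
b = 4
R = np.max(np.abs(innov))
tau = 1.0 / (2 ** b - 1)

psi = np.floor((innov + R) / (2 * tau * R) + 0.5)     # encode, eq. (6)
dq = 2 * tau * R * psi - R                            # decode, eq. (32)

assert psi.min() >= 0 and psi.max() <= 2 ** b - 1     # b bits per entry
assert np.max(np.abs(dq - innov)) <= tau * R + 1e-12  # grid step is 2*tau*R
print("bits sent:", 32 + b * innov.size)              # 32 bits for R, b*d for psi
```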
A.3 EXPERIMENT SETUP

In this section, we provide the remaining hyperparameter settings for our evaluation. For LAQ, we set $D = 10$ and $\xi_1 = \xi_2 = \dots = \xi_D = 0.8/D$, the same as the setting in their paper. For LENA, we set $\beta_{\mathrm{LENA}} = 40$ in their trigger condition. For MARINA, we calculate the uploading probability of the Bernoulli distribution as $p = \xi_Q/d$, as announced in their paper. In addition, we choose the cross-entropy function as the objective function in the experimental part. Table 1 shows the hyperparameter details of our evaluation.

A.4 COMPREHENSIVE EXPERIMENT RESULTS

This section covers all the experimental results in our paper.

B BASIC FACTS AND SOME LEMMAS

Notations: Bold fonts denote vectors (e.g., $\theta$). Normal fonts denote scalars (e.g., $\alpha$). The subscript $m$ is used to describe functions associated with a local device $m$ (e.g., $f_m(\theta)$). A function without a subscript describes an average over all devices (e.g., $f(\theta)$).

Frequently used norm inequalities. Suppose $n \in \mathbb{N}_+$ and $\|\cdot\|_2$ denotes the $\ell_2$-norm. For $p \in \mathbb{R}_+$ and $x_i, a, b \in \mathbb{R}^d$, the following hold.

1. Norm summation inequality:
$$\Big\|\sum_{i=1}^{n} x_i\Big\|_2^2 \leqslant n \sum_{i=1}^{n} \|x_i\|_2^2. \quad (25)$$

2. Inner-product identity:
$$\langle a, b\rangle = \frac{1}{2}\left(\|a\|_2^2 + \|b\|_2^2 - \|a - b\|_2^2\right). \quad (26)$$

3. Young's inequality:
$$\|a + b\|_2^2 \leqslant (1 + p)\|a\|_2^2 + (1 + p^{-1})\|b\|_2^2. \quad (27)$$

4. Minkowski's inequality:
$$\|a + b\|_2 \leqslant \|a\|_2 + \|b\|_2. \quad (28)$$

Assumption B.1. All devices' quantization errors $\varepsilon^k$ are constrained by the total error of the omitted devices, i.e., for all $k = 0, 1, \dots, K$, if $\mathcal{M}_c^k \neq \emptyset$, there exists $\gamma \geqslant 1$ such that

$$\|\varepsilon^k\|_2^2 = \Big\|\frac{1}{M}\sum_{m\in\mathcal{M}} \varepsilon_m^k\Big\|_2^2 \leqslant \frac{\gamma}{M^2}\Big\|\sum_{m\in\mathcal{M}_c^k} \varepsilon_m^k\Big\|_2^2, \quad (29)$$

where $K$ denotes the termination time and $\varepsilon_m^k = \nabla f_m(\theta^k) - \left(q_m^{k-1} + \Delta q_m^k\right)$. This assumption is easy to verify: when $\mathcal{M}_c^k \neq \emptyset$, a bounded variable (here $\varepsilon^k$) will always be bounded by a part of itself ($\frac{1}{M}\sum_{m\in\mathcal{M}_c^k} \varepsilon_m^k$) multiplied by a real number ($\gamma$). Note that there is another nontrivial scenario in which $\mathcal{M}_c^k \neq \emptyset$ but $\varepsilon_m^k = 0$ for all $m \in \mathcal{M}_c^k$, which would imply that $\gamma = 0$ or does not exist and thus conflicts with our assumption. However, this situation only happens when all entries of $\varepsilon_m^k$ are zero, i.e., $[\nabla f_m(\theta^k)]_i = [q_m^{k-1}]_i$ for all $1 \leqslant i \leqslant d$.

Lemma B.1. The summation of the quantized gradient innovation and the quantization error is bounded by the global model difference:

$$\Big\|\frac{1}{M}\sum_{m\in\mathcal{M}_c^k} \Delta q_m^k\Big\|_2^2 + \|\varepsilon^k\|_2^2 \leqslant \frac{\beta\gamma}{\alpha^2}\left\|\theta^k - \theta^{k-1}\right\|_2^2. \quad (30)$$

Proof.

$$\begin{aligned} \Big\|\frac{1}{M}\sum_{m\in\mathcal{M}_c^k} \Delta q_m^k\Big\|_2^2 + \|\varepsilon^k\|_2^2 &\overset{(a)}{\leqslant} \Big\|\frac{1}{M}\sum_{m\in\mathcal{M}_c^k} \Delta q_m^k\Big\|_2^2 + \gamma\Big\|\frac{1}{M}\sum_{m\in\mathcal{M}_c^k} \varepsilon_m^k\Big\|_2^2 \\ &\overset{(25)}{\leqslant} |\mathcal{M}_c^k| \sum_{m\in\mathcal{M}_c^k} \Big\|\frac{1}{M}\Delta q_m^k\Big\|_2^2 + \gamma|\mathcal{M}_c^k| \sum_{m\in\mathcal{M}_c^k} \Big\|\frac{1}{M}\varepsilon_m^k\Big\|_2^2 \\ &= \frac{|\mathcal{M}_c^k|}{M^2} \sum_{m\in\mathcal{M}_c^k} \left(\|\Delta q_m^k\|_2^2 + \gamma\|\varepsilon_m^k\|_2^2\right) \overset{(b)}{\leqslant} \frac{|\mathcal{M}_c^k|}{M^2} \sum_{m\in\mathcal{M}_c^k} \left(\gamma\|\Delta q_m^k\|_2^2 + \gamma\|\varepsilon_m^k\|_2^2\right) \\ &\overset{(c)}{\leqslant} \frac{\beta\gamma|\mathcal{M}_c^k|^2}{\alpha^2 M^2}\left\|\theta^k - \theta^{k-1}\right\|_2^2 \leqslant \frac{\beta\gamma}{\alpha^2}\left\|\theta^k - \theta^{k-1}\right\|_2^2, \end{aligned} \quad (31)$$

where (a) follows Assumption B.1, (b) follows since $\gamma \geqslant 1$ by definition, and (c) utilizes our novel trigger condition (12).

Lemma B.2. From Definition 3.1, we can derive the relationship between the quantized gradient innovation $\Delta q_m^k$ and its quantized representation $\psi_m^k$, which uses $b_m^k$ bits per dimension:

$$\Delta q_m^k = 2\tau_m^k R_m^k \psi_m^k - R_m^k \mathbf{1}, \quad (32)$$

where $\mathbf{1} \in \mathbb{R}^d$ denotes the vector filled with the scalar value 1.

Remark: We can utilize (32) to calculate the quantized gradient innovation in the experimental implementation.
C MISSING PROOF OF LEMMA 3.1 AND THE DERIVATION OF $b_m^k$

With lazy aggregation, the actual aggregated model at epoch $k$ is:

$$\theta^{k+1} = \theta^k - \frac{\alpha}{M}\sum_{m\in\mathcal{M}^k}\left(q_m^{k-1} + \Delta q_m^k\right) - \frac{\alpha}{M}\sum_{m\in\mathcal{M}_c^k} q_m^{k-1}. \quad (33)$$

Suppose $\Delta_m^k$ denotes the quantization loss of device $m$ at epoch $k$ and $\psi_m^k$ denotes the quantized representation of the local gradient innovation as in Definition 3.1, i.e.,

$$\Delta_m^k = \psi_m^k - \frac{\nabla f_m(\theta^k) - q_m^{k-1} + R_m^k\mathbf{1}}{2\tau_m^k R_m^k} - \frac{1}{2}\mathbf{1}. \quad (34)$$

With (7), (33), and (34), the model deviation $\|\tilde{\theta}^k - \theta^k\|_2^2$ caused by skipping gradients can be written as:

$$\begin{aligned} \left\|\tilde{\theta}^k - \theta^k\right\|_2^2 &= \Big\|\frac{\alpha}{M}\sum_{m\in\mathcal{M}_c^k} \Delta q_m^k\Big\|_2^2 = \Big\|\frac{\alpha}{M}\sum_{m\in\mathcal{M}_c^k}\left(2\tau_m^k R_m^k \psi_m^k - R_m^k\mathbf{1}\right)\Big\|_2^2 \\ &\overset{(25)}{\leqslant} \frac{\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m\in\mathcal{M}_c^k} \left\|2\tau_m^k R_m^k \psi_m^k - R_m^k\mathbf{1}\right\|_2^2 \\ &\overset{(34)}{=} \frac{\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m\in\mathcal{M}_c^k} \left\|\nabla f_m(\theta^k) - q_m^{k-1} + \tau_m^k R_m^k\mathbf{1} + \Delta_m^k\right\|_2^2 \\ &\leqslant \frac{2\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m\in\mathcal{M}_c^k} \left(\left\|\nabla f_m(\theta^k) - q_m^{k-1} + \tau_m^k R_m^k\mathbf{1}\right\|_2^2 + \|\Delta_m^k\|_2^2\right) \\ &\overset{(a)}{\leqslant} \frac{2\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m\in\mathcal{M}_c^k} \left(\left\|\nabla f_m(\theta^k) - q_m^{k-1} + \tau_m^k R_m^k\mathbf{1}\right\|_2^2 + d\right) \\ &\overset{(28)}{\leqslant} \frac{2\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m\in\mathcal{M}_c^k} \left(\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 + \left\|\tau_m^k R_m^k\mathbf{1}\right\|_2\right)^2 + d\right) \\ &= \frac{2\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m\in\mathcal{M}_c^k} \left(\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 - \left\|\tau_m^k R_m^k\mathbf{1}\right\|_2 + 2\left\|\tau_m^k R_m^k\mathbf{1}\right\|_2\right)^2 + d\right) \\ &\leqslant \frac{4\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m\in\mathcal{M}_c^k} \left(\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 - \left\|\tau_m^k R_m^k\mathbf{1}\right\|_2\right)^2 + 4\left\|\tau_m^k R_m^k\mathbf{1}\right\|_2^2 + \frac{d}{2}\right) \\ &\overset{(b)}{\leqslant} \frac{4\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m\in\mathcal{M}_c^k} \left(\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 - \left\|\tau_m^k R_m^k\mathbf{1}\right\|_2\right)^2 + 4(R_m^k)^2 d + \frac{d}{2}\right), \end{aligned} \quad (35)$$

where $\mathbf{1} \in \mathbb{R}^d$ denotes the vector filled with the scalar value 1, (a) uses $[\Delta_m^k]_i \in (-1, 0]$, and (b) uses $R_m^k \geqslant \tau_m^k R_m^k \geqslant 0$.

Since $R_m^k$ is independent of $\tau_m^k$, we can formulate an optimization problem about $\tau_m^k$ for device $m$ at communication round $k$ as follows:

$$\min_{0 < \tau_m^k \leqslant 1} \left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 - \left\|\tau_m^k R_m^k\mathbf{1}\right\|_2\right)^2. \quad (36)$$

Since $\|\tau_m^k R_m^k\mathbf{1}\|_2 = \tau_m^k R_m^k\sqrt{d}$, the squared term vanishes exactly when $\tau_m^k R_m^k\sqrt{d} = \|\nabla f_m(\theta^k) - q_m^{k-1}\|_2$. Therefore, the optimal solution of $\tau_m^k$ in (36) is

$$(\tau_m^k)^* = \frac{\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2}{R_m^k\sqrt{d}}. \quad (37)$$

Then, the optimal adaptive quantization level $(b_m^k)^*$ is equal to

$$(b_m^k)^* = \left\lfloor \log_2\left(\frac{1}{(\tau_m^k)^*} + 1\right) \right\rfloor = \left\lfloor \log_2\left(\frac{R_m^k\sqrt{d}}{\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2} + 1\right) \right\rfloor. \quad (38)$$

Notice that $(b_m^k)^* \geqslant 1$ always holds since $(\tau_m^k)^* \leqslant 1$.

D MISSING PROOF OF LEMMA 4.1, THEOREM 4.1, AND COROLLARY 4.1

Proof. Suppose Assumptions 4.1, 4.2, and 4.3 are satisfied and $\mathcal{M}_c^k \neq \emptyset$. For simplicity of the convergence proof, we write $\Phi^k = \frac{1}{M}\sum_{m\in\mathcal{M}_c^k} \Delta q_m^k$. First, we prove Lemma 4.1:

$$\begin{aligned} f(\theta^{k+1}) - f(\theta^k) &\leqslant \left\langle\nabla f(\theta^k), \theta^{k+1} - \theta^k\right\rangle + \frac{L}{2}\left\|\theta^{k+1} - \theta^k\right\|_2^2 \\ &= \left\langle\nabla f(\theta^k), -\alpha\left(\nabla f(\theta^k) - \varepsilon^k - \Phi^k\right)\right\rangle + \frac{L}{2}\left\|\theta^{k+1} - \theta^k\right\|_2^2 \\ &= -\alpha\left\|\nabla f(\theta^k)\right\|_2^2 + \alpha\left\langle\nabla f(\theta^k), \varepsilon^k + \Phi^k\right\rangle + \frac{L}{2}\left\|\theta^{k+1} - \theta^k\right\|_2^2 \\ &\overset{(26)}{=} -\alpha\left\|\nabla f(\theta^k)\right\|_2^2 + \frac{\alpha}{2}\left(\left\|\nabla f(\theta^k)\right\|_2^2 + \left\|\varepsilon^k + \Phi^k\right\|_2^2 - \frac{1}{\alpha^2}\left\|\theta^{k+1} - \theta^k\right\|_2^2\right) + \frac{L}{2}\left\|\theta^{k+1} - \theta^k\right\|_2^2 \\ &\leqslant -\frac{\alpha}{2}\left\|\nabla f(\theta^k)\right\|_2^2 + \frac{\alpha}{2}\left\|\varepsilon^k + \Phi^k\right\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left\|\theta^{k+1} - \theta^k\right\|_2^2 \\ &\overset{(25)}{\leqslant} -\frac{\alpha}{2}\left\|\nabla f(\theta^k)\right\|_2^2 + \alpha\left\|\varepsilon^k\right\|_2^2 + \alpha\left\|\Phi^k\right\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left\|\theta^{k+1} - \theta^k\right\|_2^2. \end{aligned} \quad (39)$$

Hence, we have

$$f(\theta^{k+1}) - f(\theta^k) \overset{(30)}{\leqslant} -\frac{\alpha}{2}\left\|\nabla f(\theta^k)\right\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left\|\theta^{k+1} - \theta^k\right\|_2^2 + \frac{\beta\gamma}{\alpha}\left\|\theta^k - \theta^{k-1}\right\|_2^2, \quad (40)$$

which gives us Theorem 4.1. Summing up for $k = 1, 2, \dots, K$, we have

$$\begin{aligned} f(\theta^{K+1}) - f(\theta^1) \leqslant &-\frac{\alpha}{2}\sum_{k=1}^{K}\left\|\nabla f(\theta^k)\right\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left\|\theta^{K+1} - \theta^K\right\|_2^2 \\ &+ \sum_{k=1}^{K-1}\left(\frac{L}{2} - \frac{1}{2\alpha} + \frac{\beta\gamma}{\alpha}\right)\left\|\theta^{k+1} - \theta^k\right\|_2^2 + \frac{\beta\gamma}{\alpha}\left\|\theta^1 - \theta^0\right\|_2^2. \end{aligned} \quad (41)$$

Notice that inequality (41) holds for both $\mathcal{M}_c^k \neq \emptyset$ and $\mathcal{M}_c^k = \emptyset$. Therefore, for $\left(\frac{L}{2} - \frac{1}{2\alpha} + \frac{\beta\gamma}{\alpha}\right) \leqslant 0$ and all hyperparameters chosen properly, considering the minimum of $\|\nabla f(\theta^k)\|_2^2$,

$$\min_{k=1,2,\dots,K}\left\|\nabla f(\theta^k)\right\|_2^2 \leqslant \frac{1}{K}\sum_{k=1}^{K}\left\|\nabla f(\theta^k)\right\|_2^2 \overset{(41)}{\leqslant} \frac{2}{\alpha K}\left(f(\theta^1) - f(\theta^K) + \frac{\beta\gamma}{\alpha}\left\|\theta^1 - \theta^0\right\|_2^2\right). \quad (42)$$
For $\left(\frac{L}{2} - \frac{1}{2\alpha} + \frac{\beta\gamma}{\alpha}\right) \leqslant 0$ and all hyperparameters chosen properly, we have that

$$\min_{k=1,2,\dots,K}\left\|\nabla f(\theta^k)\right\|_2^2 \leqslant \frac{2}{\alpha K}\left(f(\theta^1) - f(\theta^*) + \frac{\beta\gamma}{\alpha}\left\|\theta^1 - \theta^0\right\|_2^2\right) \leqslant \epsilon^2, \quad (43)$$

which demonstrates that AQUILA requires $K = O\left(\frac{2\omega_1}{\alpha\epsilon^2}\right)$ communication rounds, with $\omega_1 = f(\theta^1) - f(\theta^*) + \frac{\beta\gamma}{\alpha}\|\theta^1 - \theta^0\|_2^2$, to achieve $\min_{k=1,2,\dots,K}\|\nabla f(\theta^k)\|_2^2 \leqslant \epsilon^2$.

E MISSING PROOF OF COROLLARY 4.1 WHEN $\mathcal{M}_c^k = \emptyset$

Proof. Since the skipping subset of devices is the empty set, from (5) we have

$$\begin{aligned} \theta^{k+1} - \theta^k &= -\frac{\alpha}{M}\sum_{m\in\mathcal{M}^k}\left(q_m^{k-1} + \Delta q_m^k\right) - \frac{\alpha}{M}\sum_{m\in\mathcal{M}_c^k} q_m^{k-1} = -\frac{\alpha}{M}\sum_{m\in\mathcal{M}}\left(q_m^{k-1} + \Delta q_m^k\right) \\ &\overset{(11)}{=} -\frac{\alpha}{M}\sum_{m\in\mathcal{M}}\left(\nabla f_m(\theta^k) - \varepsilon_m^k\right) = -\alpha\left(\nabla f(\theta^k) - \varepsilon^k\right). \end{aligned} \quad (44)$$

From (14) we have:

$$\begin{aligned} f(\theta^{k+1}) - f(\theta^k) &\leqslant -\frac{\alpha}{2}\left\|\nabla f(\theta^k)\right\|_2^2 + \alpha\Big\|\frac{1}{M}\sum_{m\in\mathcal{M}_c^k}\Delta q_m^k\Big\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left\|\theta^{k+1} - \theta^k\right\|_2^2 + \alpha\left\|\varepsilon^k\right\|_2^2 \\ &\leqslant -\frac{\alpha}{2}\left\|\nabla f(\theta^k)\right\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left\|\theta^{k+1} - \theta^k\right\|_2^2 + \alpha\left\|\varepsilon^k\right\|_2^2 \\ &\overset{(27)}{\leqslant} -\frac{\alpha}{2}\left\|\nabla f(\theta^k)\right\|_2^2 + \alpha^2\left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left((1+p)\left\|\nabla f(\theta^k)\right\|_2^2 + \left(1 + p^{-1}\right)\left\|\varepsilon^k\right\|_2^2\right) + \alpha\left\|\varepsilon^k\right\|_2^2 \\ &= -\frac{\alpha}{2}\left\|\nabla f(\theta^k)\right\|_2^2 + \frac{1}{2}\left(\alpha^2 L - \alpha\right)(1+p)\left\|\nabla f(\theta^k)\right\|_2^2 + \frac{1}{2}\left(\alpha^2 L - \alpha\right)\left(1 + p^{-1}\right)\left\|\varepsilon^k\right\|_2^2 + \alpha\left\|\varepsilon^k\right\|_2^2 \\ &= \frac{\alpha}{2}\left((\alpha L - 1)(1+p) - 1\right)\left\|\nabla f(\theta^k)\right\|_2^2 + \frac{\alpha}{2}\left((\alpha L - 1)\left(1 + p^{-1}\right) + 2\right)\left\|\varepsilon^k\right\|_2^2. \end{aligned} \quad (45)$$

If the factor of $\|\varepsilon^k\|_2^2$ in (45) is less than or equal to 0, that is,

$$(\alpha L - 1)\left(1 + p^{-1}\right) + 2 \leqslant 0, \quad (46)$$

then the factor of $\|\nabla f(\theta^k)\|_2^2$ will be less than $-\frac{\alpha}{2}$, which indicates that

$$f(\theta^{k+1}) - f(\theta^k) \leqslant -\frac{\alpha}{2}\left\|\nabla f(\theta^k)\right\|_2^2. \quad (47)$$

Note that it is not difficult to demonstrate that (46) and $\frac{L}{2} - \frac{1}{2\alpha} + \frac{\beta\gamma}{\alpha} \leqslant 0$ can in fact be satisfied at the same time. For instance, we can set $p = 0.1$, $\alpha = 0.1$, $\beta = 0.25$, $\gamma = 2$, $L = 2.5$, which satisfies both of them.

F MISSING PROOF OF THEOREM 4.2

Proof. Based on the intermediate result (40) of Theorem 4.1 and Assumption 4.3 ($\mu$-PŁ condition), we have

$$\begin{aligned} f(\theta^{k+1}) - f(\theta^k) &\leqslant -\frac{\alpha}{2}\left\|\nabla f(\theta^k)\right\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left\|\theta^{k+1} - \theta^k\right\|_2^2 + \frac{\beta\gamma}{\alpha}\left\|\theta^k - \theta^{k-1}\right\|_2^2 \\ &\overset{(19)}{\leqslant} -\alpha\mu\left(f(\theta^k) - f(\theta^*)\right) + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left\|\theta^{k+1} - \theta^k\right\|_2^2 + \frac{\beta\gamma}{\alpha}\left\|\theta^k - \theta^{k-1}\right\|_2^2, \end{aligned} \quad (48)$$

which is equivalent to

$$f(\theta^{k+1}) - f(\theta^*) \leqslant (1 - \alpha\mu)\left(f(\theta^k) - f(\theta^*)\right) + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left\|\theta^{k+1} - \theta^k\right\|_2^2 + \frac{\beta\gamma}{\alpha}\left\|\theta^k - \theta^{k-1}\right\|_2^2. \quad (49)$$

Suppose $\frac{\beta\gamma}{\alpha} \leqslant (1 - \alpha\mu)\left(\frac{1}{2\alpha} - \frac{L}{2}\right)$; then we can show that

$$f(\theta^{k+1}) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\left\|\theta^{k+1} - \theta^k\right\|_2^2 \leqslant (1 - \alpha\mu)\left(f(\theta^k) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\left\|\theta^k - \theta^{k-1}\right\|_2^2\right). \quad (50)$$

Therefore, applying (50) recursively for $k = 1, 2, \dots, K$, we have

$$f(\theta^{K+1}) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\left\|\theta^{K+1} - \theta^K\right\|_2^2 \leqslant (1 - \alpha\mu)^K\left(f(\theta^1) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\left\|\theta^1 - \theta^0\right\|_2^2\right) \leqslant \epsilon, \quad (51)$$

which demonstrates that AQUILA requires $K = O\left(-\frac{1}{\log(1-\alpha\mu)}\log\frac{\omega_1}{\epsilon}\right)$ communication rounds, with $\omega_1 = f(\theta^1) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\|\theta^1 - \theta^0\|_2^2$, to achieve $f(\theta^{K+1}) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\|\theta^{K+1} - \theta^K\|_2^2 \leqslant \epsilon$.
1. What is the main contribution of the paper, and how does it improve upon prior work in federated learning?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its communication efficiency and training performance?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the algorithm explanation, theoretical analysis, and experimental results? If so, what are they, and how could they be addressed?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

The paper proposes a new communication-efficient federated learning approach that combines adaptive quantization with lazy aggregation of gradients at each round to reduce the amount of data communicated compared to prior work, while providing a comparable reduction in training loss. The approach is validated through theoretical analysis and experiments in both homogeneous and heterogeneous federated learning settings.

Strengths And Weaknesses

Strengths:
a) The main strength of the paper is that it combines adaptive quantization and lazy aggregation of gradients in a principled manner. Specifically, the quantization level at each round at a given node is clearly connected to the amount of new information (gradient innovation) being provided by the node, instead of just depending on the loss value.
b) The principled approach is then backed by analysis and experiments to show that it indeed reduces the amount of data being communicated while providing the same training performance in a variety of settings.

Weaknesses:
a) The main weakness according to me is that the explanation of the algorithm is currently not very clear. I have provided some comments/suggestions on improving this in the following section.
b) While the main selling point for AQUILA is the reduction in the amount of data being communicated, the current theoretical results only show that the training converges at the same rate as standard GD. Any analytical result characterizing the expected reduction in the amount of data being communicated would significantly strengthen this work. Alternatively, can you provide some explanation for why such an analysis is difficult?
c) Likewise, from Figure 3 (d-f) it appears that the variance in the amount of data being communicated is also significantly lower in AQUILA, which is a point in its favour, especially if the network cannot handle large fluctuations in the amount of communication. Can you provide some analysis/intuition for why that may be the case?

Clarity, Quality, Novelty And Reproducibility

The overall presentation is clear, but the explanation of the algorithm is quite hard to follow, as some background information is missing and the explanation is spread over Section 3 of the main paper and Appendices A1, B, and C. I have the following suggestions for addressing this issue:
a) Consider putting Algorithm 1 in the main paper. It is much easier to follow the explanation in Section 3 after reading Algorithm 1. If space is an issue, Figure 1 can be moved to the Appendix in my opinion.
b) Please formally define gradient innovation, with the relevant notation, instead of just explaining it in one line in the introduction. The term is used several times in Section 3, and I don't think it is a commonly known term.
c) Please provide some background or intuition behind (8) and (9). While these appear to be obtained from the quantizer in LAQ, I believe there should at least be some intuition provided for these expressions for the benefit of readers unfamiliar with LAQ.
d) Please provide some justification for (10) in the main paper by suggesting how it follows from (8) and (9). Currently it seems to appear out of nowhere unless one reads the derivation in Appendix C.
e) There is also no motivation for (11), and I still do not understand why it is a suitable criterion for device groups being skipped.
Specifically, it seems to require knowledge of $\theta^{k+1}$ at all worker nodes in round $k$, but since the nodes do not have access to each other's gradients, I do not see how they can calculate the threshold in (11). Please clarify.
Subsequently, it sends the local gradient ∇fm(θk) to the central server, and the server will update the global model with learning rate α by θk+1 := θk − α M ∑ m∈M ∇fm(θk). (2) Definition 2.1 (Quantized gradient innovation). For more efficiency, each device only uploads the quantized deflection between the full gradient ∇fm(θk) and the last quantization value qk−1m utilizing a quantization operator Q : Rd → Rd, i.e., ∆qkm = Q(∇fm(θ k)− qk−1m ). (3) For communication frequency reduction, the lazy aggregation strategy allows the device m ∈ M to upload its newly-quantized gradient innovation at epoch k only when the change in local gradient is sufficiently larger than a threshold. Hence, the quantization of the local gradient qkm of device m at epoch k can be calculated by qkm := qk−1m , if ∥∥∥Q(∇fm (θk)− qk−1m )∥∥∥2 2 ⩽ Threshold qk−1m +∆q k m, otherwise . (4) If the device m skips the upload of ∆qkm, the central server will reuse the last gradient q k−1 m for aggregation. Therefore, the global aggregation rule can be changed from (2) to: θk+1 = θk − α M ∑ m∈M qkm = θ k − α M ∑ m∈Mk ( qk−1m +∆q k m ) − α M ∑ m∈Mkc qk−1m , (5) where Mk denotes the subset of devices that upload their quantized gradient innovation, and Mkc = M \ Mk denotes the subset of devices that skip the gradient update and reuse the old quantized gradient at epoch k. For AdaQuantFL, it is proposed to achieve a better error-communication trade-off by adaptively adjusting the quantization levels during the FL training process. Specifically, AdaQuantFL computes the optimal quantization level (bk)∗ by (bk)∗ = ⌊ √ f(θ0)/f(θk) · b0⌋, where f(θ0) and f(θk) are the global objective loss defined in (1). However, AdaQuantFL transmits quantized gradients at every communication round. In order to skip unnecessary communication rounds and adaptively adjust the quantization level for each communication jointly, a naive approach is to quantize lazily aggregated gradients with AdaQuantFL. Nevertheless, it fails to achieve efficient communication for several reasons. First, given the descending trend of training loss, AdaQuantFL’s criterion may lead to a high quantization bit number even exceeding 32 bits in the training process (assuming a floating point is represented by 32 bits in our case), which is too large for cases where the global convergence is already approaching and makes the quantization meaningless. Second, a higher quantization level results in a smaller quantization error, leading to a lower communication threshold in the lazy aggregation criterion (4) and thus a higher transmission frequency. Consequently, it is desirable to develop a more efficient adaptive quantization method in the lazilyaggregated setting to improve communication efficiency in FL systematically. 3 ADAPTIVE QUANTIZATION OF LAZILY-AGGREGATED GRADIENTS Given the above limitations of the naive joint use of the existing adaptive quantization criterion and lazy aggregation strategy, this paper aims to design a unifying procedure for communication efficiency optimization where the quantization level and communication frequency are considered synergistically and interactively. 3.1 OPTIMAL QUANTIZATION LEVEL First, we introduce the definition of a deterministic rounding quantizer and a fully-aggregated model. Definition 3.1. (Deterministic mid-tread quantizer.) 
Every element of the gradient innovation of device m at epoch k is mapped to an integer [ψkm]i as[ ψkm ] i = [ ∇fm(θk) ] i − [ qk−1m ] i +Rkm 2τkmR k m + 1 2 ,∀i ∈ {1, 2, ..., d}, (6) where ∇f(θkm) denotes the current unquantized gradient, Rkm = ∥∇fm(θ k) − qk−1m ∥∞ denotes the quantization range, bkm denotes the quantization level, and τ k m := 1/(2 bkm − 1) denotes the quantization granularity. More explanations on this quantizer are exhibited on Appendix A.2. Definition 3.2 (Fully-aggregated model). The fully-aggregated model θ̃ without lazy aggregation at epoch k is computed by θ̃ k+1 = θk − α M ∑ m∈M ( qk−1m +∆q k m ) . (7) Lemma 3.1. The influence of lazy aggregation at communication round k can be bounded by ∥∥∥θ̃k−θk∥∥∥2 2 ⩽ 4α2|Mkc | M2 ∑ m∈Mkc ((∥∥∥∇fm(θk)−qk−1m ∥∥∥ 2 − ∥∥τkmRkm1∥∥2)2+4(Rkm)2d+ d2 ) . (8) Corresponding to Lemma 3.1, since Rkm is independent of τ k m, we can formulate an optimization problem to minimize the upper bound of this model deviation caused by update skipping for each device m: minimize 0<τkm⩽1 (∥∥∥∇fm(θk)− qk−1m ∥∥∥ 2 − ∥∥τkmRkm1∥∥2)2 subject to τkm = 1( 2b k m − 1 ) . (9) Solving the below optimization problem gives AQUILA an adaptive strategy (10) that selects the optimal quantization level based on the quantization range Rkm, the dimension d of the local model, the current gradient ∇fm(θk), and the last uploaded quantized gradient qk−1m : (bkm) ∗ = log2 Rkm√d∥∥∥∇fm(θk)− qk−1m ∥∥∥ 2 + 1 . (10) The superiority of (10) comes from the following three aspects. First, since Rkm ⩾ [∇fm(θ k)]i − [qk−1m ]i ⩾ −Rkm, the optimal quantization level (bkm)∗ must be greater than or equal to 1. Second, AQUILA can personalize an optimal quantization level for each device corresponding to its own gradient, whereas, in AdaQuantFL, each device merely utilizes an identical quantization level according to the global loss. Third, the gradient innovation and quantization range Rkm tend to fluctuate along with the training process instead of keeping descending, and thus prevent the quantization level from increasing tremendously compared with AdaQuantFL. 3.2 PRECISE LAZY AGGREGATION CRITERION Definition 3.3 (Quantization error). The global quantization error εk is defined by the subtraction between the current unquantized gradient ∇f(θk) and its quantized value qk−1 +∆qk, i.e., εk = ∇f(θk)− qk−1 −∆qk, (11) where ∇f(θk) = ∑ m∈M ∇fm(θ k), qk−1 = ∑ m∈M q k−1 m ,∆q k = ∑ m∈M ∆q k m. To better fit the larger quantization errors induced by fewer quantization bits in (10), AQUILA possesses a new communication criterion to avoid the potential expansion of the devices group being skipped: ∥∥∆qkm∥∥22 + ∥∥εkm∥∥22 ⩽ βα2 ∥∥∥θk − θk−1∥∥∥22 ,∀m ∈ Mkc , (12) where β ⩾ 0 is a tuning factor. Note that this skipping rule is employed at epoch k, in which each device m calculates its quantized gradient innovation ∆qkm and quantization error ε k m, then utilizes this rule to decide whether uploads ∆qkm. The comparison of AQUILA’s skip rule and LAQ’s is also shown in Appendix A.2. Instead of storing a large number of previous model parameters as LAQ, the strength of (12) is that AQUILA directly utilizes the global model for two adjacent rounds as the skip condition, which does not need to estimate the global gradient (more precise), requires fewer hyperparameters to adjust, and considerably reduces the storage pressure of local devices. This is especially important for small-capacity devices (e.g., sensors) in practical IoT scenarios. 
Algorithm 1 Communication Efficient FL with AQUILA Input: the number of communication rounds K, the learning rate α. Initialize: the initial global model parameter θ0. 1: Server broadcasts θ0 to all devices. ▷ For the initial round k = 0. 2: for each device m ∈ M in parallel do 3: Calculates local gradient ∇fm(θ0). 4: Compute (b0m) ∗ by setting qk−1m = 0 in (10) and the quantized gradient innovation ∆q 0 m, and transmits it back to the server side. 5: end for 6: for k = 1, 2, ...,K do 7: Server broadcasts θk to all devices. 8: for each device m ∈ M in parallel do 9: Calculates local gradient ∇fm(θk), the optimal local quantization level (bkm)∗ by (10), and the quantized gradient innovation ∆qkm. 10: if (12) does not hold for device m then ▷ If satisfies, skip uploading. 11: device m transmits ∆qkm to the server. 12: end if 13: end for 14: Server updates θk+1 by the saving previous global quantized gradient qk−1m and the received quantized gradient innovation ∆qkm: θ k+1 := θk − α ( qk−1 + 1/M ∑ m∈Mk ∆q k m ) . 15: Server saves the average quantized gradient qk for the next aggregation. 16: end for The detailed process of AQUILA is comprehensively summarized in Algorithm 1. At epoch k = 0, each device calculates b0m by setting q k−1 0 = 0 and uploads ∆q k 0 to the server since the (12) is not satisfied. At epoch k ∈ {1, 2, ...,K}, the server first broadcasts the global model θk to all devices. Each device m computes ∇f(θkm) with local training data and then utilizes it to calculate an optimal quantization level by (10). Subsequently, each device computes its gradient innovation after quantization and determines whether or not to upload based on the communication criterion (12). Finally, the server updates the new global model θk+1 with up-to-date quantized gradients qk−1m +∆q k m for those devices who transmit the uploads at epoch k, while reusing the old quantized gradients qk−1m for those who skip the uploads. 4 THEORETICAL DERIVATION AND ANALYSIS OF AQUILA As aforementioned, we bound the model deviation caused by skipping updates with respect to quantization bits. Specifically, if the communication criterion (12) holds for the device m at epoch k, it does not contribute to epoch k’s gradient. Otherwise, the loss caused by device m will be minimized with the optimal quantization level selection criterion (10). In this section, the theoretical convergence derivation of AQUILA is based on the following standard assumptions. Assumption 4.1 (L-smoothness). Each local objective function fm is Lm-smooth, i.e., there exist a constant Lm > 0, such that ∀x,y ∈ Rd, ∥∇fm(x)−∇fm(y)∥2 ⩽ Lm ∥x− y∥2 , (13) which implies that the global objective function f is L-smooth with L ≤ L̄ = 1m ∑m i=1 Lm. Assumption 4.2 (Uniform lower bound). For all x ∈ Rd, there exist f∗ ∈ R such that f(x) ≥ f∗. Lemma 4.1. Following the assumption that the function f is L-smooth, we have f(θk+1)−f(θk) ⩽ −α 2 ∥∥∥∇f(θk)∥∥∥2 2 +α ∥∥∥∥∥∥ 1M ∑ m∈Mkc ∆qkm ∥∥∥∥∥∥ 2 2 + ∥∥εk∥∥2 2 +(L 2 − 1 2α )∥∥∥θk+1 − θk∥∥∥2 2 . (14) 4.1 CONVERGENCE ANALYSIS FOR GENERALLY NON-CONVEX CASE. Theorem 4.1. Suppose Assumptions 4.1, 4.2, and B.1 (29) be satisfied. If Mkc ̸= ∅, the global objective function f satisfies f(θk+1)−f(θk)⩽−α 2 ∥∥∥∇f(θk)∥∥∥2 2 + ( L 2 − 1 2α )∥∥∥θk+1−θk∥∥∥2 2 + βγ α ∥∥∥θk−θk−1∥∥∥2 2 . (15) Corollary 4.1. Let all the assumptions of Theorem 4.1 hold and L2 − 1 2α + βγ α ⩽ 0, then the AQUILA requires K = O ( 2ω1 αϵ2 ) (16) communication rounds with ω1=f ( θ1 ) −f (θ∗)+βγα ∥∥θ1−θ0∥∥2 2 to achieve mink ∥∇f(θk)∥22 ⩽ ϵ2. Compared to LAG. 
Corresponding to Eq.(70) in Chen et al. (2018), LAG defines a Lyapunov function Vk := f(θk)− f(θ∗) + ∑D d=1 βd∥θ k+1−d − θk−d∥22 and claims that it satisfies Vk+1 − Vk ≤ − (α 2 − c̃ (α, β1) (1 + ρ)α2 )∥∥∥∇f(θk)∥∥∥2 2 , (17) where c̃ (α, β1) = L/2 − 1/(2α) + β1, β1 = Dξ/(2αη), ξ < 1/D, and ρ > 0. The above result (17) indicates that LAG requires KLAG = O ( 2ω1 (α− 2c̃ (α, β1) (1 + ρ)α2) ϵ2 ) (18) communication rounds to converge. Since the non-negativity of the term c̃ (α, β1) (1 + ρ)α2, we can readily derive that α < α − 2c̃ (α, β1) (1 + ρ)α2, which demonstrates AQUILA achieves a better convergence rate than LAG with the appropriate selection of α. 4.2 CONVERGENCE ANALYSIS UNDER POLYAK-ŁOJASIEWICZ CONDITION. Assumption 4.3 (µ−PŁ condition). Function f satisfies the PL condition with a constant µ > 0, that is, ∥∥∥∇f(θk)∥∥∥2 2 ⩾ 2µ(f(θk)− f(θ∗)). (19) Theorem 4.2. Suppose Assumptions 4.1, 4.2, and 4.3 be satisfied and Mkc ̸= ∅, if the hyperparameters satisfy βγα ⩽ (1− αµ) ( 1 2α − L 2 ) , then the global objective function satisfies f(θk+1)−f(θk)⩽−αµ(f(θk)−f(θ∗))+ ( L 2 − 1 2α )∥∥∥θk+1−θk∥∥∥2 2 + βγ α ∥∥∥θk−θk−1∥∥∥2 2 , (20) and the AQUILA requires K = O ( − 1 log(1− αµ) log ω1 ϵ ) (21) communication round with ω1 = f(θ 1) − f(θ∗) + ( 1 2α − L 2 ) ∥∥θ1 − θ0∥∥2 2 to achieve f(θK+1) − f(θ∗) + ( 12α − L 2 )∥θ K+1 − θK∥22 ⩽ ϵ. Compared to LAG. According to Eq.(50) in Chen et al. (2018), we have that VK ≤ ( 1− αµ+ αµ √ Dξ )K V0, (22) where ξ < 1/D. Thus, we have that LAG requires KLAG = O ( − 1 log(1− αµ+ αµ √ Dξ) log ω1 ϵ ) (23) communication rounds to converge. Compared to Theorem 4.2, we can derive that log(1− αµ) < log(1− αµ+ αµ √ Dξ), which indicates that AQUILA has a faster convergence than LAG under the PŁ condition. Remark. We want to emphasize that LAQ introduces the Lyapunov function into its proof, making it extremely complicated. In addition, LAQ can only guarantee that the final objective function converges to a range of the optimal solution rather than an accurate optimum f(θ∗). Nevertheless, as discussed in Section 3.2, we utilize the precise model difference in AQUILA as a surrogate for the global gradient and thus simplify the proof. 5 EXPERIMENTS AND DISCUSSION 5.1 EXPERIMENT SETUP In this paper, we evaluate AQUILA on CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and WikiText-2 dataset (Merity et al., 2016), considering IID, Non-IID data scenario, and heterogeneous model architecture (which is also a crucial challenge in FL) simultaneously. The FL environment is simulated in Python 3.9 with PyTorch 11.1 (Paszke et al., 2019) implementation. For the diversity of the neural network structures, we train ResNet-18 (He et al., 2016) at CIFAR-10 dataset, MobileNet-v2 (Sandler et al., 2018) at CIFAR-100 dataset, and Transformer (Vaswani et al., 2017) at WikiText-2 dataset. As for the FL system setting, in the majority of our experiments, the whole system exists M = 10 total devices. However, considering the large-scale feature of FL, we also validate AQUILA on a larger system of M = 100/80 total devices for CIFAR / WikiText-2 dataset. The hyperparameters and additional details of our experiments are revealed in Appendix A.3. 5.2 HOMOGENEOUS ENVIRONMENT We first evaluate AQUILA with homogeneous settings where all local models share the same model architecture as the global model. 
To better demonstrate the effectiveness of AQUILA, its performance is compared with several state-of-the-art methods, including AdaQuantFL, LAQ with fixed levels, LENA (Ghadikolaei et al., 2021), MARINA (Gorbunov et al., 2021), and the naive combination of AdaQuantFL with LAQ. Note that based on this homogeneous setting, we conduct both IID and NonIID evaluations on CIFAR-10 and CIFAR-100 dataset, and an IID evaluation on WikiText-2. To simulate the Non-IID FL setting as (Diao et al., 2020), each device is allocated two classes of data in CIFAR-10 and 10 classes of data in CIFAR-100 at most, and the amount of data for each label is balanced. The experimental results are presented in Fig. 1, where 100% implies all local models share a similar structure with the global model (i.e., homogeneity), 100% (80 devices) denotes the experiment is conducted in an 80 devices system, and LAdaQ represents the naive combination of AdaQuantFL and LAQ. For better illustration, the results have been smoothed by their standard deviation. The solid lines represent values after smoothing, and transparent shades of the same colors around them represent the true values. Additionally, Table 2 shows the total number of bits transmitted by all devices throughout the FL training process. The comprehensive experimental results are established in Appendix A.4. 5.3 NON-HOMOGENEOUS SCENARIO In this section, we also evaluate AQUILA with heterogeneous model structures as HeteroFL (Diao et al., 2020), where the structures of local models trained on the device side are heterogeneous. Suppose the global model at epoch k is θk and its size is d = wg ∗ hg, then the local model of each device m can be selected by θkm = θ k [: wm, : hm], where wm = rmwg and hm = rmhg, respectively. In this paper, we choose model complexity levels rm = 0.5. (f)(e)(d) (b)(a) (c) Most of the symbols in Fig. 2 are identical to the Fig. 1. 100%-50% is a newly introduced symbol that implies half of the devices share the same structure with the global model while another half only have 50% * 50% parameters as the global model. Performance Analysis. First of all, AQUILA achieves a significant transmission reduction compared to the naive combination of LAQ and AdaQuantFL in all datasets, which demonstrates the superiority of AQUILA’s efficiency. Specifically, Table 2 indicates that AQUILA saves 57.49% of transmitted bits in the system of 80 devices at the WikiText-2 dataset and reduces 23.08% of transmitted bits in the system of 100 devices at the CIFAR-100 dataset, compared to the naive combination. And other results in Table 3 also show an obvious reduction in terms of the total transmitted bits required for convergence. Second, in Fig. 1 and Fig. 2, the changing trend of AQUILA’s communication bits per each round clearly verifies the necessity and effectiveness of our well-designed adaptive quantization level and skip criterion. In these two figures, the number of bits transmitted in each round of AQUILA fluctuates a bit, indicating the effectiveness of AQUILA’s selection rule. Meanwhile, the value of transmitted bits remains at quite a low level, suggesting that the adaptive quantization principle makes training more efficient. Moreover, the figures also inform that the quantization level selected by AQUILA will not continuously increase during training instead of being as AdaQuantFL. In addition, based on these two figures, we can also conclude that AQUILA converges faster under the same communication costs. 
Finally, AQUILA is capable of adapting to a wide range of challenging FL circumstances. In the Non-IID scenario and heterogeneous model structure, AQUILA still outperforms other algorithms by significantly reducing overall transmitted bits while maintaining the same convergence property and objective function value. In particular, AQUILA reduces 60.4% overall communication costs compared to LENA and 57.2% compared to MARINA on average. These experimental results in non-homogeneous FL settings prove that AQUILA can be stably employed in more general and complicated FL scenarios. 5.4 ABLATION STUDY ON THE IMPACT OF TUNING FACTOR β One key contribution of AQUILA is presenting a new lazy aggregation criterion (12) to reduce communication frequency. In this part, we evaluate the effects of the loss performance of different tuning factor β value in Fig. 3. As β grows within a certain range, the convergence speed of the model will slow down (due to lazy aggregation). Still, it will eventually converge to the same model performance while considerably reducing the communication overhead. Nevertheless, increasing the value of β will lead to a decrease in the final model performance since it skips so many essential uploads that make the training deficient. The accuracy (perplexity) comparison of AQUILA with various selections of the tuning factor β is shown in Fig. 10, which indicates the same trend.To sum up, we should choose the value of factor β to maintain the model’s performance and minimize the total transmitted amount of bits. Specifically, we select the value of β = 0.1, 0.25, 1.25 for CIFAR-10, CIFAR-100, and WikiText-2 datasets for our evaluation, respectively. (f)(e)(d) (b)(a) (c) 6 CONCLUSIONS AND FUTURE WORK This paper proposes a communication-efficient FL procedure to simultaneously adjust two mutuallydependent degrees of freedom: communication frequency and quantization level. With the close cooperation of the novel adaptive quantization and adjusted lazy aggregation strategy derived in this paper, the proposed AQUILA has been proven to be capable of reducing the transmitted costs while maintaining the convergence guarantee and model performance compared to existing methods. The evaluation with Non-IID data distribution and various heterogeneous model architectures demonstrates that AQUILA is compatible in a non-homogeneous FL environment. REPRODUCIBILITY We present the overall theorem statements and proofs for our main results in the Appendix, as well as necessary experimental plotting figures. Furthermore, we submit the code of AQUILA in the supplementary material part, including all the hyperparameters and a requirements to help the public reproduce our experimental results. Our algorithm is straightforward, well-described, and easy to implement. ETHICS STATEMENT All evaluations of AQUILA are performed on publicly available datasets for reproducibility purposes. This paper empirically studies the performance of various state-of-art algorithms, therefore, probably introduces no new ethical or cultural problems. This paper does not utilize any new dataset. A APPENDIX The appendix includes supplementary experimental results, mathematical proof of the aforementioned theorems, and a detailed derivation of the novel adaptive quantization criterion and lazy aggregation strategy. Compared to Fig. 1 and Fig. 
A.1 OVERALL FRAMEWORK OF AQUILA

The cooperation of the novel adaptive quantization criterion (10) and the lazy aggregation strategy (12) is illustrated in Fig. 4a. In contrast to the naive combination of AdaQuantFL and LAQ, where the mutual influence between adaptive quantization and lazy aggregation is not considered (Fig. 4b), AQUILA adaptively optimizes the allocation of quantization bits throughout training to promote the convergence of lazy aggregation, and at the same time utilizes the lazy aggregation strategy to improve the efficiency of adaptive quantization by compressing the transmissions with a lower quantization level.

A.2 EXPLANATION OF THE QUANTIZER AND THE SKIP RULE OF LAQ

The quantizer (6) is a deterministic quantizer that, at each dimension, maps the gradient innovation to the closest point of a one-dimensional grid. The range of the grid is R_m^k, and the granularity is determined by the quantization level through τ_m^k. Each dimension of the gradient innovation is mapped to an integer in {0, 1, 2, ..., 2^b − 1}. More precisely, the 1/2 ensures mapping to the closest integer instead of flooring to a smaller integer, and the R_m^k in the numerator ensures that the mapped integer is non-negative. As a result, when the gradient innovation is transmitted to the central server, 32 bits are used for the range and b·d bits are used for the mapped integers, so 32 + b·d bits are transmitted in total. The difference between (6) and (32) (Lemma B.2) is that (6) encodes the raw gradient-innovation vector into an integer vector, whilst (32) decodes the integer vector back into a quantized gradient-innovation vector. Specifically, during training each client uses (6) to encode its gradient innovation into an integer at each dimension, and then sends the integer vector ψ_m^k and τ_m^k to the central server. After receiving them, the central server decodes the quantized gradient innovation as (32) states.

The skip rule of LAQ is based on the sum of the accumulated model differences and the quantization errors:

\|\Delta q_m^k\|_2^2 \le \frac{1}{\alpha^2 M^2} \sum_{d'=1}^{D} \xi_{d'} \|\theta^{k+1-d'} - \theta^{k-d'}\|_2^2 + 3\left(\|\varepsilon_m^k\|_2^2 + \|\hat{\varepsilon}_m^{k-1}\|_2^2\right),  (24)

where the ξ_{d'} are a series of manually selected scalars and D is also predetermined. Here ε_m^k is the quantization error of client m at epoch k, and ε̂_m^{k−1} is the quantization error of client m at the last time it uploaded its gradient innovation. Please refer to Sun et al. (2020) for more details on (24). In order to compute the LAQ skip threshold, each client has to store a large amount of past information.

The differences between the AQUILA and LAQ skipping criteria are as follows. First, the AQUILA threshold is easier for a local client to compute: the AQUILA criterion is more concise than LAQ's and thus requires less storage and computing power. Second, the AQUILA criterion is easier to tune because far fewer hyperparameters are introduced: in the LAQ criterion, α, D, and {ξ_{d'}}_{d'=1}^D are all manually selected, whereas AQUILA introduces only the two hyperparameters α and β. Third, with the given threshold, AQUILA has good theoretical properties: its analysis is easier to follow, with no Lyapunov function introduced as in LAQ, and the analysis also shows that AQUILA achieves a better convergence rate in the general non-convex case and under the PŁ condition.
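The storage contrast between the two rules can be read off their right-hand sides. The schematic sketch below evaluates the RHS of the LAQ rule (24) and of the AQUILA rule (12); it assumes the histories are kept as NumPy arrays and is meant only to show what each client must store, not to reproduce either paper's code.

```python
import numpy as np

def laq_rhs(model_hist, xi, alpha, M, eps_k, eps_hat):
    """RHS of LAQ's rule (24): needs the last D+1 global models plus two
    stored quantization-error vectors per client (model_hist[-1] = theta^k)."""
    acc = sum(x * np.sum((model_hist[-d] - model_hist[-d - 1]) ** 2)
              for d, x in enumerate(xi, start=1))  # len(model_hist) >= D + 1
    return acc / (alpha ** 2 * M ** 2) + 3 * (np.sum(eps_k ** 2) + np.sum(eps_hat ** 2))

def aquila_rhs(theta_k, theta_prev, alpha, beta):
    """RHS of AQUILA's rule (12): only the two most recent global models."""
    return beta / alpha ** 2 * np.sum((theta_k - theta_prev) ** 2)
```

With D = 10 as in the experiments, LAQ's threshold forces each client to keep eleven full model snapshots, while AQUILA's needs only two.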
A.3 EXPERIMENT SETUP

In this section, we provide extra hyperparameter settings for our evaluation. For LAQ, we set D = 10 and ξ_1 = ξ_2 = · · · = ξ_D = 0.8/D, matching the setting in their paper. For LENA, we set β_LENA = 40 in their trigger condition. For MARINA, we calculate the uploading probability of the Bernoulli distribution as p = ξ_Q/d, as stated in their paper. In addition, we choose the cross-entropy loss as the objective function in the experiments. Table 1 shows the hyperparameter details of our evaluation.

A.4 COMPREHENSIVE EXPERIMENT RESULTS

This section covers all the experimental results in our paper.

B BASIC FACTS AND SOME LEMMAS

Notation: Bold fonts denote vectors (e.g., θ), and normal fonts denote scalars (e.g., α). The subscript m denotes quantities of a local device m (e.g., f_m(θ)); a function without a subscript denotes an average over all devices (e.g., f(θ)).

Frequently used norm inequalities. Suppose n ∈ N^+ and ‖·‖_2 denotes the ℓ2-norm. For p ∈ R^+ and x_i, a, b ∈ R^d, the following hold.

1. Norm summation inequality: \left\|\sum_{i=1}^{n} x_i\right\|_2^2 \le n \sum_{i=1}^{n} \|x_i\|_2^2.  (25)

2. Inner-product identity: \langle a, b \rangle = \frac{1}{2}\left(\|a\|_2^2 + \|b\|_2^2 - \|a - b\|_2^2\right).  (26)

3. Young's inequality: \|a + b\|_2^2 \le (1 + p)\|a\|_2^2 + (1 + p^{-1})\|b\|_2^2.  (27)

4. Minkowski's inequality: \|a + b\|_2 \le \|a\|_2 + \|b\|_2.  (28)

Assumption B.1. All devices' quantization errors ε^k are constrained by the total error of the omitted devices, i.e., for all k = 0, 1, ..., K, if M_c^k ≠ ∅, there exists γ ≥ 1 such that

\|\varepsilon^k\|_2^2 = \left\|\frac{1}{M}\sum_{m \in \mathcal{M}} \varepsilon_m^k\right\|_2^2 \le \frac{\gamma}{M^2}\left\|\sum_{m \in \mathcal{M}_c^k} \varepsilon_m^k\right\|_2^2,  (29)

where K denotes the termination time and ε_m^k = ∇f_m(θ^k) − (q_m^{k−1} + Δq_m^k). This assumption is easy to verify: when M_c^k ≠ ∅, a bounded variable (here ε^k) is always bounded by a part of itself ((1/M) Σ_{m∈M_c^k} ε_m^k) multiplied by some real number γ. Note that there is a nontrivial scenario in which M_c^k ≠ ∅ but ε_m^k = 0 for all m ∈ M_c^k, so that γ = 0 or does not exist, which conflicts with our assumption. However, this situation only happens when all entries of ε_m^k are zero, i.e., [∇f_m(θ^k)]_i = [q_m^{k−1}]_i for all 0 ≤ i ≤ d.

Lemma B.1. The sum of the quantized gradient innovation and the quantization error is bounded by the global model difference:

\left\|\frac{1}{M}\sum_{m \in \mathcal{M}_c^k} \Delta q_m^k\right\|_2^2 + \|\varepsilon^k\|_2^2 \le \frac{\beta\gamma}{\alpha^2}\|\theta^k - \theta^{k-1}\|_2^2.  (30)

Proof.

\left\|\frac{1}{M}\sum_{m \in \mathcal{M}_c^k} \Delta q_m^k\right\|_2^2 + \|\varepsilon^k\|_2^2
\overset{(a)}{\le} \left\|\frac{1}{M}\sum_{m \in \mathcal{M}_c^k} \Delta q_m^k\right\|_2^2 + \gamma\left\|\frac{1}{M}\sum_{m \in \mathcal{M}_c^k} \varepsilon_m^k\right\|_2^2
\overset{(25)}{\le} |\mathcal{M}_c^k|\sum_{m \in \mathcal{M}_c^k}\left\|\frac{1}{M}\Delta q_m^k\right\|_2^2 + \gamma|\mathcal{M}_c^k|\sum_{m \in \mathcal{M}_c^k}\left\|\frac{1}{M}\varepsilon_m^k\right\|_2^2
= \frac{|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left(\|\Delta q_m^k\|_2^2 + \gamma\|\varepsilon_m^k\|_2^2\right)
\overset{(b)}{\le} \frac{|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left(\gamma\|\Delta q_m^k\|_2^2 + \gamma\|\varepsilon_m^k\|_2^2\right)
\overset{(c)}{\le} \frac{\beta\gamma|\mathcal{M}_c^k|^2}{\alpha^2 M^2}\|\theta^k - \theta^{k-1}\|_2^2
\le \frac{\beta\gamma}{\alpha^2}\|\theta^k - \theta^{k-1}\|_2^2,  (31)

where (a) follows from Assumption B.1, (b) follows since γ ≥ 1 by definition, and (c) utilizes our novel trigger condition (12).

Lemma B.2. From Definition 3.1, the relationship between the quantized gradient innovation Δq_m^k and its quantization representation ψ_m^k, which uses b_m^k bits per dimension, is

\Delta q_m^k = 2\tau_m^k R_m^k \psi_m^k - R_m^k \mathbf{1},  (32)

where 1 ∈ R^d denotes the all-ones vector.

Remark: We can use (32) to compute the quantized gradient innovation in the experimental implementation.
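Putting (6) and (32) together, a client-side encode and server-side decode round trip can be sketched as follows. This is a minimal NumPy illustration assuming 32-bit floats for the range; the zero-range corner case where the innovation vanishes is ignored here, because such an update would be skipped anyway.

```python
import numpy as np

def encode(grad, q_prev, b):
    """Mid-tread quantizer (6): map each entry of the gradient innovation to an
    integer in {0, ..., 2^b - 1}; the payload is 32 bits for R plus b*d bits."""
    v = grad - q_prev
    R = np.max(np.abs(v))            # quantization range (assumed > 0 here)
    tau = 1.0 / (2 ** b - 1)         # quantization granularity
    psi = np.floor((v + R) / (2 * tau * R) + 0.5).astype(int)
    return psi, R, tau

def decode(psi, R, tau):
    """Inverse map (32): recover the quantized gradient innovation."""
    return 2 * tau * R * psi - R

grad, q_prev, b = np.random.randn(1000), np.zeros(1000), 4
psi, R, tau = encode(grad, q_prev, b)
dq = decode(psi, R, tau)
# each entry is off by at most half a grid step, i.e., tau * R
assert np.max(np.abs(dq - (grad - q_prev))) <= tau * R + 1e-12
print(f"payload ~ {32 + b * grad.size} bits vs {32 * grad.size} uncompressed")
```

The assertion makes the per-entry error bound of the mid-tread grid explicit, which is the quantity that enters the quantization loss Δ_m^k in the next appendix section.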
C MISSING PROOF OF LEMMA 3.1 AND THE DERIVATION OF b_m^k

With lazy aggregation, the actual aggregated model at epoch k is

\theta^{k+1} = \theta^k - \frac{\alpha}{M}\sum_{m \in \mathcal{M}^k}\left(q_m^{k-1} + \Delta q_m^k\right) - \frac{\alpha}{M}\sum_{m \in \mathcal{M}_c^k} q_m^{k-1}.  (33)

Suppose Δ_m^k denotes the quantization loss of device m at epoch k and ψ_m^k denotes the quantization representation of the local gradient innovation as in Definition 3.1, i.e.,

\Delta_m^k = \psi_m^k - \frac{\nabla f_m(\theta^k) - q_m^{k-1} + R_m^k \mathbf{1}}{2\tau_m^k R_m^k} - \frac{1}{2}\mathbf{1}.  (34)

With (7), (33), and (34), the model deviation ‖θ̃^k − θ^k‖_2^2 caused by skipping gradients can be bounded as

\|\tilde{\theta}^k - \theta^k\|_2^2 = \left\|\frac{\alpha}{M}\sum_{m \in \mathcal{M}_c^k} \Delta q_m^k\right\|_2^2 = \left\|\frac{\alpha}{M}\sum_{m \in \mathcal{M}_c^k}\left(2\tau_m^k R_m^k \psi_m^k - R_m^k \mathbf{1}\right)\right\|_2^2
\overset{(25)}{\le} \frac{\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left\|2\tau_m^k R_m^k \psi_m^k - R_m^k \mathbf{1}\right\|_2^2
\overset{(34)}{\le} \frac{\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left\|\nabla f_m(\theta^k) - q_m^{k-1} + R_m^k\mathbf{1} + \tau_m^k R_m^k \mathbf{1} + \Delta_m^k - R_m^k\mathbf{1}\right\|_2^2
\overset{(25)}{\le} \frac{2\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1} + \tau_m^k R_m^k \mathbf{1}\right\|_2^2 + \|\Delta_m^k\|_2^2\right)
\overset{(a)}{\le} \frac{2\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1} + \tau_m^k R_m^k \mathbf{1}\right\|_2^2 + d\right)
\overset{(28)}{\le} \frac{2\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left(\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 + \left\|\tau_m^k R_m^k \mathbf{1}\right\|_2\right)^2 + d\right)
= \frac{2\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left(\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 - \left\|\tau_m^k R_m^k \mathbf{1}\right\|_2 + 2\left\|\tau_m^k R_m^k \mathbf{1}\right\|_2\right)^2 + d\right)
\le \frac{4\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left(\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 - \left\|\tau_m^k R_m^k \mathbf{1}\right\|_2\right)^2 + 4\left\|\tau_m^k R_m^k \mathbf{1}\right\|_2^2 + \frac{d}{2}\right)
\overset{(b)}{\le} \frac{4\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left(\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 - \left\|\tau_m^k R_m^k \mathbf{1}\right\|_2\right)^2 + 4(R_m^k)^2 d + \frac{d}{2}\right),  (35)

where 1 ∈ R^d denotes the all-ones vector, (a) uses the fact that every entry of Δ_m^k lies in (−1, 0], and (b) uses R_m^k ≥ τ_m^k R_m^k ≥ 0.

Since R_m^k is independent of τ_m^k, we can formulate an optimization problem over τ_m^k for device m at communication round k as follows:

\min_{0 < \tau_m^k \le 1} \left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 - \left\|\tau_m^k R_m^k \mathbf{1}\right\|_2\right)^2.  (36)

Therefore, the optimal solution of (36) is

(\tau_m^k)^* = \frac{\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2}{R_m^k \sqrt{d}}.  (37)

Then the optimal adaptive quantization level (b_m^k)^* equals

(b_m^k)^* = \left\lfloor \log_2\left(\frac{1}{(\tau_m^k)^*} + 1\right) \right\rfloor = \left\lfloor \log_2\left(\frac{R_m^k \sqrt{d}}{\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2} + 1\right) \right\rfloor.  (38)

Notice that (b_m^k)^* ≥ 1 always holds, since (τ_m^k)^* ≤ 1.

D MISSING PROOF OF LEMMA 4.1, THEOREM 4.1, AND COROLLARY 4.1

Proof. Suppose Assumptions 4.1, 4.2, and B.1 are satisfied and M_c^k ≠ ∅. For the simplicity of the convergence proof, we write Φ^k = (1/M) Σ_{m∈M_c^k} Δq_m^k. First, we prove Lemma 4.1:

f(\theta^{k+1}) - f(\theta^k) \le \langle \nabla f(\theta^k), \theta^{k+1} - \theta^k \rangle + \frac{L}{2}\|\theta^{k+1} - \theta^k\|_2^2
= \langle \nabla f(\theta^k), -\alpha(\nabla f(\theta^k) - \varepsilon^k - \Phi^k) \rangle + \frac{L}{2}\|\theta^{k+1} - \theta^k\|_2^2
= -\alpha\|\nabla f(\theta^k)\|_2^2 + \alpha\langle \nabla f(\theta^k), \varepsilon^k + \Phi^k \rangle + \frac{L}{2}\|\theta^{k+1} - \theta^k\|_2^2
\overset{(26)}{=} -\alpha\|\nabla f(\theta^k)\|_2^2 + \frac{\alpha}{2}\left(\|\nabla f(\theta^k)\|_2^2 + \|\varepsilon^k + \Phi^k\|_2^2 - \frac{1}{\alpha^2}\|\theta^{k+1} - \theta^k\|_2^2\right) + \frac{L}{2}\|\theta^{k+1} - \theta^k\|_2^2
\le -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \frac{\alpha}{2}\|\varepsilon^k + \Phi^k\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2
\overset{(25)}{\le} -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \alpha\|\varepsilon^k\|_2^2 + \alpha\|\Phi^k\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2.  (39)

Hence, we have

f(\theta^{k+1}) - f(\theta^k) \overset{(30)}{\le} -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2 + \frac{\beta\gamma}{\alpha}\|\theta^k - \theta^{k-1}\|_2^2,  (40)

which gives us Theorem 4.1. Summing over k = 1, 2, ..., K, we have

f(\theta^{K+1}) - f(\theta^1) \le -\frac{\alpha}{2}\sum_{k=1}^{K}\|\nabla f(\theta^k)\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{K+1} - \theta^K\|_2^2 + \sum_{k=1}^{K-1}\left(\frac{L}{2} - \frac{1}{2\alpha} + \frac{\beta\gamma}{\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2 + \frac{\beta\gamma}{\alpha}\|\theta^1 - \theta^0\|_2^2.  (41)

Notice that inequality (41) holds for both M_c^k ≠ ∅ and M_c^k = ∅. Therefore, when (L/2 − 1/(2α) + βγ/α) ≤ 0 and all hyperparameters are chosen properly, considering the minimum of ‖∇f(θ^k)‖_2^2,

\min_{k=1,\dots,K}\|\nabla f(\theta^k)\|_2^2 \le \frac{1}{K}\sum_{k=1}^{K}\|\nabla f(\theta^k)\|_2^2 \overset{(41)}{\le} \frac{2}{\alpha K}\left(f(\theta^1) - f(\theta^K) + \frac{\beta\gamma}{\alpha}\|\theta^1 - \theta^0\|_2^2\right).  (42)
Since f(θ^K) ≥ f(θ^*), for (L/2 − 1/(2α) + βγ/α) ≤ 0 and properly chosen hyperparameters we have

\min_{k=1,\dots,K}\|\nabla f(\theta^k)\|_2^2 \le \frac{2}{\alpha K}\left(f(\theta^1) - f(\theta^*) + \frac{\beta\gamma}{\alpha}\|\theta^1 - \theta^0\|_2^2\right) \le \epsilon^2,  (43)

which demonstrates that AQUILA requires K = O(2ω_1/(αϵ^2)) communication rounds, with ω_1 = f(θ^1) − f(θ^*) + (βγ/α)‖θ^1 − θ^0‖_2^2, to achieve min_{k=1,...,K} ‖∇f(θ^k)‖_2^2 ≤ ϵ^2.

E MISSING PROOF OF COROLLARY 4.1 WHEN M_c^k = ∅

Proof. Since the skipping subset of devices is the empty set, from (5) we have

\theta^{k+1} - \theta^k = -\frac{\alpha}{M}\sum_{m \in \mathcal{M}^k}\left(q_m^{k-1} + \Delta q_m^k\right) - \frac{\alpha}{M}\sum_{m \in \mathcal{M}_c^k} q_m^{k-1}
= -\frac{\alpha}{M}\sum_{m \in \mathcal{M}}\left(q_m^{k-1} + \Delta q_m^k\right)
\overset{(11)}{=} -\frac{\alpha}{M}\sum_{m \in \mathcal{M}}\left(\nabla f_m(\theta^k) - \varepsilon_m^k\right)
= -\alpha\left(\nabla f(\theta^k) - \varepsilon^k\right).  (44)

From (14) we have

f(\theta^{k+1}) - f(\theta^k) \le -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \alpha\left\|\frac{1}{M}\sum_{m \in \mathcal{M}_c^k}\Delta q_m^k\right\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2 + \alpha\|\varepsilon^k\|_2^2
\le -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2 + \alpha\|\varepsilon^k\|_2^2
\overset{(27)}{\le} -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \alpha^2\left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left((1 + p)\|\nabla f(\theta^k)\|_2^2 + (1 + p^{-1})\|\varepsilon^k\|_2^2\right) + \alpha\|\varepsilon^k\|_2^2
= -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \frac{1}{2}(\alpha^2 L - \alpha)(1 + p)\|\nabla f(\theta^k)\|_2^2 + \frac{1}{2}(\alpha^2 L - \alpha)(1 + p^{-1})\|\varepsilon^k\|_2^2 + \alpha\|\varepsilon^k\|_2^2
= \frac{\alpha}{2}\left((\alpha L - 1)(1 + p) - 1\right)\|\nabla f(\theta^k)\|_2^2 + \frac{\alpha}{2}\left((\alpha L - 1)(1 + p^{-1}) + 2\right)\|\varepsilon^k\|_2^2.  (45)

If the factor of ‖ε^k‖_2^2 in (45) is less than or equal to 0, that is,

(\alpha L - 1)(1 + p^{-1}) + 2 \le 0,  (46)

then the factor of ‖∇f(θ^k)‖_2^2 is less than −α/2, which indicates that

f(\theta^{k+1}) - f(\theta^k) \le -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2.  (47)

Note that it is not difficult to demonstrate that (46) and L/2 − 1/(2α) + βγ/α ≤ 0 can actually be satisfied at the same time. For instance, we can set p = 0.1, α = 0.1, β = 0.25, γ = 2, L = 2.5, which satisfies both of them.

F MISSING PROOF OF THEOREM 4.2

Proof. Based on the intermediate result (40) of Theorem 4.1 and Assumption 4.3 (the µ-PŁ condition), we have

f(\theta^{k+1}) - f(\theta^k) \le -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2 + \frac{\beta\gamma}{\alpha}\|\theta^k - \theta^{k-1}\|_2^2
\overset{(19)}{\le} -\alpha\mu\left(f(\theta^k) - f(\theta^*)\right) + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2 + \frac{\beta\gamma}{\alpha}\|\theta^k - \theta^{k-1}\|_2^2,  (48)

which is equivalent to

f(\theta^{k+1}) - f(\theta^*) \le (1 - \alpha\mu)\left(f(\theta^k) - f(\theta^*)\right) + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2 + \frac{\beta\gamma}{\alpha}\|\theta^k - \theta^{k-1}\|_2^2.  (49)

Suppose βγ/α ≤ (1 − αµ)(1/(2α) − L/2); then we can show that

f(\theta^{k+1}) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\|\theta^{k+1} - \theta^k\|_2^2 \le (1 - \alpha\mu)\left(f(\theta^k) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\|\theta^k - \theta^{k-1}\|_2^2\right).  (50)

Therefore, applying this recursion for k = 1, 2, ..., K, we have

f(\theta^{K+1}) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\|\theta^{K+1} - \theta^K\|_2^2 \le (1 - \alpha\mu)^K\left(f(\theta^1) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\|\theta^1 - \theta^0\|_2^2\right) \le \epsilon,  (51)

which demonstrates that AQUILA requires K = O(−(1/log(1 − αµ)) log(ω_1/ϵ)) communication rounds, with ω_1 = f(θ^1) − f(θ^*) + (1/(2α) − L/2)‖θ^1 − θ^0‖_2^2, to achieve f(θ^{K+1}) − f(θ^*) + (1/(2α) − L/2)‖θ^{K+1} − θ^K‖_2^2 ≤ ϵ.
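As a quick sanity check on the two complexity bounds just derived, one can plug constants into (43) and (51). The numbers below are purely hypothetical assumptions for illustration, not values taken from the paper's experiments, and the two bounds target different quantities (a squared gradient norm of ε² versus a function gap of ε):

```python
import math

# Hypothetical constants (assumed, for illustration only).
alpha, mu = 0.1, 0.5        # step size and PL constant
omega1, eps = 10.0, 1e-3    # initial sub-optimality omega_1 and target accuracy

# Linear rate under PL, from (51): K >= log(omega1/eps) / (-log(1 - alpha*mu)).
K_pl = math.log(omega1 / eps) / (-math.log(1 - alpha * mu))
# Sublinear non-convex bound, from (43): K = 2*omega1 / (alpha * eps^2).
K_ncvx = 2 * omega1 / (alpha * eps ** 2)
print(f"PL condition: ~{math.ceil(K_pl)} rounds; non-convex bound: ~{K_ncvx:.0f} rounds")
```

The gap between the two numbers illustrates why the PŁ condition yields a much stronger round-complexity guarantee than the general non-convex analysis.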
1. What is the focus of the paper regarding federated learning?
2. What are the strengths of the proposed approach, particularly in terms of combining existing methods?
3. Do you have any concerns or questions about the methodology, especially regarding the local objective functions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

The paper proposes a communication-efficient federated learning strategy, AQUILA, that combines lazily-aggregated quantization (LAQ) and adaptive quantization (AdaQuantFL). This strategy balances the communication frequency for each parameter and the quantization level adaptively. Experimental results show that AQUILA outperforms the baselines (including LAQ and AdaQuantFL individually) on the CIFAR-10, CIFAR-100, and WikiText datasets.

Strengths And Weaknesses

The paper is well-written. Overall, I enjoyed reading the paper. Even though the main ideas and the approaches in the proofs are mainly borrowed from existing works, I think the authors did a good job in combining the methods.

When averaging the local objective functions in (1), why isn't the local function weighted by the number of local samples n_m?

Page 4, first line: quantification -> quantization.

Clarity, Quality, Novelty And Reproducibility

The paper is well written. The authors share experimental details and the code to reproduce the results.
ICLR
Title
AQUILA: Communication Efficient Federated Learning with Adaptive Quantization of Lazily-Aggregated Gradients

Abstract
The development and deployment of federated learning (FL) have been bottlenecked by the heavy communication overheads of exchanging high-dimensional models between the distributed device nodes and the central server. To achieve better error-communication trade-offs, recent efforts have been made to either adaptively reduce the communication frequency by skipping unimportant updates, e.g., lazy aggregation, or adjust the quantization bits for each communication. In this paper, we propose a unifying communication-efficient framework for FL based on adaptive quantization of lazily-aggregated gradients (AQUILA), which adaptively balances two mutually-dependent factors, the communication frequency and the quantization level. Specifically, we start with a careful investigation of the classical lazy aggregation scheme and formulate AQUILA as an optimization problem in which the optimal quantization level is selected by minimizing the model deviation caused by update skipping. Furthermore, we devise a new lazy aggregation strategy to better fit the novel quantization criterion and keep the communication frequency at an appropriate level. The effectiveness and convergence of the proposed AQUILA framework are theoretically verified. The experimental results demonstrate that AQUILA can reduce around 60% of the overall transmitted bits compared to existing methods while achieving identical model performance in a number of non-homogeneous FL scenarios, including Non-IID data and heterogeneous model architectures.

1 INTRODUCTION
With the deployment of ubiquitous sensing and computing devices, the Internet of Things (IoT), as well as many other distributed systems, has gradually grown from concept to reality, bringing dramatic convenience to people's daily lives (Du et al., 2020; Liu et al., 2020; Hard et al., 2018). To fully utilize such distributed computing resources, distributed learning provides a promising framework that can achieve performance comparable to the traditional centralized learning scheme. However, the privacy and security of sensitive data during the updating and transmission processes in distributed learning have been a growing concern. In this context, federated learning (FL) (McMahan et al., 2017) has been developed, allowing distributed devices to collaboratively learn a global model without privacy leakage by keeping private data isolated and masking transmitted information with secure approaches. On account of its privacy-preserving property and its great potential in distributed but privacy-sensitive fields such as finance and health, FL has attracted tremendous attention from both academia and industry in recent years. Unfortunately, in many FL applications, such as image classification and object recognition, the trained model tends to be high-dimensional, resulting in significant communication costs. Hence, communication efficiency has become one of the key bottlenecks of FL. To this end, Sun et al. (2020) propose the lazily-aggregated quantization (LAQ) method, which skips unnecessary parameter uploads by estimating the value of the gradient innovation — the difference between the current unquantized gradient and the previously quantized gradient. Moreover, Mao et al. (2021) devise an adaptive quantized gradient (AQG) strategy based on LAQ that dynamically selects the quantization level from several manually given values during the training process.
Nevertheless, AQG is still not sufficiently adaptive, because the pre-determined quantization levels are difficult to choose in complicated FL environments. In a separate line of work, Jhunjhunwala et al. (2021) introduce an adaptive quantization rule for FL (AdaQuantFL), which searches a given range for an optimal quantization level and achieves a better error-communication trade-off. Most previous research has investigated either optimizing the communication frequency or adjusting the quantization level in a highly adaptive manner, but not both. Intuitively, we ask: can we adaptively adjust the quantization level in the lazy aggregation fashion to reduce the transmitted amounts and the communication frequency simultaneously? In this paper, we select the optimal quantization level for every participating device by minimizing the model deviation caused by skipping quantized gradient updates (i.e., lazy aggregation), which yields a novel quantization criterion that cooperates with a newly proposed lazy aggregation strategy to further reduce overall communication costs while still offering a convergence guarantee.

The contributions of this paper are threefold.
• We propose an innovative FL procedure with adaptive quantization of lazily-aggregated gradients, termed AQUILA, which simultaneously adjusts the communication frequency and the quantization level in a synergistic fashion.
• Instead of naively combining LAQ and AdaQuantFL, AQUILA uses completely different device selection and quantization level calculation methods. Specifically, we derive an adaptive quantization strategy from a new perspective, minimizing the model deviation introduced by lazy aggregation. Subsequently, we present a new lazy aggregation criterion that is more precise and saves device storage. Furthermore, we provide a convergence analysis of AQUILA for the generally non-convex case and under the Polyak-Łojasiewicz condition.
• Beyond normal FL settings, such as the independent and identically distributed (IID) data environment, we experimentally evaluate the performance of AQUILA in a number of non-homogeneous FL settings, such as non-independent and non-identically distributed (Non-IID) local datasets and various heterogeneous model aggregation schemes. The evaluation results reveal that AQUILA considerably mitigates the communication overhead compared to a variety of state-of-the-art algorithms.

2 BACKGROUND AND RELATED WORK

Consider an FL system with one central parameter server and a device set M with M = |M| distributed devices that collaboratively train a global model parameterized by θ ∈ R^d. Each device m ∈ M has a private local dataset D_m = {(x_1^{(m)}, y_1^{(m)}), ..., (x_{n_m}^{(m)}, y_{n_m}^{(m)})} of n_m samples. The federated training process is typically performed by solving the following optimization problem:

\min_{\theta \in \mathbb{R}^d} f(\theta) = \frac{1}{M}\sum_{m=1}^{M} f_m(\theta) \quad \text{with} \quad f_m(\theta) = \mathbb{E}_{(x,y)\sim\mathcal{D}_m}\left[l\left(h_\theta(x), y\right)\right],  (1)

where f : R^d → R denotes the empirical risk, f_m : R^d → R denotes the local objective based on the private data D_m of device m, l denotes the local loss function, and h_θ denotes the local model. The FL training process is conducted by iteratively performing local updates and global aggregation, as proposed in (McMahan et al., 2017). First, at communication round k, each device m receives the global model θ^k from the parameter server and trains it with its local data D_m.
Subsequently, it sends the local gradient ∇f_m(θ^k) to the central server, and the server updates the global model with learning rate α by

\theta^{k+1} := \theta^k - \frac{\alpha}{M}\sum_{m \in \mathcal{M}} \nabla f_m(\theta^k).  (2)

Definition 2.1 (Quantized gradient innovation). For more efficiency, each device only uploads the quantized deflection between the full gradient ∇f_m(θ^k) and the last quantized value q_m^{k−1}, using a quantization operator Q : R^d → R^d, i.e.,

\Delta q_m^k = Q\left(\nabla f_m(\theta^k) - q_m^{k-1}\right).  (3)

For communication frequency reduction, the lazy aggregation strategy allows a device m ∈ M to upload its newly quantized gradient innovation at epoch k only when the change in its local gradient is sufficiently larger than a threshold. Hence, the quantized local gradient q_m^k of device m at epoch k is

q_m^k := \begin{cases} q_m^{k-1}, & \text{if } \left\|Q\left(\nabla f_m(\theta^k) - q_m^{k-1}\right)\right\|_2^2 \le \text{Threshold} \\ q_m^{k-1} + \Delta q_m^k, & \text{otherwise} \end{cases}.  (4)

If device m skips the upload of Δq_m^k, the central server reuses the last gradient q_m^{k−1} for aggregation. Therefore, the global aggregation rule changes from (2) to

\theta^{k+1} = \theta^k - \frac{\alpha}{M}\sum_{m \in \mathcal{M}} q_m^k = \theta^k - \frac{\alpha}{M}\sum_{m \in \mathcal{M}^k}\left(q_m^{k-1} + \Delta q_m^k\right) - \frac{\alpha}{M}\sum_{m \in \mathcal{M}_c^k} q_m^{k-1},  (5)

where M^k denotes the subset of devices that upload their quantized gradient innovations, and M_c^k = M \ M^k denotes the subset of devices that skip the gradient update and reuse the old quantized gradient at epoch k.

AdaQuantFL is proposed to achieve a better error-communication trade-off by adaptively adjusting the quantization level during the FL training process. Specifically, AdaQuantFL computes the optimal quantization level (b^k)^* by (b^k)^* = ⌊√(f(θ^0)/f(θ^k)) · b^0⌋, where f(θ^0) and f(θ^k) are the global objective losses defined in (1). However, AdaQuantFL transmits quantized gradients at every communication round. In order to jointly skip unnecessary communication rounds and adaptively adjust the quantization level of each communication, a naive approach is to quantize lazily aggregated gradients with AdaQuantFL. Nevertheless, this fails to achieve efficient communication, for several reasons. First, given the descending trend of the training loss, AdaQuantFL's criterion may lead to a high quantization bit number, even exceeding 32 bits during training (assuming a floating-point number is represented by 32 bits in our case), which is too large when global convergence is already approaching and makes the quantization meaningless. Second, a higher quantization level results in a smaller quantization error, leading to a lower communication threshold in the lazy aggregation criterion (4) and thus a higher transmission frequency. Consequently, it is desirable to develop a more efficient adaptive quantization method in the lazily-aggregated setting to systematically improve communication efficiency in FL.
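To see concretely how AdaQuantFL's rule (b^k)^* = ⌊√(f(θ^0)/f(θ^k)) · b^0⌋ can push the bit width past 32 as the loss decays, consider the small numeric sketch below; the loss values are hypothetical and chosen only to exhibit the trend.

```python
import math

# AdaQuantFL's level rule with b^0 = 2 and an initial loss f(theta^0) = 2.3
# (both values are assumptions for this illustration).
b0, f0 = 2, 2.3
for fk in [2.3, 1.0, 0.1, 0.01, 0.001]:
    b_star = math.floor(math.sqrt(f0 / fk) * b0)
    print(f"f(theta^k) = {fk:>6}: b* = {b_star}")
```

As f(θ^k) shrinks by three orders of magnitude, the selected level grows from 2 to 95 bits, far exceeding the 32 bits of a full-precision float, which is exactly the pathology the text describes.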
3 ADAPTIVE QUANTIZATION OF LAZILY-AGGREGATED GRADIENTS

Given the above limitations of the naive joint use of the existing adaptive quantization criterion and lazy aggregation strategy, this paper aims to design a unifying procedure for communication efficiency optimization in which the quantization level and the communication frequency are considered synergistically and interactively.

3.1 OPTIMAL QUANTIZATION LEVEL

First, we introduce the definitions of a deterministic rounding quantizer and a fully-aggregated model.

Definition 3.1 (Deterministic mid-tread quantizer). Every element of the gradient innovation of device m at epoch k is mapped to an integer [ψ_m^k]_i as

[\psi_m^k]_i = \left\lfloor \frac{[\nabla f_m(\theta^k)]_i - [q_m^{k-1}]_i + R_m^k}{2\tau_m^k R_m^k} + \frac{1}{2} \right\rfloor, \quad \forall i \in \{1, 2, \dots, d\},  (6)

where ∇f_m(θ^k) denotes the current unquantized gradient, R_m^k = ‖∇f_m(θ^k) − q_m^{k−1}‖_∞ denotes the quantization range, b_m^k denotes the quantization level, and τ_m^k := 1/(2^{b_m^k} − 1) denotes the quantization granularity. More explanations of this quantizer are given in Appendix A.2.

Definition 3.2 (Fully-aggregated model). The fully-aggregated model θ̃ without lazy aggregation at epoch k is computed by

\tilde{\theta}^{k+1} = \theta^k - \frac{\alpha}{M}\sum_{m \in \mathcal{M}}\left(q_m^{k-1} + \Delta q_m^k\right).  (7)

Lemma 3.1. The influence of lazy aggregation at communication round k can be bounded by

\|\tilde{\theta}^k - \theta^k\|_2^2 \le \frac{4\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left(\left(\|\nabla f_m(\theta^k) - q_m^{k-1}\|_2 - \|\tau_m^k R_m^k \mathbf{1}\|_2\right)^2 + 4(R_m^k)^2 d + \frac{d}{2}\right).  (8)

Corresponding to Lemma 3.1, since R_m^k is independent of τ_m^k, we can formulate an optimization problem to minimize the upper bound of this model deviation caused by update skipping for each device m:

\underset{0 < \tau_m^k \le 1}{\text{minimize}} \left(\|\nabla f_m(\theta^k) - q_m^{k-1}\|_2 - \|\tau_m^k R_m^k \mathbf{1}\|_2\right)^2 \quad \text{subject to} \quad \tau_m^k = \frac{1}{2^{b_m^k} - 1}.  (9)

Solving this optimization problem gives AQUILA an adaptive strategy (10) that selects the optimal quantization level based on the quantization range R_m^k, the dimension d of the local model, the current gradient ∇f_m(θ^k), and the last uploaded quantized gradient q_m^{k−1}:

(b_m^k)^* = \left\lfloor \log_2\left(\frac{R_m^k \sqrt{d}}{\|\nabla f_m(\theta^k) - q_m^{k-1}\|_2} + 1\right) \right\rfloor.  (10)

The superiority of (10) comes from the following three aspects. First, since R_m^k ≥ [∇f_m(θ^k)]_i − [q_m^{k−1}]_i ≥ −R_m^k, the optimal quantization level (b_m^k)^* is always greater than or equal to 1. Second, AQUILA can personalize an optimal quantization level for each device according to its own gradient, whereas in AdaQuantFL every device merely uses an identical quantization level determined by the global loss. Third, the gradient innovation and the quantization range R_m^k tend to fluctuate along with the training process rather than keep decreasing, which prevents the quantization level from growing tremendously as in AdaQuantFL.

3.2 PRECISE LAZY AGGREGATION CRITERION

Definition 3.3 (Quantization error). The global quantization error ε^k is defined as the difference between the current unquantized gradient ∇f(θ^k) and its quantized value q^{k−1} + Δq^k, i.e.,

\varepsilon^k = \nabla f(\theta^k) - q^{k-1} - \Delta q^k,  (11)

where ∇f(θ^k) = Σ_{m∈M} ∇f_m(θ^k), q^{k−1} = Σ_{m∈M} q_m^{k−1}, and Δq^k = Σ_{m∈M} Δq_m^k.

To better accommodate the larger quantization errors induced by the fewer quantization bits of (10), AQUILA employs a new communication criterion to avoid a potential expansion of the group of skipped devices:

\|\Delta q_m^k\|_2^2 + \|\varepsilon_m^k\|_2^2 \le \frac{\beta}{\alpha^2}\|\theta^k - \theta^{k-1}\|_2^2, \quad \forall m \in \mathcal{M}_c^k,  (12)

where β ≥ 0 is a tuning factor. Note that this skipping rule is employed at epoch k: each device m calculates its quantized gradient innovation Δq_m^k and quantization error ε_m^k, and then uses this rule to decide whether to upload Δq_m^k. A comparison of AQUILA's skip rule with LAQ's is also given in Appendix A.2. Instead of storing a large number of previous model parameters as LAQ does, the strength of (12) is that AQUILA directly uses the global models of two adjacent rounds as the skip condition, which does not need to estimate the global gradient (and is thus more precise), requires fewer hyperparameters to adjust, and considerably reduces the storage pressure on local devices. This is especially important for small-capacity devices (e.g., sensors) in practical IoT scenarios.
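The level rule (10) and the skip rule (12) are simple enough to state directly in code. The following is a minimal NumPy sketch of both; the variable names are ours, and the zero-innovation corner case is handled by convention rather than taken from the paper.

```python
import numpy as np

def optimal_bits(grad, q_prev):
    """Adaptive level (10): b* = floor(log2(R * sqrt(d) / ||grad - q_prev||_2 + 1)).
    Assumes a nonzero innovation; a zero innovation would be skipped anyway."""
    v = grad - q_prev
    R = np.max(np.abs(v))                 # quantization range (ell-infinity norm)
    d = v.size
    # since ||v||_2 <= sqrt(d) * R, the log2 argument is >= 2, hence b* >= 1
    return int(np.floor(np.log2(R * np.sqrt(d) / np.linalg.norm(v) + 1)))

def should_skip(dq, eps_m, theta_k, theta_prev, alpha, beta):
    """Lazy-aggregation criterion (12): skip the upload when the quantized
    innovation plus the quantization error is dominated by the model movement."""
    lhs = np.sum(dq ** 2) + np.sum(eps_m ** 2)
    rhs = beta / alpha ** 2 * np.sum((theta_k - theta_prev) ** 2)
    return lhs <= rhs
```

Both quantities are computed locally from values the device already holds, which is what makes the criterion cheap compared to LAQ's history-based threshold.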
Algorithm 1 Communication-Efficient FL with AQUILA
Input: the number of communication rounds K, the learning rate α.
Initialize: the initial global model parameter θ^0.
1: Server broadcasts θ^0 to all devices. ▷ For the initial round k = 0.
2: for each device m ∈ M in parallel do
3:   Calculate the local gradient ∇f_m(θ^0).
4:   Compute (b_m^0)^* by setting q_m^{k−1} = 0 in (10) and the quantized gradient innovation Δq_m^0, and transmit it back to the server side.
5: end for
6: for k = 1, 2, ..., K do
7:   Server broadcasts θ^k to all devices.
8:   for each device m ∈ M in parallel do
9:     Calculate the local gradient ∇f_m(θ^k), the optimal local quantization level (b_m^k)^* by (10), and the quantized gradient innovation Δq_m^k.
10:    if (12) does not hold for device m then ▷ If it holds, skip uploading.
11:      Device m transmits Δq_m^k to the server.
12:    end if
13:  end for
14:  Server updates θ^{k+1} using the saved previous global quantized gradient q^{k−1} and the received quantized gradient innovations Δq_m^k: θ^{k+1} := θ^k − α(q^{k−1} + (1/M) Σ_{m∈M^k} Δq_m^k).
15:  Server saves the average quantized gradient q^k for the next aggregation.
16: end for

The detailed process of AQUILA is comprehensively summarized in Algorithm 1. At epoch k = 0, each device calculates b_m^0 by setting q_m^{k−1} = 0 and uploads Δq_m^0 to the server, since (12) is not satisfied. At each epoch k ∈ {1, 2, ..., K}, the server first broadcasts the global model θ^k to all devices. Each device m computes ∇f_m(θ^k) with its local training data and then uses it to calculate an optimal quantization level by (10). Subsequently, each device computes its quantized gradient innovation and determines whether or not to upload it based on the communication criterion (12). Finally, the server updates the new global model θ^{k+1} with the up-to-date quantized gradients q_m^{k−1} + Δq_m^k of the devices that transmit uploads at epoch k, while reusing the old quantized gradients q_m^{k−1} of those that skip.

4 THEORETICAL DERIVATION AND ANALYSIS OF AQUILA

As aforementioned, we bound the model deviation caused by skipped updates with respect to the quantization bits. Specifically, if the communication criterion (12) holds for device m at epoch k, the device does not contribute to epoch k's gradient; otherwise, the loss caused by device m is minimized by the optimal quantization level selection criterion (10). In this section, the theoretical convergence derivation of AQUILA is based on the following standard assumptions.

Assumption 4.1 (L-smoothness). Each local objective function f_m is L_m-smooth, i.e., there exists a constant L_m > 0 such that for all x, y ∈ R^d,

\|\nabla f_m(x) - \nabla f_m(y)\|_2 \le L_m \|x - y\|_2,  (13)

which implies that the global objective function f is L-smooth with L ≤ L̄ = (1/M) Σ_{m=1}^{M} L_m.

Assumption 4.2 (Uniform lower bound). For all x ∈ R^d, there exists f^* ∈ R such that f(x) ≥ f^*.

Lemma 4.1. Under the assumption that the function f is L-smooth, we have

f(\theta^{k+1}) - f(\theta^k) \le -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \alpha\left(\left\|\frac{1}{M}\sum_{m \in \mathcal{M}_c^k}\Delta q_m^k\right\|_2^2 + \|\varepsilon^k\|_2^2\right) + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2.  (14)

4.1 CONVERGENCE ANALYSIS FOR THE GENERALLY NON-CONVEX CASE

Theorem 4.1. Suppose Assumptions 4.1, 4.2, and B.1 (29) are satisfied. If M_c^k ≠ ∅, the global objective function f satisfies

f(\theta^{k+1}) - f(\theta^k) \le -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2 + \frac{\beta\gamma}{\alpha}\|\theta^k - \theta^{k-1}\|_2^2.  (15)

Corollary 4.1. Let all the assumptions of Theorem 4.1 hold and L/2 − 1/(2α) + βγ/α ≤ 0. Then AQUILA requires

K = O\left(\frac{2\omega_1}{\alpha\epsilon^2}\right)  (16)

communication rounds, with ω_1 = f(θ^1) − f(θ^*) + (βγ/α)‖θ^1 − θ^0‖_2^2, to achieve min_k ‖∇f(θ^k)‖_2^2 ≤ ϵ^2.
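Before turning to the comparison with LAG, Algorithm 1 can be exercised end-to-end on a toy problem. The self-contained sketch below simulates AQUILA on synthetic quadratic objectives f_m(θ) = ½‖θ − c_m‖²; the quadratics, the constants, and the use of exact full-batch local gradients are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, alpha, beta, K = 10, 20, 0.1, 0.25, 200
centers = rng.normal(size=(M, d))      # f_m(theta) = 0.5 * ||theta - c_m||^2
theta, theta_prev = np.zeros(d), np.zeros(d)
q = np.zeros((M, d))                   # last quantized gradients q_m^{k-1}
uploads = 0

def quantize(v, b):
    # mid-tread quantizer: (6) and (32) collapsed into one encode-decode step
    R = np.max(np.abs(v))
    if R == 0.0:
        return v.copy()
    tau = 1.0 / (2 ** b - 1)
    psi = np.floor((v + R) / (2 * tau * R) + 0.5)
    return 2 * tau * R * psi - R

for k in range(K):
    for m in range(M):
        g = theta - centers[m]                     # exact local gradient
        v = g - q[m]
        R, nv = np.max(np.abs(v)), np.linalg.norm(v)
        b = int(np.floor(np.log2(R * np.sqrt(d) / nv + 1))) if nv > 0 else 1  # level (10)
        dq = quantize(v, b)
        eps = g - (q[m] + dq)                      # quantization error (11)
        lhs = np.sum(dq ** 2) + np.sum(eps ** 2)
        rhs = beta / alpha ** 2 * np.sum((theta - theta_prev) ** 2)
        if lhs > rhs:                              # criterion (12): upload if violated
            q[m] = q[m] + dq
            uploads += 1
    theta, theta_prev = theta - alpha * q.mean(axis=0), theta

print(f"error to optimum: {np.linalg.norm(theta - centers.mean(axis=0)):.4f}, "
      f"uploads: {uploads}/{M * K}")
```

On this toy problem the iterates approach the average of the local centers (the global optimum) while a noticeable fraction of uploads is skipped, mirroring the behavior claimed for AQUILA in the experiments.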
Compared to LAG. Corresponding to Eq. (70) in Chen et al. (2018), LAG defines a Lyapunov function V_k := f(θ^k) − f(θ^*) + Σ_{d=1}^{D} β_d ‖θ^{k+1−d} − θ^{k−d}‖_2^2 and claims that it satisfies

V_{k+1} - V_k \le -\left(\frac{\alpha}{2} - \tilde{c}(\alpha, \beta_1)(1 + \rho)\alpha^2\right)\|\nabla f(\theta^k)\|_2^2,  (17)

where c̃(α, β_1) = L/2 − 1/(2α) + β_1, β_1 = Dξ/(2αη), ξ < 1/D, and ρ > 0. The result (17) indicates that LAG requires

K_{LAG} = O\left(\frac{2\omega_1}{(\alpha - 2\tilde{c}(\alpha, \beta_1)(1 + \rho)\alpha^2)\,\epsilon^2}\right)  (18)

communication rounds to converge. Since the term c̃(α, β_1)(1 + ρ)α^2 is non-negative, we have α − 2c̃(α, β_1)(1 + ρ)α^2 ≤ α, which demonstrates that AQUILA achieves a convergence rate no worse than LAG's with an appropriate selection of α.

4.2 CONVERGENCE ANALYSIS UNDER THE POLYAK-ŁOJASIEWICZ CONDITION

Assumption 4.3 (µ-PŁ condition). The function f satisfies the PŁ condition with a constant µ > 0, that is,

\|\nabla f(\theta^k)\|_2^2 \ge 2\mu\left(f(\theta^k) - f(\theta^*)\right).  (19)

Theorem 4.2. Suppose Assumptions 4.1, 4.2, and 4.3 are satisfied and M_c^k ≠ ∅. If the hyperparameters satisfy βγ/α ≤ (1 − αµ)(1/(2α) − L/2), then the global objective function satisfies

f(\theta^{k+1}) - f(\theta^k) \le -\alpha\mu\left(f(\theta^k) - f(\theta^*)\right) + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2 + \frac{\beta\gamma}{\alpha}\|\theta^k - \theta^{k-1}\|_2^2,  (20)

and AQUILA requires

K = O\left(-\frac{1}{\log(1 - \alpha\mu)}\log\frac{\omega_1}{\epsilon}\right)  (21)

communication rounds, with ω_1 = f(θ^1) − f(θ^*) + (1/(2α) − L/2)‖θ^1 − θ^0‖_2^2, to achieve f(θ^{K+1}) − f(θ^*) + (1/(2α) − L/2)‖θ^{K+1} − θ^K‖_2^2 ≤ ϵ.

Compared to LAG. According to Eq. (50) in Chen et al. (2018), we have

V_K \le \left(1 - \alpha\mu + \alpha\mu\sqrt{D\xi}\right)^K V_0,  (22)

where ξ < 1/D. Thus LAG requires

K_{LAG} = O\left(-\frac{1}{\log(1 - \alpha\mu + \alpha\mu\sqrt{D\xi})}\log\frac{\omega_1}{\epsilon}\right)  (23)

communication rounds to converge. Compared to Theorem 4.2, we can derive that log(1 − αµ) < log(1 − αµ + αµ√(Dξ)), which indicates that AQUILA converges faster than LAG under the PŁ condition.

Remark. We emphasize that LAQ introduces a Lyapunov function into its proof, making it extremely complicated. In addition, LAQ can only guarantee that the final objective function converges to a neighborhood of the optimal value rather than to the exact optimum f(θ^*). Nevertheless, as discussed in Section 3.2, AQUILA uses the precise model difference as a surrogate for the global gradient and thus simplifies the proof.

5 EXPERIMENTS AND DISCUSSION

5.1 EXPERIMENT SETUP

In this paper, we evaluate AQUILA on CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and the WikiText-2 dataset (Merity et al., 2016), considering the IID and Non-IID data scenarios and heterogeneous model architectures (also a crucial challenge in FL) simultaneously. The FL environment is simulated in Python 3.9 with a PyTorch 11.1 (Paszke et al., 2019) implementation. For diversity of the neural network structures, we train ResNet-18 (He et al., 2016) on the CIFAR-10 dataset, MobileNet-v2 (Sandler et al., 2018) on the CIFAR-100 dataset, and a Transformer (Vaswani et al., 2017) on the WikiText-2 dataset. As for the FL system setting, in the majority of our experiments the system consists of M = 10 devices; however, considering the large-scale nature of FL, we also validate AQUILA on larger systems of M = 100 devices for the CIFAR datasets and M = 80 devices for WikiText-2. The hyperparameters and additional details of our experiments are given in Appendix A.3.

5.2 HOMOGENEOUS ENVIRONMENT

We first evaluate AQUILA in homogeneous settings where all local models share the same model architecture as the global model.
To better demonstrate the effectiveness of AQUILA, its performance is compared with several state-of-the-art methods, including AdaQuantFL, LAQ with fixed levels, LENA (Ghadikolaei et al., 2021), MARINA (Gorbunov et al., 2021), and the naive combination of AdaQuantFL with LAQ. Note that based on this homogeneous setting, we conduct both IID and Non-IID evaluations on the CIFAR-10 and CIFAR-100 datasets, and an IID evaluation on WikiText-2. To simulate the Non-IID FL setting as in (Diao et al., 2020), each device is allocated at most two classes of data in CIFAR-10 and at most 10 classes of data in CIFAR-100, and the amount of data for each label is balanced. The experimental results are presented in Fig. 1, where 100% means that all local models share the same structure as the global model (i.e., homogeneity), 100% (80 devices) denotes an experiment conducted in an 80-device system, and LAdaQ denotes the naive combination of AdaQuantFL and LAQ. For better illustration, the curves have been smoothed by their standard deviation: the solid lines show the values after smoothing, and the transparent shades of the same colors around them show the true values. Additionally, Table 2 shows the total number of bits transmitted by all devices throughout the FL training process. The comprehensive experimental results are presented in Appendix A.4.

5.3 NON-HOMOGENEOUS SCENARIO

In this section, we also evaluate AQUILA with heterogeneous model structures as in HeteroFL (Diao et al., 2020), where the structures of the local models trained on the device side are heterogeneous. Suppose the global model at epoch k is θ^k and its size is d = w_g × h_g; then the local model of each device m is selected by θ_m^k = θ^k[:w_m, :h_m], where w_m = r_m w_g and h_m = r_m h_g, respectively. In this paper, we choose the model complexity level r_m = 0.5.

[Fig. 2: panels (a)–(f).] Most of the symbols in Fig. 2 are identical to those in Fig. 1. 100%-50% is a newly introduced symbol indicating that half of the devices share the same structure as the global model while the other half have only 50% × 50% of the global model's parameters.

Performance Analysis. First of all, AQUILA achieves a significant transmission reduction compared to the naive combination of LAQ and AdaQuantFL on all datasets, which demonstrates the superiority of AQUILA's efficiency. Specifically, Table 2 indicates that, compared to the naive combination, AQUILA saves 57.49% of transmitted bits in the 80-device system on the WikiText-2 dataset and 23.08% of transmitted bits in the 100-device system on the CIFAR-100 dataset. The other results in Table 3 also show a clear reduction in the total number of bits transmitted until convergence. Second, in Fig. 1 and Fig. 2, the trend of AQUILA's communication bits per round clearly verifies the necessity and effectiveness of our adaptive quantization level and skip criterion. In these two figures, the number of bits transmitted in each round of AQUILA fluctuates slightly, indicating the effectiveness of AQUILA's selection rule, while the value of transmitted bits remains at quite a low level, suggesting that the adaptive quantization principle makes training more efficient. Moreover, the figures show that the quantization level selected by AQUILA does not keep increasing during training, unlike AdaQuantFL. In addition, based on these two figures, we can also conclude that AQUILA converges faster under the same communication costs.
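As a concrete aside, the Non-IID split described above (at most two classes per CIFAR-10 device, with balanced per-label counts) can be realized, for instance, as follows; this is one plausible implementation, not the exact script of Diao et al. (2020).

```python
import numpy as np

def two_class_partition(labels, num_devices, classes_per_device=2, seed=0):
    """Assign each device at most `classes_per_device` label classes; each
    class's samples are split evenly across the devices that drew it."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    device_classes = [rng.choice(classes, classes_per_device, replace=False)
                      for _ in range(num_devices)]
    device_indices = [[] for _ in range(num_devices)]
    for c in classes:
        owners = [m for m in range(num_devices) if c in device_classes[m]]
        if not owners:
            continue  # a class nobody drew stays unused in this toy split
        idx = rng.permutation(np.flatnonzero(labels == c))
        for part, m in zip(np.array_split(idx, len(owners)), owners):
            device_indices[m].extend(part.tolist())
    return device_indices

labels = np.repeat(np.arange(10), 100)   # stand-in for CIFAR-10 labels
parts = two_class_partition(labels, num_devices=10)
print([len(p) for p in parts])           # per-device sample counts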
Finally, AQUILA is capable of adapting to a wide range of challenging FL circumstances. In the Non-IID scenario and with heterogeneous model structures, AQUILA still outperforms the other algorithms by significantly reducing the overall transmitted bits while maintaining the same convergence behavior and objective function value. In particular, AQUILA reduces overall communication costs by 60.4% compared to LENA and by 57.2% compared to MARINA on average. These experimental results in non-homogeneous FL settings show that AQUILA can be stably employed in more general and complicated FL scenarios.

5.4 ABLATION STUDY ON THE IMPACT OF TUNING FACTOR β

One key contribution of AQUILA is the new lazy aggregation criterion (12) for reducing communication frequency. In this part, we evaluate the effect of different values of the tuning factor β on the loss in Fig. 3. As β grows within a certain range, the convergence speed of the model slows down (due to lazy aggregation), but it eventually converges to the same model performance while considerably reducing the communication overhead. Increasing β beyond this range, however, degrades the final model performance, since too many essential uploads are skipped and the training becomes deficient. The accuracy (perplexity) comparison of AQUILA under various selections of the tuning factor β is shown in Fig. 10 and indicates the same trend. To sum up, β should be chosen so as to maintain the model's performance while minimizing the total number of transmitted bits. Specifically, we select β = 0.1, 0.25, and 1.25 for the CIFAR-10, CIFAR-100, and WikiText-2 datasets, respectively. [Fig. 3: panels (a)–(f).]

6 CONCLUSIONS AND FUTURE WORK

This paper proposes a communication-efficient FL procedure that simultaneously adjusts two mutually-dependent degrees of freedom: the communication frequency and the quantization level. Through the close cooperation of the novel adaptive quantization and the adjusted lazy aggregation strategy derived in this paper, the proposed AQUILA is proven capable of reducing transmission costs while maintaining the convergence guarantee and model performance of existing methods. The evaluation with Non-IID data distributions and various heterogeneous model architectures demonstrates that AQUILA is compatible with non-homogeneous FL environments.

REPRODUCIBILITY

We present the overall theorem statements and proofs for our main results in the Appendix, as well as the necessary experimental figures. Furthermore, we submit the code of AQUILA in the supplementary material, including all the hyperparameters and a requirements file, to help the public reproduce our experimental results. Our algorithm is straightforward, well-described, and easy to implement.

ETHICS STATEMENT

All evaluations of AQUILA are performed on publicly available datasets for reproducibility purposes. This paper empirically studies the performance of various state-of-the-art algorithms and therefore likely introduces no new ethical or cultural problems. This paper does not utilize any new dataset.

A APPENDIX

The appendix includes supplementary experimental results, mathematical proofs of the aforementioned theorems, and a detailed derivation of the novel adaptive quantization criterion and lazy aggregation strategy. Compared to Fig. 1 and Fig. 2 in the main text, the result figures in the appendix provide a more comprehensive evaluation of AQUILA, with more detailed information including but not limited to accuracy-vs-steps and training-loss-vs-steps curves.
A.1 OVERALL FRAMEWORK OF AQUILA

The cooperation of the novel adaptive quantization criterion (10) and the lazy aggregation strategy (12) is illustrated in Fig. 4a. In contrast to the naive combination of AdaQuantFL and LAQ, where the mutual influence between adaptive quantization and lazy aggregation is not considered (Fig. 4b), AQUILA adaptively optimizes the allocation of quantization bits throughout training to promote the convergence of lazy aggregation, and at the same time utilizes the lazy aggregation strategy to improve the efficiency of adaptive quantization by compressing the transmissions with a lower quantization level.

A.2 EXPLANATION OF THE QUANTIZER AND THE SKIP RULE OF LAQ

The quantizer (6) is a deterministic quantizer that, at each dimension, maps the gradient innovation to the closest point of a one-dimensional grid. The range of the grid is R_m^k, and the granularity is determined by the quantization level through τ_m^k. Each dimension of the gradient innovation is mapped to an integer in {0, 1, 2, ..., 2^b − 1}. More precisely, the 1/2 ensures mapping to the closest integer instead of flooring to a smaller integer, and the R_m^k in the numerator ensures that the mapped integer is non-negative. As a result, when the gradient innovation is transmitted to the central server, 32 bits are used for the range and b·d bits are used for the mapped integers, so 32 + b·d bits are transmitted in total. The difference between (6) and (32) (Lemma B.2) is that (6) encodes the raw gradient-innovation vector into an integer vector, whilst (32) decodes the integer vector back into a quantized gradient-innovation vector. Specifically, during training each client uses (6) to encode its gradient innovation into an integer at each dimension, and then sends the integer vector ψ_m^k and τ_m^k to the central server. After receiving them, the central server decodes the quantized gradient innovation as (32) states.

The skip rule of LAQ is based on the sum of the accumulated model differences and the quantization errors:

\|\Delta q_m^k\|_2^2 \le \frac{1}{\alpha^2 M^2} \sum_{d'=1}^{D} \xi_{d'} \|\theta^{k+1-d'} - \theta^{k-d'}\|_2^2 + 3\left(\|\varepsilon_m^k\|_2^2 + \|\hat{\varepsilon}_m^{k-1}\|_2^2\right),  (24)

where the ξ_{d'} are a series of manually selected scalars and D is also predetermined. Here ε_m^k is the quantization error of client m at epoch k, and ε̂_m^{k−1} is the quantization error of client m at the last time it uploaded its gradient innovation. Please refer to Sun et al. (2020) for more details on (24). In order to compute the LAQ skip threshold, each client has to store a large amount of past information.

The differences between the AQUILA and LAQ skipping criteria are as follows. First, the AQUILA threshold is easier for a local client to compute: the AQUILA criterion is more concise than LAQ's and thus requires less storage and computing power. Second, the AQUILA criterion is easier to tune because far fewer hyperparameters are introduced: in the LAQ criterion, α, D, and {ξ_{d'}}_{d'=1}^D are all manually selected, whereas AQUILA introduces only the two hyperparameters α and β. Third, with the given threshold, AQUILA has good theoretical properties: its analysis is easier to follow, with no Lyapunov function introduced as in LAQ, and the analysis also shows that AQUILA achieves a better convergence rate in the general non-convex case and under the PŁ condition.
A.3 EXPERIMENT SETUP

In this section, we provide extra hyperparameter settings for our evaluation. For LAQ, we set D = 10 and ξ_1 = ξ_2 = · · · = ξ_D = 0.8/D, matching the setting in their paper. For LENA, we set β_LENA = 40 in their trigger condition. For MARINA, we calculate the uploading probability of the Bernoulli distribution as p = ξ_Q/d, as stated in their paper. In addition, we choose the cross-entropy loss as the objective function in the experiments. Table 1 shows the hyperparameter details of our evaluation.

A.4 COMPREHENSIVE EXPERIMENT RESULTS

This section covers all the experimental results in our paper.

B BASIC FACTS AND SOME LEMMAS

Notation: Bold fonts denote vectors (e.g., θ), and normal fonts denote scalars (e.g., α). The subscript m denotes quantities of a local device m (e.g., f_m(θ)); a function without a subscript denotes an average over all devices (e.g., f(θ)).

Frequently used norm inequalities. Suppose n ∈ N^+ and ‖·‖_2 denotes the ℓ2-norm. For p ∈ R^+ and x_i, a, b ∈ R^d, the following hold.

1. Norm summation inequality: \left\|\sum_{i=1}^{n} x_i\right\|_2^2 \le n \sum_{i=1}^{n} \|x_i\|_2^2.  (25)

2. Inner-product identity: \langle a, b \rangle = \frac{1}{2}\left(\|a\|_2^2 + \|b\|_2^2 - \|a - b\|_2^2\right).  (26)

3. Young's inequality: \|a + b\|_2^2 \le (1 + p)\|a\|_2^2 + (1 + p^{-1})\|b\|_2^2.  (27)

4. Minkowski's inequality: \|a + b\|_2 \le \|a\|_2 + \|b\|_2.  (28)

Assumption B.1. All devices' quantization errors ε^k are constrained by the total error of the omitted devices, i.e., for all k = 0, 1, ..., K, if M_c^k ≠ ∅, there exists γ ≥ 1 such that

\|\varepsilon^k\|_2^2 = \left\|\frac{1}{M}\sum_{m \in \mathcal{M}} \varepsilon_m^k\right\|_2^2 \le \frac{\gamma}{M^2}\left\|\sum_{m \in \mathcal{M}_c^k} \varepsilon_m^k\right\|_2^2,  (29)

where K denotes the termination time and ε_m^k = ∇f_m(θ^k) − (q_m^{k−1} + Δq_m^k). This assumption is easy to verify: when M_c^k ≠ ∅, a bounded variable (here ε^k) is always bounded by a part of itself ((1/M) Σ_{m∈M_c^k} ε_m^k) multiplied by some real number γ. Note that there is a nontrivial scenario in which M_c^k ≠ ∅ but ε_m^k = 0 for all m ∈ M_c^k, so that γ = 0 or does not exist, which conflicts with our assumption. However, this situation only happens when all entries of ε_m^k are zero, i.e., [∇f_m(θ^k)]_i = [q_m^{k−1}]_i for all 0 ≤ i ≤ d.

Lemma B.1. The sum of the quantized gradient innovation and the quantization error is bounded by the global model difference:

\left\|\frac{1}{M}\sum_{m \in \mathcal{M}_c^k} \Delta q_m^k\right\|_2^2 + \|\varepsilon^k\|_2^2 \le \frac{\beta\gamma}{\alpha^2}\|\theta^k - \theta^{k-1}\|_2^2.  (30)

Proof.

\left\|\frac{1}{M}\sum_{m \in \mathcal{M}_c^k} \Delta q_m^k\right\|_2^2 + \|\varepsilon^k\|_2^2
\overset{(a)}{\le} \left\|\frac{1}{M}\sum_{m \in \mathcal{M}_c^k} \Delta q_m^k\right\|_2^2 + \gamma\left\|\frac{1}{M}\sum_{m \in \mathcal{M}_c^k} \varepsilon_m^k\right\|_2^2
\overset{(25)}{\le} |\mathcal{M}_c^k|\sum_{m \in \mathcal{M}_c^k}\left\|\frac{1}{M}\Delta q_m^k\right\|_2^2 + \gamma|\mathcal{M}_c^k|\sum_{m \in \mathcal{M}_c^k}\left\|\frac{1}{M}\varepsilon_m^k\right\|_2^2
= \frac{|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left(\|\Delta q_m^k\|_2^2 + \gamma\|\varepsilon_m^k\|_2^2\right)
\overset{(b)}{\le} \frac{|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left(\gamma\|\Delta q_m^k\|_2^2 + \gamma\|\varepsilon_m^k\|_2^2\right)
\overset{(c)}{\le} \frac{\beta\gamma|\mathcal{M}_c^k|^2}{\alpha^2 M^2}\|\theta^k - \theta^{k-1}\|_2^2
\le \frac{\beta\gamma}{\alpha^2}\|\theta^k - \theta^{k-1}\|_2^2,  (31)

where (a) follows from Assumption B.1, (b) follows since γ ≥ 1 by definition, and (c) utilizes our novel trigger condition (12).

Lemma B.2. From Definition 3.1, the relationship between the quantized gradient innovation Δq_m^k and its quantization representation ψ_m^k, which uses b_m^k bits per dimension, is

\Delta q_m^k = 2\tau_m^k R_m^k \psi_m^k - R_m^k \mathbf{1},  (32)

where 1 ∈ R^d denotes the all-ones vector.

Remark: We can use (32) to compute the quantized gradient innovation in the experimental implementation.
C MISSING PROOF OF LEMMA 3.1 AND THE DERIVATION OF b_m^k

With lazy aggregation, the actual aggregated model at epoch k is

\theta^{k+1} = \theta^k - \frac{\alpha}{M}\sum_{m \in \mathcal{M}^k}\left(q_m^{k-1} + \Delta q_m^k\right) - \frac{\alpha}{M}\sum_{m \in \mathcal{M}_c^k} q_m^{k-1}.  (33)

Suppose Δ_m^k denotes the quantization loss of device m at epoch k and ψ_m^k denotes the quantization representation of the local gradient innovation as in Definition 3.1, i.e.,

\Delta_m^k = \psi_m^k - \frac{\nabla f_m(\theta^k) - q_m^{k-1} + R_m^k \mathbf{1}}{2\tau_m^k R_m^k} - \frac{1}{2}\mathbf{1}.  (34)

With (7), (33), and (34), the model deviation ‖θ̃^k − θ^k‖_2^2 caused by skipping gradients can be bounded as

\|\tilde{\theta}^k - \theta^k\|_2^2 = \left\|\frac{\alpha}{M}\sum_{m \in \mathcal{M}_c^k} \Delta q_m^k\right\|_2^2 = \left\|\frac{\alpha}{M}\sum_{m \in \mathcal{M}_c^k}\left(2\tau_m^k R_m^k \psi_m^k - R_m^k \mathbf{1}\right)\right\|_2^2
\overset{(25)}{\le} \frac{\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left\|2\tau_m^k R_m^k \psi_m^k - R_m^k \mathbf{1}\right\|_2^2
\overset{(34)}{\le} \frac{\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left\|\nabla f_m(\theta^k) - q_m^{k-1} + R_m^k\mathbf{1} + \tau_m^k R_m^k \mathbf{1} + \Delta_m^k - R_m^k\mathbf{1}\right\|_2^2
\overset{(25)}{\le} \frac{2\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1} + \tau_m^k R_m^k \mathbf{1}\right\|_2^2 + \|\Delta_m^k\|_2^2\right)
\overset{(a)}{\le} \frac{2\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1} + \tau_m^k R_m^k \mathbf{1}\right\|_2^2 + d\right)
\overset{(28)}{\le} \frac{2\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left(\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 + \left\|\tau_m^k R_m^k \mathbf{1}\right\|_2\right)^2 + d\right)
= \frac{2\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left(\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 - \left\|\tau_m^k R_m^k \mathbf{1}\right\|_2 + 2\left\|\tau_m^k R_m^k \mathbf{1}\right\|_2\right)^2 + d\right)
\le \frac{4\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left(\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 - \left\|\tau_m^k R_m^k \mathbf{1}\right\|_2\right)^2 + 4\left\|\tau_m^k R_m^k \mathbf{1}\right\|_2^2 + \frac{d}{2}\right)
\overset{(b)}{\le} \frac{4\alpha^2|\mathcal{M}_c^k|}{M^2}\sum_{m \in \mathcal{M}_c^k}\left(\left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 - \left\|\tau_m^k R_m^k \mathbf{1}\right\|_2\right)^2 + 4(R_m^k)^2 d + \frac{d}{2}\right),  (35)

where 1 ∈ R^d denotes the all-ones vector, (a) uses the fact that every entry of Δ_m^k lies in (−1, 0], and (b) uses R_m^k ≥ τ_m^k R_m^k ≥ 0.

Since R_m^k is independent of τ_m^k, we can formulate an optimization problem over τ_m^k for device m at communication round k as follows:

\min_{0 < \tau_m^k \le 1} \left(\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2 - \left\|\tau_m^k R_m^k \mathbf{1}\right\|_2\right)^2.  (36)

Therefore, the optimal solution of (36) is

(\tau_m^k)^* = \frac{\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2}{R_m^k \sqrt{d}}.  (37)

Then the optimal adaptive quantization level (b_m^k)^* equals

(b_m^k)^* = \left\lfloor \log_2\left(\frac{1}{(\tau_m^k)^*} + 1\right) \right\rfloor = \left\lfloor \log_2\left(\frac{R_m^k \sqrt{d}}{\left\|\nabla f_m(\theta^k) - q_m^{k-1}\right\|_2} + 1\right) \right\rfloor.  (38)

Notice that (b_m^k)^* ≥ 1 always holds, since (τ_m^k)^* ≤ 1.

D MISSING PROOF OF LEMMA 4.1, THEOREM 4.1, AND COROLLARY 4.1

Proof. Suppose Assumptions 4.1, 4.2, and B.1 are satisfied and M_c^k ≠ ∅. For the simplicity of the convergence proof, we write Φ^k = (1/M) Σ_{m∈M_c^k} Δq_m^k. First, we prove Lemma 4.1:

f(\theta^{k+1}) - f(\theta^k) \le \langle \nabla f(\theta^k), \theta^{k+1} - \theta^k \rangle + \frac{L}{2}\|\theta^{k+1} - \theta^k\|_2^2
= \langle \nabla f(\theta^k), -\alpha(\nabla f(\theta^k) - \varepsilon^k - \Phi^k) \rangle + \frac{L}{2}\|\theta^{k+1} - \theta^k\|_2^2
= -\alpha\|\nabla f(\theta^k)\|_2^2 + \alpha\langle \nabla f(\theta^k), \varepsilon^k + \Phi^k \rangle + \frac{L}{2}\|\theta^{k+1} - \theta^k\|_2^2
\overset{(26)}{=} -\alpha\|\nabla f(\theta^k)\|_2^2 + \frac{\alpha}{2}\left(\|\nabla f(\theta^k)\|_2^2 + \|\varepsilon^k + \Phi^k\|_2^2 - \frac{1}{\alpha^2}\|\theta^{k+1} - \theta^k\|_2^2\right) + \frac{L}{2}\|\theta^{k+1} - \theta^k\|_2^2
\le -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \frac{\alpha}{2}\|\varepsilon^k + \Phi^k\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2
\overset{(25)}{\le} -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \alpha\|\varepsilon^k\|_2^2 + \alpha\|\Phi^k\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2.  (39)

Hence, we have

f(\theta^{k+1}) - f(\theta^k) \overset{(30)}{\le} -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2 + \frac{\beta\gamma}{\alpha}\|\theta^k - \theta^{k-1}\|_2^2,  (40)

which gives us Theorem 4.1. Summing over k = 1, 2, ..., K, we have

f(\theta^{K+1}) - f(\theta^1) \le -\frac{\alpha}{2}\sum_{k=1}^{K}\|\nabla f(\theta^k)\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{K+1} - \theta^K\|_2^2 + \sum_{k=1}^{K-1}\left(\frac{L}{2} - \frac{1}{2\alpha} + \frac{\beta\gamma}{\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2 + \frac{\beta\gamma}{\alpha}\|\theta^1 - \theta^0\|_2^2.  (41)

Notice that inequality (41) holds for both M_c^k ≠ ∅ and M_c^k = ∅. Therefore, when (L/2 − 1/(2α) + βγ/α) ≤ 0 and all hyperparameters are chosen properly, considering the minimum of ‖∇f(θ^k)‖_2^2,

\min_{k=1,\dots,K}\|\nabla f(\theta^k)\|_2^2 \le \frac{1}{K}\sum_{k=1}^{K}\|\nabla f(\theta^k)\|_2^2 \overset{(41)}{\le} \frac{2}{\alpha K}\left(f(\theta^1) - f(\theta^K) + \frac{\beta\gamma}{\alpha}\|\theta^1 - \theta^0\|_2^2\right).  (42)
Since f(θ^K) ≥ f(θ^*), for (L/2 − 1/(2α) + βγ/α) ≤ 0 and properly chosen hyperparameters we have

\min_{k=1,\dots,K}\|\nabla f(\theta^k)\|_2^2 \le \frac{2}{\alpha K}\left(f(\theta^1) - f(\theta^*) + \frac{\beta\gamma}{\alpha}\|\theta^1 - \theta^0\|_2^2\right) \le \epsilon^2,  (43)

which demonstrates that AQUILA requires K = O(2ω_1/(αϵ^2)) communication rounds, with ω_1 = f(θ^1) − f(θ^*) + (βγ/α)‖θ^1 − θ^0‖_2^2, to achieve min_{k=1,...,K} ‖∇f(θ^k)‖_2^2 ≤ ϵ^2.

E MISSING PROOF OF COROLLARY 4.1 WHEN M_c^k = ∅

Proof. Since the skipping subset of devices is the empty set, from (5) we have

\theta^{k+1} - \theta^k = -\frac{\alpha}{M}\sum_{m \in \mathcal{M}^k}\left(q_m^{k-1} + \Delta q_m^k\right) - \frac{\alpha}{M}\sum_{m \in \mathcal{M}_c^k} q_m^{k-1}
= -\frac{\alpha}{M}\sum_{m \in \mathcal{M}}\left(q_m^{k-1} + \Delta q_m^k\right)
\overset{(11)}{=} -\frac{\alpha}{M}\sum_{m \in \mathcal{M}}\left(\nabla f_m(\theta^k) - \varepsilon_m^k\right)
= -\alpha\left(\nabla f(\theta^k) - \varepsilon^k\right).  (44)

From (14) we have

f(\theta^{k+1}) - f(\theta^k) \le -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \alpha\left\|\frac{1}{M}\sum_{m \in \mathcal{M}_c^k}\Delta q_m^k\right\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2 + \alpha\|\varepsilon^k\|_2^2
\le -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2 + \alpha\|\varepsilon^k\|_2^2
\overset{(27)}{\le} -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \alpha^2\left(\frac{L}{2} - \frac{1}{2\alpha}\right)\left((1 + p)\|\nabla f(\theta^k)\|_2^2 + (1 + p^{-1})\|\varepsilon^k\|_2^2\right) + \alpha\|\varepsilon^k\|_2^2
= -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \frac{1}{2}(\alpha^2 L - \alpha)(1 + p)\|\nabla f(\theta^k)\|_2^2 + \frac{1}{2}(\alpha^2 L - \alpha)(1 + p^{-1})\|\varepsilon^k\|_2^2 + \alpha\|\varepsilon^k\|_2^2
= \frac{\alpha}{2}\left((\alpha L - 1)(1 + p) - 1\right)\|\nabla f(\theta^k)\|_2^2 + \frac{\alpha}{2}\left((\alpha L - 1)(1 + p^{-1}) + 2\right)\|\varepsilon^k\|_2^2.  (45)

If the factor of ‖ε^k‖_2^2 in (45) is less than or equal to 0, that is,

(\alpha L - 1)(1 + p^{-1}) + 2 \le 0,  (46)

then the factor of ‖∇f(θ^k)‖_2^2 is less than −α/2, which indicates that

f(\theta^{k+1}) - f(\theta^k) \le -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2.  (47)

Note that it is not difficult to demonstrate that (46) and L/2 − 1/(2α) + βγ/α ≤ 0 can actually be satisfied at the same time. For instance, we can set p = 0.1, α = 0.1, β = 0.25, γ = 2, L = 2.5, which satisfies both of them.

F MISSING PROOF OF THEOREM 4.2

Proof. Based on the intermediate result (40) of Theorem 4.1 and Assumption 4.3 (the µ-PŁ condition), we have

f(\theta^{k+1}) - f(\theta^k) \le -\frac{\alpha}{2}\|\nabla f(\theta^k)\|_2^2 + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2 + \frac{\beta\gamma}{\alpha}\|\theta^k - \theta^{k-1}\|_2^2
\overset{(19)}{\le} -\alpha\mu\left(f(\theta^k) - f(\theta^*)\right) + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2 + \frac{\beta\gamma}{\alpha}\|\theta^k - \theta^{k-1}\|_2^2,  (48)

which is equivalent to

f(\theta^{k+1}) - f(\theta^*) \le (1 - \alpha\mu)\left(f(\theta^k) - f(\theta^*)\right) + \left(\frac{L}{2} - \frac{1}{2\alpha}\right)\|\theta^{k+1} - \theta^k\|_2^2 + \frac{\beta\gamma}{\alpha}\|\theta^k - \theta^{k-1}\|_2^2.  (49)

Suppose βγ/α ≤ (1 − αµ)(1/(2α) − L/2); then we can show that

f(\theta^{k+1}) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\|\theta^{k+1} - \theta^k\|_2^2 \le (1 - \alpha\mu)\left(f(\theta^k) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\|\theta^k - \theta^{k-1}\|_2^2\right).  (50)

Therefore, applying this recursion for k = 1, 2, ..., K, we have

f(\theta^{K+1}) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\|\theta^{K+1} - \theta^K\|_2^2 \le (1 - \alpha\mu)^K\left(f(\theta^1) - f(\theta^*) + \left(\frac{1}{2\alpha} - \frac{L}{2}\right)\|\theta^1 - \theta^0\|_2^2\right) \le \epsilon,  (51)

which demonstrates that AQUILA requires K = O(−(1/log(1 − αµ)) log(ω_1/ϵ)) communication rounds, with ω_1 = f(θ^1) − f(θ^*) + (1/(2α) − L/2)‖θ^1 − θ^0‖_2^2, to achieve f(θ^{K+1}) − f(θ^*) + (1/(2α) − L/2)‖θ^{K+1} − θ^K‖_2^2 ≤ ϵ.
1. What is the focus of the paper regarding federated learning, and what are the proposed adaptations to improve the current methods?
2. What are the strengths and weaknesses of the paper, particularly in terms of its contributions, writing quality, and experimental results?
3. Do you have any questions or concerns about the paper's content, such as the choice of norm, the use of outliers, the communication criteria, the convergence analysis, the assumption made, and the experiment design?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

The paper proposes a new adaptive quantization strategy (and aggregation criteria), AQUILA, by adaptively determining the quantization bits per round per client in lazily aggregated federated learning. The quantization level b^* is chosen based on the ℓ_∞ and ℓ_2 norms of the difference between the current gradient and the previous quantized gradient. Convergence rates in the non-convex and PL settings are provided. Experiments are conducted to show the advantages of the proposed method.

Strengths And Weaknesses

Pros:
- The topic of federated learning (FL) is important and interesting to the NeurIPS community.
- The paper provides both theoretical and empirical results.
- The writing and presentation, though they can still be improved, are in general clear.

Cons:
(i) The motivation is not very strong. While alternating the number of bits b across training rounds might be effective, it is not guaranteed to always bring benefits (reduced communication). Specifically, in formula (10), when choosing b^*, it seems that the number of bits largely relies on the largest entry of the vector (the ℓ_∞ norm). What if the vector only has a few 'outliers'? This strategy would use a large b to compromise those outliers. A more practical and possibly better strategy might be sending those large coordinates in full precision and quantizing the others using low bits. This may use much less communication than the proposed strategy. In my understanding, (10) basically says that if the largest magnitude of the vector is big, we use more bits. This does not look very exciting and promising to me.
(ii) From Figure 2 and Figure 3, we see that the communication across training rounds of AQUILA is almost constant (a flat line). Then why not simply use a fixed number of bits? I think a figure with the evolution of b^* for the clients might help.
(iii) The communication criterion (11) looks more like a byproduct for the convenience of the proof (e.g., Lemma A.1). Could you please provide more intuition on it and its difference with (6)? Besides, I have two questions regarding (11). First, how do you know θ^{k+1} at time k? Second, are both γ and β free hyperparameters? If so, the algorithm becomes much harder to tune, which is also a potential drawback.

The convergence analysis considers the full gradient without noise, which is less practical since people use SGD in most applications.

On page 6, the authors wrote 'we could still prove the correctness of Corollary 4.1 in this special condition without any extra assumptions'. Then why is Assumption 4.3 needed? Indeed, Assumption 4.3 seems a little bit strange. In (13), is the γ the same as that in (11)? Also, you can assume a very large γ for this condition to hold. However, I do not see this γ appearing in later theoretical results. So how does this assumption affect the analysis?

The experiments use M = 24 clients, which is rather small for FL. Also, there is no ablation study on the impact of γ and β. Thus we do not know how robust this algorithm is to different choices of the hyperparameters.

I think the authors should also compare with strategies with fixed-bit quantization like the popular QSGD method. This could help justify the importance of adaptive quantization and strengthen the motivation. Currently, given the above concerns, I think the motivation is not very strong.

Clarity, Quality, Novelty And Reproducibility

Please see above.
ICLR
Title ROBUST ESTIMATION VIA GENERATIVE ADVERSARIAL NETWORKS Abstract Robust estimation under Huber's ε-contamination model has become an important topic in statistics and theoretical computer science. Statistically optimal procedures such as Tukey's median and other estimators based on depth functions are impractical because of their computational intractability. In this paper, we establish an intriguing connection between f-GANs and various depth functions through the lens of f-Learning. Similar to the derivation of f-GANs, we show that these depth functions that lead to statistically optimal robust estimators can all be viewed as variational lower bounds of the total variation distance in the framework of f-Learning. This connection opens the door to computing robust estimators using tools developed for training GANs. In particular, we show in both theory and experiments that some appropriate structures of discriminator networks with hidden layers in GANs lead to statistically optimal robust location estimators for both the Gaussian distribution and general elliptical distributions where the first moment may not exist. 1 INTRODUCTION In the setting of Huber's ε-contamination model (Huber, 1964; 1965), one has i.i.d. observations X_1, ..., X_n ∼ (1−ε)P_θ + εQ, (1) and the goal is to estimate the model parameter θ. Under the data generating process (1), each observation has a 1−ε probability of being drawn from P_θ and an ε probability of being drawn from the contamination distribution Q. The presence of an unknown contamination distribution poses both statistical and computational challenges to the problem. For example, consider a normal mean estimation problem with P_θ = N(θ, I_p). Due to the contamination of data, the sample average, which is optimal when ε = 0, can be arbitrarily far away from the true mean if Q charges a positive probability at infinity. Moreover, even robust estimators such as the coordinatewise median and the geometric median are proved to be suboptimal under the setting of (1) (Chen et al., 2018; Diakonikolas et al., 2016a; Lai et al., 2016). The search for both statistically optimal and computationally feasible procedures has become a fundamental problem in areas including statistics and computer science. For the normal mean estimation problem, it has been shown in Chen et al. (2018) that the minimax rate with respect to the squared ℓ2 loss is p/n ∨ ε², and is achieved by Tukey's median (Tukey, 1975). Despite the statistical optimality of Tukey's median, its computation is not tractable. In fact, even an approximate algorithm takes O(e^{Cp}) time (Amenta et al., 2000; Chan, 2004; Rousseeuw & Struyf, 1998). Recent developments in theoretical computer science are focused on the search for computationally tractable algorithms for estimating θ under Huber's ε-contamination model (1). The success of these efforts started from two fundamental papers, Diakonikolas et al. (2016a) and Lai et al. (2016), where two different but related computational strategies, "iterative filtering" and "dimension halving", were proposed to robustly estimate the normal mean. These algorithms can provably achieve the minimax rate p/n ∨ ε² up to a poly-logarithmic factor in polynomial time. The main idea behind the two methods is the critical fact that a good robust moment estimator can be certified efficiently by higher moments.
This idea was later further extended (Diakonikolas et al., 2017; Du et al., 2017; Diakonikolas et al., 2016b; 2018a;c;b; Kothari et al., 2018) to develop robust and computable procedures for various other problems. However, many of the computationally feasible procedures for robust mean estimation in the literature rely on knowledge of the covariance matrix and sometimes knowledge of the contamination proportion. Even though these assumptions can be relaxed, nontrivial modifications of the algorithms are required for such extensions, and the statistical error rates may also be affected. Compared with these computationally feasible procedures proposed in the recent literature for robust estimation, Tukey's median (9) and other depth-based estimators (Rousseeuw & Hubert, 1999; Mizera, 2002; Zhang, 2002; Mizera & Müller, 2004; Paindaveine & Van Bever, 2017) have some indispensable advantages in terms of their statistical properties. First, the depth-based estimators have clear objective functions that can be interpreted from the perspective of projection pursuit (Mizera, 2002). Second, the depth-based procedures are adaptive to unknown nuisance parameters in the models, such as covariance structures, contamination proportion, and error distributions (Chen et al., 2018; Gao, 2017). Last but not least, Tukey's depth and other depth functions are mostly designed for robust quantile estimation, while the recent advancements in the theoretical computer science literature are all focused on robust moment estimation. Although this is not an issue when it comes to normal mean estimation, the difference is fundamental for robust estimation under general settings such as elliptical distributions, where moments do not necessarily exist. Given the desirable statistical properties discussed above, this paper is focused on the development of computational strategies for depth-like procedures. Our key observation is that robust estimators that are maximizers of depth functions, including halfspace depth, regression depth and covariance matrix depth, can all be derived under the framework of f-GAN (Nowozin et al., 2016). As a result, these depth-based estimators can be viewed as minimizers of variational lower bounds of the total variation distance between the empirical measure and the model distribution (Proposition 2.1). This observation allows us to leverage the recent developments in the deep learning literature to compute these variational lower bounds through neural network approximations. Our theoretical results give insights on how to choose appropriate neural network classes that lead to minimax optimal robust estimation under Huber's ε-contamination model. In particular, Theorems 3.1 and 3.2 characterize the networks which can robustly estimate the Gaussian mean by TV-GAN and JS-GAN, respectively; Theorem 4.1 is an extension to robust location estimation under the class of elliptical distributions, which includes the Cauchy distribution whose mean does not exist. Numerical experiments in Section 5 are provided to show the success of these GANs. 2 ROBUST ESTIMATION AND f-GAN We start with the definition of f-divergence (Csiszár, 1964; Ali & Silvey, 1966). Given a strictly convex function f that satisfies f(1) = 0, the f-divergence between two probability distributions P and Q is defined by D_f(P‖Q) = ∫ f(p/q) dQ. (2) Here, we use p(·) and q(·) to stand for the density functions of P and Q with respect to some common dominating measure. For a fully rigorous definition, see Polyanskiy & Wu (2017).
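As a quick sanity check of definition (2), the following sketch numerically evaluates D_f(P‖Q) for f(x) = (x−1)_+, which equals the total variation distance used throughout the paper. This is our own illustration, with the two densities chosen arbitrarily.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def f_divergence(p, q, f, lo=-20.0, hi=20.0):
    """Numerically evaluate D_f(P||Q) = integral of f(p(x)/q(x)) q(x) dx on [lo, hi]."""
    integrand = lambda x: f(p(x) / q(x)) * q(x)
    val, _ = quad(integrand, lo, hi, limit=200)
    return val

f_tv = lambda x: max(x - 1.0, 0.0)   # generator giving the total variation distance

p = norm(loc=0.0, scale=1.0).pdf     # density of P = N(0, 1)
q = norm(loc=1.0, scale=1.0).pdf     # density of Q = N(1, 1)

tv = f_divergence(p, q, f_tv)
# closed form for two unit-variance Gaussians: TV = 2*Phi(|mu1 - mu2|/2) - 1
print(tv, 2 * norm.cdf(0.5) - 1)     # the two numbers should agree
```

Since ∫(p−q)_+ = (1/2)∫|p−q| for any two densities, this matches the TV-Learning convention introduced in Section 2.2 below.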
Let f* be the convex conjugate of f, that is, f*(t) = sup_{u∈dom f} (ut − f(u)). A variational lower bound of (2) is D_f(P‖Q) ≥ sup_{T∈T} [ E_P T(X) − E_Q f*(T(X)) ]. (3) Note that the inequality (3) holds for any class T, and it becomes an equality whenever the class T contains the function f′(p/q) (Nguyen et al., 2010). For notational simplicity, we also use f′ for an arbitrary element of the subdifferential when the derivative does not exist. With i.i.d. observations X_1, ..., X_n ∼ P, the variational lower bound (3) naturally leads to the following learning method: P̂ = argmin_{Q∈Q} sup_{T∈T} [ (1/n) Σ_{i=1}^n T(X_i) − E_Q f*(T(X)) ]. (4) The formula (4) is a powerful and general way to learn the distribution P from its i.i.d. observations. It is known as f-GAN (Nowozin et al., 2016), an extension of GAN (Goodfellow et al., 2014), which stands for generative adversarial networks. The idea is to find a P̂ so that the best discriminator T in the class T cannot tell the difference between P̂ and the empirical distribution (1/n) Σ_{i=1}^n δ_{X_i}. 2.1 f-LEARNING: A UNIFIED FRAMEWORK Our f-Learning framework is based on a special case of the variational lower bound (3). That is, D_f(P‖Q) ≥ sup_{Q̃∈Q̃_Q} [ E_P f′(q̃(X)/q(X)) − E_Q f*( f′(q̃(X)/q(X)) ) ], (5) where q̃(·) stands for the density function of Q̃. Note that here we allow the class Q̃_Q to depend on the distribution Q in the second argument of D_f(P‖Q). Comparing (5) with (3), it is easy to realize that (5) is a special case of (3) with T = T_Q = { f′(q̃/q) : q̃ ∈ Q̃_Q }. (6) Moreover, the inequality (5) becomes an equality as long as P ∈ Q̃_Q. The sample version of (5) leads to the following learning method: P̂ = argmin_{Q∈Q} sup_{Q̃∈Q̃_Q} [ (1/n) Σ_{i=1}^n f′(q̃(X_i)/q(X_i)) − E_Q f*( f′(q̃(X)/q(X)) ) ]. (7) The learning method (7) will be referred to as f-Learning in the sequel. It is a very general framework that covers many important learning procedures as special cases. For example, consider the special case where Q̃_Q = Q̃ independent of Q, Q = Q̃, and f(x) = x log x. Direct calculations give f′(x) = log x + 1 and f*(t) = e^{t−1}. Therefore, (7) becomes P̂ = argmin_{Q∈Q} sup_{Q̃∈Q} (1/n) Σ_{i=1}^n log( q̃(X_i)/q(X_i) ) = argmax_{q∈Q} (1/n) Σ_{i=1}^n log q(X_i), which is the maximum likelihood estimator (MLE). 2.2 TV-LEARNING AND DEPTH-BASED ESTIMATORS An important generator f that we will discuss here is f(x) = (x−1)_+. This leads to the total variation distance D_f(P‖Q) = (1/2) ∫ |p − q|. With f′(x) = I{x ≥ 1} and f*(t) = t I{0 ≤ t ≤ 1}, TV-Learning is given by P̂ = argmin_{Q∈Q} sup_{Q̃∈Q̃_Q} [ (1/n) Σ_{i=1}^n I{ q̃(X_i)/q(X_i) ≥ 1 } − Q( q̃/q ≥ 1 ) ]. (8) A closely related idea was previously explored by Yatracos (1985); Devroye & Lugosi (2012). The following proposition shows that when Q̃_Q approaches Q in some neighborhood, TV-Learning leads to robust estimators that are defined as the maximizers of various depth functions, including Tukey's depth, regression depth, and covariance depth. Proposition 2.1. The TV-Learning (8) includes the following special cases: 1. Tukey's halfspace depth: Take Q = {N(η, I_p) : η ∈ R^p} and Q̃_η = {N(η̃, I_p) : ‖η̃ − η‖ ≤ r}. As r → 0, (8) becomes θ̂ = argmax_{η∈R^p} inf_{‖u‖=1} (1/n) Σ_{i=1}^n I{ u^T(X_i − η) ≥ 0 }. (9) 2. Regression depth: Take Q = { P_{y,X} = P_{y|X} P_X : P_{y|X} = N(X^T η, 1), η ∈ R^p }, and Q̃_η = { P_{y,X} = P_{y|X} P_X : P_{y|X} = N(X^T η̃, 1), ‖η̃ − η‖ ≤ r }. As r → 0, (8) becomes θ̂ = argmax_{η∈R^p} inf_{‖u‖=1} (1/n) Σ_{i=1}^n I{ u^T X_i (y_i − X_i^T η) ≥ 0 }. (10) 3.
Covariance matrix depth: Take Q = {N(0, Γ) : Γ ∈ E_p}, where E_p stands for the class of p × p covariance matrices, and Q̃_Γ = { N(0, Γ̃) : Γ̃^{−1} = Γ^{−1} + r̃ u u^T ∈ E_p, |r̃| ≤ r, ‖u‖ = 1 }. As r → 0, (8) becomes Σ̂ = argmin_{Γ∈E_p} sup_{‖u‖=1} [ ( (1/n) Σ_{i=1}^n I{ |u^T X_i|² ≤ u^T Γ u } − P(χ²_1 ≤ 1) ) ∨ ( (1/n) Σ_{i=1}^n I{ |u^T X_i|² > u^T Γ u } − P(χ²_1 > 1) ) ]. (11) The formula (9) is recognized as Tukey's median, the maximizer of Tukey's halfspace depth. A traditional understanding of Tukey's median is that (9) maximizes the halfspace depth (Donoho & Gasko, 1992) so that θ̂ is close to the centers of all one-dimensional projections of the data. In the f-Learning framework, N(θ̂, I_p) is understood to be the minimizer of a variational lower bound of the total variation distance. The formula (10) gives the estimator that maximizes the regression depth proposed by Rousseeuw & Hubert (1999). It is worth noting that the derivation of (10) does not depend on the marginal distribution P_X in the linear regression model. Finally, (11) is related to the covariance matrix depth (Zhang, 2002; Chen et al., 2018; Paindaveine & Van Bever, 2017). All of the estimators (9), (10) and (11) are proved to achieve the minimax rate for the corresponding problems under Huber's ε-contamination model (Chen et al., 2018; Gao, 2017). 2.3 FROM f-LEARNING TO f-GAN The connection to various depth functions shows the importance of TV-Learning in robust estimation. However, it is well known that depth-based estimators are very hard to compute (Amenta et al., 2000; van Kreveld et al., 1999; Rousseeuw & Struyf, 1998), which limits their applications to very low-dimensional problems. On the other hand, the general f-GAN framework (4) has been successfully applied to learn complex distributions and images in practice (Goodfellow et al., 2014; Radford et al., 2015; Salimans et al., 2016). The major difference that gives the computational advantage to f-GAN is its flexibility in designing the discriminator class T using neural networks, compared with the pre-specified choice (6) in f-Learning. While f-Learning provides a unified perspective for understanding various depth-based procedures in robust estimation, we can step back into the more general f-GAN for its computational advantages, and design efficient computational strategies. 3 ROBUST MEAN ESTIMATION VIA GAN In this section, we focus on the problem of robust mean estimation under Huber's ε-contamination model. Our goal is to reveal how the choice of the class of discriminators affects robustness and statistical optimality under the simplest possible setting. That is, we have i.i.d. observations X_1, ..., X_n ∼ (1−ε)N(θ, I_p) + εQ, and we need to estimate the unknown location θ ∈ R^p from the contaminated data. Our goal is to achieve the minimax rate p/n ∨ ε² with respect to the squared ℓ2 loss uniformly over all θ ∈ R^p and all Q. 3.1 RESULTS FOR TV-GAN We start with the total variation GAN (TV-GAN) with f(x) = (x−1)_+ in (4). For the Gaussian location family, (4) can be written as θ̂ = argmin_{η∈R^p} max_{D∈D} [ (1/n) Σ_{i=1}^n D(X_i) − E_{N(η,I_p)} D(X) ], (12) with T(x) = D(x) in (4). Now we need to specify the class of discriminators D to solve the classification problem between N(η, I_p) and the empirical distribution (1/n) Σ_{i=1}^n δ_{X_i}. One of the simplest discriminator classes is logistic regression, D = { D(x) = sigmoid(w^T x + b) : w ∈ R^p, b ∈ R }. (13) With D(x) = sigmoid(w^T x + b) = (1 + e^{−w^T x − b})^{−1} in (13), the procedure (12) can be viewed as a smoothed version of TV-Learning (8); a minimal sketch of (12) with this class is given below.
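The following PyTorch sketch implements (12) with the logistic-regression discriminator class (13). It is an illustration under assumed hyperparameters; the learning rates, iteration counts, and batch size are ours, not the paper's tuned values.

```python
import torch

def tv_gan_mean(X, steps=2000, lr_d=0.05, lr_g=0.01, m=100):
    """Minimal sketch of TV-GAN (12): argmin_eta max_{w,b}
    [ mean_i D(X_i) - E_{N(eta,I)} D(X) ], with D(x) = sigmoid(w^T x + b) as in (13)."""
    n, p = X.shape
    eta = X.median(dim=0).values.clone().requires_grad_(True)  # init at coord. median
    w = torch.zeros(p, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt_d = torch.optim.SGD([w, b], lr=lr_d)
    opt_g = torch.optim.SGD([eta], lr=lr_g)
    for _ in range(steps):
        # discriminator ascent on the objective (eta held fixed)
        fake = torch.randn(m, p) + eta.detach()                # draws from N(eta, I_p)
        obj = torch.sigmoid(X @ w + b).mean() - torch.sigmoid(fake @ w + b).mean()
        opt_d.zero_grad(); (-obj).backward(); opt_d.step()
        # generator descent on the objective (w, b held fixed)
        fake = torch.randn(m, p) + eta
        obj = torch.sigmoid(X @ w + b).mean() - torch.sigmoid(fake @ w + b).mean()
        opt_g.zero_grad(); obj.backward(); opt_g.step()
    return eta.detach()

# usage on epsilon-contaminated data: 20% of points shifted to 5 * 1_p
X = torch.randn(1000, 5)
X[:200] += 5.0
print(tv_gan_mean(X))   # should stay near the true mean 0_p
```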
To be specific, the sigmoid function sigmoid(w^T x + b) tends to an indicator function as ‖w‖ → ∞, which leads to a procedure very similar to (9). In fact, the class (13) is richer than the one used in (9), and thus (12) can be understood as the minimizer of a sharper variational lower bound than that of (9). Theorem 3.1. Assume p/n + ε² ≤ c for some sufficiently small constant c > 0. With i.i.d. observations X_1, ..., X_n ∼ (1−ε)N(θ, I_p) + εQ, the estimator θ̂ defined by (12) satisfies ‖θ̂ − θ‖² ≤ C( p/n ∨ ε² ), with probability at least 1 − e^{−C′(p + nε²)} uniformly over all θ ∈ R^p and all Q. The constants C, C′ > 0 are universal. Though TV-GAN can achieve the minimax rate p/n ∨ ε² under Huber's contamination model, it may suffer from optimization difficulties, especially when the distributions Q and N(θ, I_p) are far away from each other, as shown in Figure 1. 3.2 RESULTS FOR JS-GAN Given the intractable optimization property of TV-GAN, we next turn to Jensen-Shannon GAN (JS-GAN) with f(x) = x log x − (x+1) log((x+1)/2). The estimator is defined by θ̂ = argmin_{η∈R^p} max_{D∈D} [ (1/n) Σ_{i=1}^n log D(X_i) + E_{N(η,I_p)} log(1 − D(X)) ] + log 4, (14) with T(x) = log D(x) in (4). This is exactly the original GAN (Goodfellow et al., 2014) specialized to the normal mean estimation problem. The advantages of JS-GAN over other forms of GAN have been studied extensively in the literature (Lucic et al., 2017; Kurach et al., 2018). Unlike TV-GAN, our experimental results show that (14) with the logistic regression discriminator class (13) is not robust to contamination. However, if we replace (13) by a neural network class with one or more hidden layers, the estimator will be robust and will also work very well numerically. To understand why and how the class of discriminators affects the robustness property of JS-GAN, we introduce a new concept called restricted Jensen-Shannon divergence. Let g : R^p → R^d be a function that maps a p-dimensional observation to a d-dimensional feature space. The restricted Jensen-Shannon divergence between two probability distributions P and Q with respect to the feature g is defined as JS_g(P, Q) = max_{w∈W} [ E_P log sigmoid(w^T g(X)) + E_Q log(1 − sigmoid(w^T g(X))) ] + log 4. In other words, P and Q are distinguished by a logistic regression classifier that uses the feature g(X). It is easy to see that JS_g(P, Q) is a variational lower bound of the original Jensen-Shannon divergence. The key property of JS_g(P, Q) is given by the following proposition. Proposition 3.1. Assume W is a convex set that contains an open neighborhood of 0. Then JS_g(P, Q) = 0 if and only if E_P g(X) = E_Q g(X). The proposition asserts that JS_g(·, ·) cannot distinguish P and Q if the feature g(X) has the same expected value under the two distributions. This generalized moment matching effect has also been studied by Liu et al. (2017) for general f-GANs. However, the linear discriminator class considered in Liu et al. (2017) is parametrized in a different way compared with the discriminator class here. When we apply Proposition 3.1 to robust mean estimation, the JS-GAN is trying to match the values of (1/n) Σ_{i=1}^n g(X_i) and E_{N(η,I_p)} g(X) for the feature g(X) used in the logistic regression classifier. This explains what we observed in our numerical experiments. A neural net without any hidden layer is equivalent to a logistic regression with a linear feature g(X) = (X^T, 1)^T ∈ R^{p+1}. Therefore, whenever η = (1/n) Σ_{i=1}^n X_i, we have JS_g( (1/n) Σ_{i=1}^n δ_{X_i}, N(η, I_p) ) = 0, which implies that the sample mean is a global minimizer of (14).
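Here is a quick numerical illustration of Proposition 3.1 (our own toy check, with a crude random search standing in for the exact maximization over w): a mean-matched but very differently shaped Q is invisible to a linear feature, while adding a quadratic feature separates the two distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda t: 1.0 / (1.0 + np.exp(-t))

def js_g(xP, xQ, g, n_w=2000, scale=3.0):
    """Crude empirical JS_g: maximize the logistic objective over random w."""
    GP, GQ = g(xP), g(xQ)
    best = 0.0                                    # w = 0 always attains 0
    for _ in range(n_w):
        w = rng.normal(0, scale, size=GP.shape[1])
        val = (np.log(sig(GP @ w)).mean()
               + np.log(1.0 - sig(GQ @ w)).mean() + np.log(4.0))
        best = max(best, val)
    return best

xP = rng.normal(0.0, 1.0, size=20000)             # P = N(0, 1)
xQ = np.concatenate([rng.normal(-2, 1, 10000),    # Q: mean-zero mixture,
                     rng.normal(+2, 1, 10000)])   # very different shape

linear = lambda x: np.stack([x, np.ones_like(x)], axis=1)        # g(X) = (X, 1)
nonlin = lambda x: np.stack([x, x**2, np.ones_like(x)], axis=1)  # adds X^2 feature

print(js_g(xP, xQ, linear))   # ~0: matched means fool the linear feature
print(js_g(xP, xQ, nonlin))   # clearly > 0: second moment separates P and Q
```

This is precisely why a hidden layer, which supplies nonlinear features g(X), is needed for robustness, as the paper argues next.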
On the other hand, a neural net with at least one hidden layer involves a nonlinear feature function g(X), which is the key that leads to the robustness of (14). We will show rigorously that a neural net with one hidden layer is sufficient to make (14) robust and optimal. Consider the following class of discriminators: D = { D(x) = sigmoid( Σ_{j≥1} w_j σ(u_j^T x + b_j) ) : Σ_{j≥1} |w_j| ≤ κ, u_j ∈ R^p, b_j ∈ R }. (15) The class (15) consists of two-layer neural network functions. While the dimension of the input layer is p, the dimension of the hidden layer can be arbitrary, as long as the weights have a bounded ℓ1 norm. The nonlinear activation function σ(·) is allowed to be: 1) indicator: σ(x) = I{x ≥ 1}; 2) sigmoid: σ(x) = 1/(1 + e^{−x}); 3) ramp: σ(x) = max(min(x + 1/2, 1), 0). Other bounded activation functions are also possible, but we do not exhaustively list them. The rectified linear unit (ReLU) will be studied in Appendix A. Theorem 3.2. Consider the estimator θ̂ defined by (14) with D specified by (15). Assume p/n + ε² ≤ c for some sufficiently small constant c > 0, and set κ = O( √(p/n) + ε ). With i.i.d. observations X_1, ..., X_n ∼ (1−ε)N(θ, I_p) + εQ, we have ‖θ̂ − θ‖² ≤ C( p/n ∨ ε² ), with probability at least 1 − e^{−C′(p + nε²)} uniformly over all θ ∈ R^p and all Q. The constants C, C′ > 0 are universal. 4 ELLIPTICAL DISTRIBUTIONS An advantage of Tukey's median (9) is that it leads to optimal robust location estimation under general elliptical distributions such as the Cauchy distribution, whose mean does not exist. In this section, we show that JS-GAN shares the same property. A random vector X ∈ R^p follows an elliptical distribution if it admits the representation X = θ + ξAU, where U is uniformly distributed on the unit sphere {u ∈ R^p : ‖u‖ = 1} and ξ ≥ 0 is a random variable independent of U that determines the shape of the elliptical distribution (Fang, 2017). The center and the scatter matrix are θ and Σ = AA^T. For a unit vector v, let the density function of ξ v^T U be h. Note that h is independent of v because of the symmetry of U. Then, there is a one-to-one relation between the distribution of ξ and h, and thus the triplet (θ, Σ, h) fully parametrizes an elliptical distribution. Note that h and Σ = AA^T are not identifiable, because ξA = (cξ)(c^{−1}A) for any c > 0. Therefore, without loss of generality, we can restrict h to be a member of the following class: H = { h : h(t) = h(−t), h ≥ 0, ∫ h = 1, ∫ σ(t)(1 − σ(t)) h(t) dt = 1 }. This makes the parametrization (θ, Σ, h) of an elliptical distribution fully identifiable, and we use EC(θ, Σ, h) to denote an elliptical distribution parametrized in this way. The JS-GAN estimator is defined as (θ̂, Σ̂, ĥ) = argmin_{η∈R^p, Γ∈E_p(M), g∈H} max_{D∈D} [ (1/n) Σ_{i=1}^n log D(X_i) + E_{EC(η,Γ,g)} log(1 − D(X)) ] + log 4, (16) where E_p(M) is the set of all positive semi-definite matrices with spectral norm bounded by M. Theorem 4.1. Consider the estimator θ̂ defined above with D specified by (15). Assume M = O(1), p/n + ε² ≤ c for some sufficiently small constant c > 0, and set κ = O( √(p/n) + ε ). With i.i.d. observations X_1, ..., X_n ∼ (1−ε)EC(θ, Σ, h) + εQ, we have ‖θ̂ − θ‖² ≤ C( p/n ∨ ε² ), with probability at least 1 − e^{−C′(p + nε²)} uniformly over all θ ∈ R^p, Σ ∈ E_p(M) and all Q. The constants C, C′ > 0 are universal. Remark 4.1. The result of Theorem 4.1 also holds (and is proved) under the strong contamination model (Diakonikolas et al., 2016a). That is, we have i.i.d. observations X_1, ..., X_n ∼ P for some P satisfying TV(P, EC(θ, Σ, h)) ≤ ε. See its proof in Appendix D.2.
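To make (14)-(15) concrete, the following PyTorch sketch trains a JS-GAN with a one-hidden-layer discriminator for robust Gaussian mean estimation. The structure mirrors the estimator, but the width, learning rates, and step counts are illustrative assumptions (the paper's tuned settings are in its Appendix B), and the ℓ1 bound κ on the outer weights is omitted for simplicity.

```python
import torch
import torch.nn as nn

class OneHiddenDiscriminator(nn.Module):
    """D(x) = sigmoid(sum_j w_j * sigmoid(u_j^T x + b_j)), cf. class (15)."""
    def __init__(self, p, hidden=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(p, hidden), nn.Sigmoid(),   # features sigma(u_j^T x + b_j)
            nn.Linear(hidden, 1), nn.Sigmoid(),   # outer sigmoid over sum_j w_j (.)
        )
    def forward(self, x):
        return self.net(x).squeeze(-1)

def js_gan_mean(X, steps=1500, d_steps=5, lr_d=0.1, lr_g=0.02, m=200):
    n, p = X.shape
    eta = X.median(dim=0).values.clone().requires_grad_(True)  # init at coord. median
    D = OneHiddenDiscriminator(p)
    opt_d = torch.optim.SGD(D.parameters(), lr=lr_d)
    opt_g = torch.optim.SGD([eta], lr=lr_g)
    eps = 1e-6                                    # numerical guard inside the logs
    for _ in range(steps):
        for _ in range(d_steps):                  # inner maximization over D
            fake = torch.randn(m, p) + eta.detach()
            loss_d = -(torch.log(D(X) + eps).mean()
                       + torch.log(1 - D(fake) + eps).mean())
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        fake = torch.randn(m, p) + eta            # outer minimization over eta
        loss_g = torch.log(1 - D(fake) + eps).mean()
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return eta.detach()
```

Replacing the inner Sigmoid features with an identity map (no hidden layer) recovers the linear-feature case of Proposition 3.1, which is exactly the non-robust configuration discussed above.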
Note that Theorem 4.1 guarantees the same convergence rate as in the Gaussian case for all elliptical distributions. This even includes the multivariate Cauchy, whose mean does not exist. Therefore, the location estimator (16) is fundamentally different from Diakonikolas et al. (2016a); Lai et al. (2016), which are designed only for robust mean estimation. We will show such a difference in our numerical results. To achieve rate-optimality for robust location estimation under general elliptical distributions, the estimator (16) differs from (14) only in the generator class. They share the same discriminator class (15). This underlines an important principle for designing GAN estimators: the overall statistical complexity of the estimator is determined only by the discriminator class. The estimator (16) also outputs (Σ̂, ĥ), but we do not claim any theoretical property for (Σ̂, ĥ) in this paper. This will be systematically studied in a future project. 5 NUMERICAL EXPERIMENTS In this section, we give extensive numerical studies of robust mean estimation via GAN. After introducing the implementation details in Section 5.1, we verify our theoretical results on minimax estimation with both TV-GAN and JS-GAN in Section 5.2. Comparison with other methods for robust mean estimation in the literature is given in Section 5.3. The effects of various network structures are studied in Section 5.4. Adaptation to unknown covariance is studied in Section 5.5. In all these cases, we assume i.i.d. observations are drawn from (1−ε)N(0_p, I_p) + εQ, with ε and Q to be specified. Finally, adaptation to elliptical distributions is studied in Section 5.6. 5.1 IMPLEMENTATIONS We adopt the standard algorithmic framework of f-GANs (Nowozin et al., 2016) for the implementation of JS-GAN and TV-GAN for robust mean estimation. In particular, the generator for mean estimation is G_η(Z) = Z + η with Z ∼ N(0_p, I_p); the discriminator D is a multilayer perceptron (MLP), where each layer consists of a linear map and a sigmoid activation function, and the number of nodes will vary in different experiments, to be specified below. Details related to algorithms, tuning, critical hyper-parameters, structures of discriminator networks and other training tricks for stabilization and acceleration are discussed in Appendix B.1. A PyTorch implementation is available at https://github.com/zhuwzh/Robust-GAN-Center. 5.2 NUMERICAL SUPPORT FOR THE MINIMAX RATES We verify the minimax rates achieved by TV-GAN (Theorem 3.1) and JS-GAN (Theorem 3.2) via numerical experiments. The two main scenarios we consider here are √(p/n) < ε and √(p/n) > ε, where in both cases various types of contamination distributions Q are considered. Specifically, the choice of contamination distributions Q includes N(µ·1_p, I_p) with µ ranging in {0.2, 0.5, 1, 5}, N(0.5·1_p, Σ) and Cauchy(τ·1_p). Details of the construction of the covariance matrix Σ are given in Appendix B.2. The distribution Cauchy(τ·1_p) is obtained by combining p independent one-dimensional standard Cauchy distributions with location parameter τ_j = 0.5. The main experimental results are summarized in Figure 2, where the ℓ2 error we present is the maximum error among all choices of Q; detailed numerical results can be found in Tables 7, 8 and 9 in the Appendix. We separately explore the relation between the error and each of ε, √p and 1/√n, with the other two parameters fixed. The study of the relation between the ℓ2 error and ε is in the regime √(p/n) < ε, so that ε dominates the minimax rate.
The scenario √(p/n) > ε is considered in the study of the effects of √p and 1/√n. As shown in Figure 2, the errors are approximately linear in the corresponding parameters in all cases, which empirically verifies the conclusions of Theorems 3.1 and 3.2. 5.3 COMPARISONS WITH OTHER METHODS We perform additional experiments to compare with other methods, including dimension halving (Lai et al., 2016) and iterative filtering (Diakonikolas et al., 2017), under various settings. We emphasize that our method does not require any knowledge of the nuisance parameters, such as the contamination proportion ε. Tuning a GAN is only a matter of optimization, and one can tune parameters based on the objective function alone. Table 1 shows the performance of JS-GAN, TV-GAN, dimension halving, and iterative filtering. The network structure, for both JS-GAN and TV-GAN, has one hidden layer with 20 hidden units when the sample size is 50,000 and 2 hidden units when the sample size is 5,000. The critical hyper-parameters we apply are given in the Appendix, and it turns out that the choice of hyper-parameters is robust across different models when the network structures are the same. To summarize, our method outperforms the other algorithms in most cases. TV-GAN does well when Q and N(0_p, I_p) are non-separable, but fails when Q is far away from N(0_p, I_p), due to the optimization issues discussed in Section 3.1 (Figure 1). On the other hand, JS-GAN stably achieves the lowest error in separable cases and also shows competitive performance on non-separable ones. 5.4 NETWORK STRUCTURES We further study the performance of JS-GAN with various neural network structures. The main observation is that tuning networks with one hidden layer becomes difficult as the dimension grows (e.g., p ≥ 200), while a deeper network can significantly improve the situation, perhaps by improving the optimization landscape. Some experimental results are given in Table 2. On the other hand, one hidden layer performs no worse than deeper networks when the dimension is not very large (e.g., p ≤ 100). More experiments are given in Appendix B.4. Additional theoretical results for deep neural nets are given in Appendix A. 5.5 ADAPTATION TO UNKNOWN COVARIANCE The robust mean estimator constructed through JS-GAN can easily be made adaptive to an unknown covariance structure, which is a special case of (16). We define (θ̂, Σ̂) = argmin_{η∈R^p, Γ∈E_p} max_{D∈D} [ (1/n) Σ_{i=1}^n log D(X_i) + E_{N(η,Γ)} log(1 − D(X)) ] + log 4. The estimator θ̂, as a result, is rate-optimal even when the true covariance matrix is not necessarily the identity and is unknown (see Theorem 4.1). Below, we demonstrate some numerical evidence of the optimality of θ̂ as well as the error of Σ̂ in Table 3. 5.6 ADAPTATION TO ELLIPTICAL DISTRIBUTIONS We consider the estimation of the location parameter θ in the elliptical distribution EC(θ, Σ, h) by the JS-GAN defined in (16). In particular, we study the case with i.i.d. observations X_1, ..., X_n ∼ (1−ε)Cauchy(θ, I_p) + εQ. The density function of Cauchy(θ, Σ) is given by p(x; θ, Σ) ∝ |Σ|^{−1/2} ( 1 + (x − θ)^T Σ^{−1} (x − θ) )^{−(1+p)/2}. Compared with Algorithm 1, the difference lies in the choice of the generator. We consider the generator G_1(ξ, U) = g_ω(ξ)U + θ, where g_ω(ξ) is a non-negative neural network parametrized by ω and some random variable ξ. The random vector U is sampled from the uniform distribution on {u ∈ R^p : ‖u‖ = 1}. If the scatter matrix is unknown, we will use the generator G_2(ξ, U) = g_ω(ξ)AU + θ, with AA^T modeling the scatter matrix. A sketch of this elliptical generator is given below.
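Here is a minimal PyTorch sketch of the elliptical generator G_1(ξ, U) = g_ω(ξ)U + θ described above; the architecture of g_ω and the choice of source noise ξ ∼ N(0, 1) are our own illustrative assumptions.

```python
import torch
import torch.nn as nn

class EllipticalGenerator(nn.Module):
    """G(xi, U) = g_omega(xi) * U + theta, with g_omega >= 0 via a softplus output."""
    def __init__(self, p, hidden=16):
        super().__init__()
        self.theta = nn.Parameter(torch.zeros(p))    # location parameter
        self.g = nn.Sequential(                       # non-negative radial network
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),
        )
    def forward(self, m):
        xi = torch.randn(m, 1)                        # assumed source noise xi
        u = torch.randn(m, self.theta.numel())
        u = u / u.norm(dim=1, keepdim=True)           # uniform on the unit sphere
        return self.g(xi) * u + self.theta           # radius * direction + center

samples = EllipticalGenerator(p=5)(1000)              # 1000 draws from EC(theta, I, h)
```

Normalizing a standard Gaussian vector yields the uniform distribution on the sphere, so the learnable radial network g_ω alone controls the shape h of the elliptical law.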
Table 4 shows the comparison with other methods. Our method still works well under the Cauchy distribution, while the performance of the other methods, which rely on moment conditions, deteriorates in this setting. ACKNOWLEDGEMENT The research of Chao Gao was supported in part by NSF grant DMS-1712957 and NSF Career Award DMS-1847590. The research of Yuan Yao was supported in part by Hong Kong Research Grant Council (HKRGC) grant 16303817, National Basic Research Program of China (No. 2015CB85600), National Natural Science Foundation of China (No. 61370004, 11421110001), as well as awards from Tencent AI Lab, Si Family Foundation, Baidu Big Data Institute, and Microsoft Research-Asia. A ADDITIONAL THEORETICAL RESULTS In this section, we investigate the performance of discriminator classes of deep neural nets with the ReLU activation function. Since our goal is to learn a p-dimensional mean vector, a deep neural network discriminator without any regularization will certainly lead to overfitting. Therefore, it is crucial to design a network class with appropriate regularizations. Inspired by the work of Bartlett (1997); Bartlett & Mendelson (2002), we consider a network class with ℓ1 regularizations on all layers except for the second-to-last layer, which carries an ℓ2 regularization. With G_1^H(B) = { g(x) = ReLU(v^T x) : ‖v‖_1 ≤ B }, a neural network class with l+1 layers is defined recursively as G_{l+1}^H(B) = { g(x) = ReLU( Σ_{h=1}^H v_h g_h(x) ) : Σ_{h=1}^H |v_h| ≤ B, g_h ∈ G_l^H(B) }. Combining with the last sigmoid layer, we obtain the following discriminator class: F_L^H(κ, τ, B) = { D(x) = sigmoid( Σ_{j≥1} w_j sigmoid( Σ_{h=1}^{2p} u_{jh} g_{jh}(x) + b_j ) ) : Σ_{j≥1} |w_j| ≤ κ, Σ_{h=1}^{2p} u_{jh}² ≤ 2, |b_j| ≤ τ, g_{jh} ∈ G_{L−1}^H(B) }. Note that all the activation functions are ReLU(·), except that we use sigmoid(·) in the last layer of the feature map g(·). A theoretical guarantee for the class defined above is given by the following theorem. Theorem A.1. Assume (p log p)/n ∨ ε² ≤ c for some sufficiently small constant c > 0. Consider i.i.d. observations X_1, ..., X_n ∼ (1−ε)N(θ, I_p) + εQ and the estimator θ̂ defined by (14) with D = F_L^H(κ, τ, B), with H ≥ 2p, 2 ≤ L = O(1), 2 ≤ B = O(1), and τ = √(p log p). We set κ = O( √((p log p)/n) + ε ). Then, for the estimator θ̂ defined by (14) with D = F_L^H(κ, τ, B), we have ‖θ̂ − θ‖² ≤ C( (p log p)/n ∨ ε² ), with probability at least 1 − e^{−C′(p log p + nε²)} uniformly over all θ ∈ R^p such that ‖θ‖_∞ ≤ √(log p) and all Q. The theorem shows that JS-GAN with a deep ReLU network can achieve the error rate (p log p)/n ∨ ε² with respect to the squared ℓ2 loss. The condition ‖θ‖_∞ ≤ √(log p) for the ReLU network can be easily satisfied with a simple preprocessing step. We split the data into two halves, whose sizes are log n and n − log n, respectively. Then, we calculate the coordinatewise median θ̃ using the smaller half. It is easy to show that ‖θ̃ − θ‖_∞ ≤ √((log p)/(log n)) ∨ ε with high probability. Then, for each X_i from the second half, the conditional distribution of X_i − θ̃ given the first half is (1−ε)N(θ − θ̃, I_p) + εQ̃. Since √((log p)/(log n)) ∨ ε ≤ √(log p), the condition ‖θ − θ̃‖_∞ ≤ √(log p) is satisfied, and thus we can apply the estimator (14) using the shifted data X_i − θ̃ from the second half. The theoretical guarantee of Theorem A.1 becomes ‖θ̂ − (θ − θ̃)‖² ≤ C( (p log p)/n ∨ ε² ), with high probability. Hence, we can use θ̂ + θ̃ as the final estimator to achieve the same rate as in Theorem A.1. On the other hand, our experiments show that this preprocessing step is not needed; a sketch of this split-and-shift step is given below.
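A minimal numpy sketch of the preprocessing just described (split, coordinatewise median on the small half, shift the rest); the robust_estimate callable stands in for the JS-GAN estimator (14) and is an assumption here.

```python
import numpy as np

def median_shift_preprocess(X, robust_estimate):
    """Split-and-shift preprocessing so that ||theta - theta_tilde||_inf is small.

    X: (n, p) contaminated sample; robust_estimate: callable mapping an
    (m, p) array to a p-vector (e.g., the JS-GAN estimator (14)).
    """
    n = X.shape[0]
    k = max(int(np.log(n)), 1)                        # small half of size ~ log n
    theta_tilde = np.median(X[:k], axis=0)            # coordinatewise median
    theta_hat = robust_estimate(X[k:] - theta_tilde)  # estimate theta - theta_tilde
    return theta_hat + theta_tilde                    # final estimator

# usage with a placeholder estimator (the sample mean stands in for JS-GAN here)
est = median_shift_preprocess(np.random.randn(1000, 5) + 3.0,
                              robust_estimate=lambda Y: Y.mean(axis=0))
```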
We believe that the assumption ‖θ‖_∞ ≤ √(log p) is a technical artifact in the analysis of the Rademacher complexity. It can probably be dropped by a more careful analysis. B DETAILS OF EXPERIMENTS B.1 TRAINING DETAILS The implementation of JS-GAN is given in Algorithm 1, and a simple modification of the objective function leads to that of TV-GAN.

Algorithm 1 JS-GAN: argmin_η max_w [ (1/n) Σ_{i=1}^n log D_w(X_i) + E log(1 − D_w(G_η(Z))) ]
Input: observation set S = {X_1, ..., X_n} ⊂ R^p, discriminator network D_w(x), generator network G_η(z) = z + η, learning rates γ_d and γ_g for the discriminator and the generator, batch size m, discriminator steps per iteration K, total epochs T, average epochs T_0.
Initialization: initialize η with the coordinatewise median of S. Initialize w with N(0, .05) independently on each element, or Xavier (Glorot & Bengio, 2010).
1: for t = 1, ..., T do
2:   for k = 1, ..., K do
3:     Sample a mini-batch {X_1, ..., X_m} from S. Sample {Z_1, ..., Z_m} from N(0, I_p).
4:     g_w ← ∇_w [ (1/m) Σ_{i=1}^m log D_w(X_i) + (1/m) Σ_{i=1}^m log(1 − D_w(G_η(Z_i))) ]
5:     w ← w + γ_d g_w
6:   end for
7:   Sample {Z_1, ..., Z_m} from N(0, I_p).
8:   g_η ← ∇_η [ (1/m) Σ_{i=1}^m log(1 − D_w(G_η(Z_i))) ]
9:   η ← η − γ_g g_η
10: end for
Return: the average estimate η over the last T_0 epochs.

Several important implementation details are discussed below.
• How to tune parameters? The choice of learning rates is crucial to the convergence rate, but the minimax game is hard to evaluate. We propose a simple strategy to tune hyper-parameters, including the learning rates. Suppose we have estimators θ̂_1, ..., θ̂_M with corresponding discriminator networks D_{ŵ_1}, ..., D_{ŵ_M}. Fixing η = θ̂, we further apply gradient descent to D_w for a few more epochs (but not many, in order to prevent overfitting; for example, 10 epochs) and select the θ̂ with the smallest value of the objective function (14) (JS-GAN) or (12) (TV-GAN). We note that training the discriminator and the generator alternately usually does not suffer from overfitting, since the objective function for either the discriminator or the generator is always changing. However, we must be careful about the overfitting issue when training the discriminator alone with a fixed η, and that is why we apply an early stopping strategy here. Fortunately, the experiments show that if the structures of the networks are the same (and then, of course, the dimensions of the inputs are the same), the choices of hyper-parameters are robust across different models, and we present the critical parameters in Table 5 to reproduce the experimental results in Table 1 and Table 2.
• When to stop training? Judging convergence is a difficult task in GAN training, since oscillation may sometimes occur. In computer vision, people often use a task-related measure and stop training once the requirement based on that measure is achieved. In our experiments below, we simply use a sufficiently large T, which works well, but it would still be interesting to explore an efficient early stopping rule in future work.
• How to design the network structure? Although Theorem 3.1 and Theorem 3.2 guarantee the minimax rates of TV-GAN without a hidden layer and JS-GAN with one hidden layer, one may wonder whether deeper network structures will perform better. From our preliminary experiments, TV-GAN with one hidden layer is significantly better than TV-GAN without any hidden layer. Moreover, JS-GAN …
B.2 SETTINGS OF CONTAMINATION Q We introduce the contamination distributions Q used in the experiments.
We first consider Q = N(µ, I_p) with µ ranging over the values in {0.2, 0.5, 1, 5}. Note that the total variation distance between N(0_p, I_p) and N(µ, I_p) is of the order of ‖0_p − µ‖ = ‖µ‖. We hope to use different levels of ‖µ‖ to test the algorithm and verify the error rate in the worst case. Second, we consider Q = N(1.5·1_p, Σ), a Gaussian distribution with a non-trivial covariance matrix Σ. The covariance matrix is generated according to the following steps. First, generate a sparse precision matrix Γ = (γ_ij) with each entry γ_ij = z_ij τ_ij, i ≤ j, where z_ij and τ_ij are independently generated from Uniform(0.4, 0.8) and Bernoulli(0.1). We then define γ_ij = γ_ji for all i > j and Γ̄ = Γ + (|min eig(Γ)| + 0.05) I_p to make the precision matrix symmetric and positive definite, where min eig(Γ) is the smallest eigenvalue of Γ. The covariance matrix is Σ = Γ̄^{−1}. Finally, we consider Q to be a Cauchy distribution with independent components, where the j-th component follows a standard Cauchy distribution with location parameter τ_j = 0.5. B.3 COMPARISON DETAILS In Section 5.3, we compare GANs with dimension halving (Lai et al., 2016) and iterative filtering (Diakonikolas et al., 2017). • Dimension halving. The experiments conducted are based on the code from https://github.com/kal2000/AgnosticMeanAndCovarianceCode. The only hyper-parameter is the threshold in the outlier removal step, and we take C = 2 as suggested in the file outRemSperical.m. • Iterative filtering. The experiments conducted are based on the code from https://github.com/hoonose/robust-filter. We assume ε is known and take the other hyper-parameters as suggested in the file filterGaussianMean.m. B.4 SUPPLEMENTARY EXPERIMENTS FOR NETWORK STRUCTURES The experiments are conducted with i.i.d. observations drawn from (1−ε)N(0_p, I_p) + εN(0.5·1_p, I_p) with ε = 0.2. Table 6 summarizes the results for p = 100, n ∈ {5000, 50000} and various network structures. We observe that TV-GAN with one hidden layer improves over the performance of TV-GAN without any hidden layer. This indicates that the landscape of TV-GAN might be improved by a more complicated network structure. However, adding one more layer does not improve the results. For JS-GAN, we omit the results without a hidden layer because of its lack of robustness (Proposition 3.1). Deeper networks sometimes improve over shallow networks, but this is not always true. We also observe that the optimal choice of the width of the hidden layer depends on the sample size. B.5 TABLES FOR TESTING THE MINIMAX RATES Tables 7, 8 and 9 show the numerical results corresponding to Figure 2. C PROOFS OF PROPOSITION 2.1 AND PROPOSITION 3.1 In the first example, consider Q = {N(η, I_p) : η ∈ R^p}, Q̃_η = {N(η̃, I_p) : ‖η̃ − η‖ ≤ r}. In other words, Q is the Gaussian location family, and Q̃_η is taken to be a subset of a local neighborhood of N(η, I_p). Then, with Q = N(η, I_p) and Q̃ = N(η̃, I_p), the event q̃(X)/q(X) ≥ 1 is equivalent to ‖X − η̃‖² ≤ ‖X − η‖². Since ‖η̃ − η‖ ≤ r, we can write η̃ = η + r̃u for some r̃ ∈ R and u ∈ R^p that satisfy 0 ≤ r̃ ≤ r and ‖u‖ = 1. Then, (8) becomes θ̂ = argmin_{η∈R^p} sup_{‖u‖=1, 0≤r̃≤r} [ (1/n) Σ_{i=1}^n I{ u^T(X_i − η) ≥ r̃/2 } − P( N(0,1) ≥ r̃/2 ) ]. (18) Letting r → 0, we obtain (9), the exact formula of Tukey's median. The next example is a linear model y|X ∼ N(X^T θ, 1). Consider the following classes: Q = { P_{y,X} = P_{y|X} P_X : P_{y|X} = N(X^T η, 1), η ∈ R^p }, Q̃_η = { P_{y,X} = P_{y|X} P_X : P_{y|X} = N(X^T η̃, 1), ‖η̃ − η‖ ≤ r }. Here, P_{y,X} stands for the joint distribution of y and X.
The two classes Q and Q̃_η share the same marginal distribution P_X, and the conditional distributions are specified by N(X^T η, 1) and N(X^T η̃, 1), respectively. Following the same derivation as for Tukey's median and letting r → 0, we obtain the exact formula of the regression depth (10). It is worth noting that the derivation of (10) does not depend on the marginal distribution P_X. The last example is covariance/scatter matrix estimation. For this task, we set Q = {N(0, Γ) : Γ ∈ E_p}, where E_p is the class of all p × p covariance matrices. Inspired by the derivations of Tukey's depth and the regression depth, it is tempting to choose Q̃ in a neighborhood of N(0, Γ). However, a naive choice would lead to a definition that is not even Fisher consistent. We propose a rank-one neighborhood, given by Q̃_Γ = { N(0, Γ̃) : Γ̃^{−1} = Γ^{−1} + r̃ u u^T ∈ E_p, |r̃| ≤ r, ‖u‖ = 1 }. (19) Then, a direct calculation gives I{ (dN(0, Γ̃)/dN(0, Γ))(X) ≥ 1 } = I{ r̃ |u^T X|² ≤ log(1 + r̃ u^T Γ u) }. (20) Since lim_{r̃→0} log(1 + r̃ u^T Γ u)/(r̃ u^T Γ u) = 1, the limiting event of (20) is either I{|u^T X|² ≤ u^T Γ u} or I{|u^T X|² ≥ u^T Γ u}, depending on whether r̃ tends to zero from the left or from the right. Therefore, with the above Q and Q̃_Γ, (8) becomes (11) under the limit r → 0. Even though the definition of (19) is given by a rank-one neighborhood of the inverse covariance matrix, the formula (11) can also be derived with Γ̃^{−1} = Γ^{−1} + r̃ u u^T in (19) replaced by Γ̃ = Γ + r̃ u u^T, by applying the Sherman-Morrison formula. A formula similar to (11) in the literature is given by Σ̂ = argmax_{Γ∈E_p} inf_{‖u‖=1} [ (1/n) Σ_{i=1}^n I{|u^T X_i|² ≤ β u^T Γ u} ∧ (1/n) Σ_{i=1}^n I{|u^T X_i|² ≥ β u^T Γ u} ], (21) which is recognized as the maximizer of what is known as the matrix depth function (Zhang, 2002; Chen et al., 2018; Paindaveine & Van Bever, 2017). The β in (21) is a scalar defined through the equation P(N(0,1) ≤ √β) = 3/4. It is proved in Chen et al. (2018) that Σ̂ achieves the minimax rate under Huber's ε-contamination model. While the formula (11) can be derived from TV-Learning with discriminators of the form I{ (dN(0,Γ̃)/dN(0,Γ))(X) ≥ 1 }, a special case of (6), the formula (21) can be derived directly from TV-GAN with discriminators of the form I{ (dN(0,βΓ̃)/dN(0,βΓ))(X) ≥ 1 }, by following a similar rank-one neighborhood argument. This completes the derivation of Proposition 2.1. To prove Proposition 3.1, we define F(w) = E_P log sigmoid(w^T g(X)) + E_Q log(1 − sigmoid(w^T g(X))) + log 4, so that JS_g(P, Q) = max_{w∈W} F(w). The gradient and Hessian of F(w) are given by ∇F(w) = E_P [ e^{−w^T g(X)} / (1 + e^{−w^T g(X)}) ] g(X) − E_Q [ e^{w^T g(X)} / (1 + e^{w^T g(X)}) ] g(X), ∇²F(w) = −E_P [ e^{w^T g(X)} / (1 + e^{w^T g(X)})² ] g(X) g(X)^T − E_Q [ e^{−w^T g(X)} / (1 + e^{−w^T g(X)})² ] g(X) g(X)^T. Therefore, F(w) is concave in w, and max_{w∈W} F(w) is a convex optimization problem with a convex W. Suppose JS_g(P, Q) = 0. Then max_{w∈W} F(w) = 0 = F(0), which implies ∇F(0) = 0, and thus we have E_P g(X) = E_Q g(X). Now suppose E_P g(X) = E_Q g(X), which is equivalent to ∇F(0) = 0. Therefore, w = 0 is a stationary point of a concave function, and we have JS_g(P, Q) = max_{w∈W} F(w) = F(0) = 0. D PROOFS OF MAIN RESULTS In this section, we present the proofs of all main theorems in the paper. We first establish some useful lemmas in Section D.1, and then the proofs of the main theorems are given in Section D.2. D.1 SOME AUXILIARY LEMMAS Lemma D.1. Given i.i.d.
observations X_1, ..., X_n ∼ P and the function class D defined in (13), we have, for any δ > 0, sup_{D∈D} | (1/n) Σ_{i=1}^n D(X_i) − E D(X) | ≤ C( √(p/n) + √(log(1/δ)/n) ), with probability at least 1 − δ, for some universal constant C > 0. Proof. Let f(X_1, ..., X_n) = sup_{D∈D} | (1/n) Σ_{i=1}^n D(X_i) − E D(X) |. It is clear that f(X_1, ..., X_n) satisfies the bounded difference condition. By McDiarmid's inequality (McDiarmid, 1989), we have f(X_1, ..., X_n) ≤ E f(X_1, ..., X_n) + √(log(1/δ)/(2n)), with probability at least 1 − δ. Using a standard symmetrization technique (Pollard, 2012), we obtain the following bound that involves the Rademacher complexity: E f(X_1, ..., X_n) ≤ 2 E sup_{D∈D} | (1/n) Σ_{i=1}^n ε_i D(X_i) |, (22) where ε_1, ..., ε_n are independent Rademacher random variables. The Rademacher complexity can be bounded by Dudley's integral entropy bound, which gives E sup_{D∈D} | (1/n) Σ_{i=1}^n ε_i D(X_i) | ≲ E (1/√n) ∫_0^2 √(log N(δ, D, ‖·‖_n)) dδ, where N(δ, D, ‖·‖_n) is the δ-covering number of D with respect to the empirical ℓ2 distance ‖f − g‖_n = √( (1/n) Σ_{i=1}^n (f(X_i) − g(X_i))² ). Since the VC-dimension of D is O(p), we have N(δ, D, ‖·‖_n) ≲ p (16e/δ)^{O(p)} (see Theorem 2.6.7 of Van Der Vaart & Wellner (1996)). This leads to the bound (1/√n) ∫_0^2 √(log N(δ, D, ‖·‖_n)) dδ ≲ √(p/n), which gives the desired result. Lemma D.2. Given i.i.d. observations X_1, ..., X_n ∼ P and the function class D defined in (15), we have, for any δ > 0, sup_{D∈D} | (1/n) Σ_{i=1}^n log D(X_i) − E log D(X) | ≤ Cκ( √(p/n) + √(log(1/δ)/n) ), with probability at least 1 − δ, for some universal constant C > 0. Proof. Let f(X_1, ..., X_n) = sup_{D∈D} | (1/n) Σ_{i=1}^n log D(X_i) − E log D(X) |. Since sup_{D∈D} sup_x |log(2D(x))| ≤ κ, we have sup_{x_1,...,x_n,x′_i} | f(x_1, ..., x_n) − f(x_1, ..., x_{i−1}, x′_i, x_{i+1}, ..., x_n) | ≤ 2κ/n. Therefore, by McDiarmid's inequality (McDiarmid, 1989), we have f(X_1, ..., X_n) ≤ E f(X_1, ..., X_n) + κ √(2 log(1/δ)/n), (23) with probability at least 1 − δ. By the same argument as (22), it is sufficient to bound the Rademacher complexity E sup_{D∈D} | (1/n) Σ_{i=1}^n ε_i log(2D(X_i)) |. Since the function ψ(x) = log(2 sigmoid(x)) has Lipschitz constant 1 and satisfies ψ(0) = 0, we have E sup_{D∈D} | (1/n) Σ_{i=1}^n ε_i log(2D(X_i)) | ≤ 2 E sup_{Σ_{j≥1}|w_j|≤κ, u_j∈R^p, b_j∈R} | (1/n) Σ_{i=1}^n ε_i Σ_{j≥1} w_j σ(u_j^T X_i + b_j) |, which uses Theorem 12 of Bartlett & Mendelson (2002). By Hölder's inequality, we further have E sup_{Σ_{j≥1}|w_j|≤κ, u_j∈R^p, b_j∈R} | (1/n) Σ_{i=1}^n ε_i Σ_{j≥1} w_j σ(u_j^T X_i + b_j) | ≤ κ E max_{j≥1} sup_{u_j∈R^p, b_j∈R} | (1/n) Σ_{i=1}^n ε_i σ(u_j^T X_i + b_j) | = κ E sup_{u∈R^p, b∈R} | (1/n) Σ_{i=1}^n ε_i σ(u^T X_i + b) |. Note that for a monotone function σ : R → [0, 1], the VC-dimension of the class {σ(u^T x + b) : u ∈ R^p, b ∈ R} is O(p). Therefore, by the same Dudley integral entropy bound argument as in the proof of Lemma D.1, we have E sup_{u∈R^p, b∈R} | (1/n) Σ_{i=1}^n ε_i σ(u^T X_i + b) | ≲ √(p/n), which leads to the desired result. Lemma D.3. Given i.i.d. observations X_1, ..., X_n ∼ N(θ, I_p) and the function class F_L^H(κ, τ, B), assume ‖θ‖_∞ ≤ √(log p) and set τ = √(p log p). We have, for any δ > 0, sup_{D∈F_L^H(κ,τ,B)} | (1/n) Σ_{i=1}^n log D(X_i) − E log D(X) | ≤ Cκ( (2B)^{L−1} √((p log p)/n) + √(log(1/δ)/n) ), with probability at least 1 − δ, for some universal constant C > 0. Proof. Write f(X_1, ..., X_n) = sup_{D∈F_L^H(κ,τ,B)} | (1/n) Σ_{i=1}^n log D(X_i) − E log D(X) |. Then, the inequality (23) holds with probability at least 1 − δ. It is sufficient to analyze the Rademacher complexity.
Using the fact that the function log(2 sigmoid(x)) is Lipschitz, together with Hölder's inequality, we have E sup_{D∈F_L^H(κ,τ,B)} | (1/n) Σ_{i=1}^n ε_i log(2D(X_i)) | ≤ 2 E sup_{‖w‖_1≤κ, ‖u_{j·}‖_2≤2, |b_j|≤τ, g_{jh}∈G_{L−1}^H(B)} | (1/n) Σ_{i=1}^n ε_i Σ_{j≥1} w_j sigmoid( Σ_{h=1}^{2p} u_{jh} g_{jh}(X_i) + b_j ) | ≤ 2κ E sup_{‖u‖_2≤2, |b|≤τ, g_h∈G_{L−1}^H(B)} | (1/n) Σ_{i=1}^n ε_i sigmoid( Σ_{h=1}^{2p} u_h g_h(X_i) + b ) | ≤ 4κ E sup_{‖u‖_2≤2, |b|≤τ, g_h∈G_{L−1}^H(B)} | (1/n) Σ_{i=1}^n ε_i ( Σ_{h=1}^{2p} u_h g_h(X_i) + b ) | ≤ 8√p κ E sup_{g∈G_{L−1}^H(B)} | (1/n) Σ_{i=1}^n ε_i g(X_i) | + 4κτ E | (1/n) Σ_{i=1}^n ε_i |. Now we use the notation Z_i = X_i − θ ∼ N(0, I_p) for i = 1, ..., n. We bound E sup_{g∈G_{L−1}^H(B)} | (1/n) Σ_{i=1}^n ε_i g(Z_i + θ) | by induction. Since E( sup_{g∈G_1^H(B)} (1/n) Σ_{i=1}^n ε_i g(Z_i + θ) ) ≤ E( sup_{‖v‖_1≤B} (1/n) Σ_{i=1}^n ε_i v^T(Z_i + θ) ) ≤ B( E‖ (1/n) Σ_{i=1}^n ε_i Z_i ‖_∞ + ‖θ‖_∞ E| (1/n) Σ_{i=1}^n ε_i | ) ≤ CB (√(log p) + ‖θ‖_∞)/√n, and E( sup_{g∈G_{l+1}^H(B)} (1/n) Σ_{i=1}^n ε_i g(Z_i + θ) ) ≤ E( sup_{‖v‖_1≤B, g_h∈G_l^H(B)} (1/n) Σ_{i=1}^n ε_i Σ_{h=1}^H v_h g_h(Z_i + θ) ) ≤ B E( sup_{g∈G_l^H(B)} | (1/n) Σ_{i=1}^n ε_i g(Z_i + θ) | ) ≤ 2B E( sup_{g∈G_l^H(B)} (1/n) Σ_{i=1}^n ε_i g(Z_i + θ) ), we have E( sup_{g∈G_{L−1}^H(B)} (1/n) Σ_{i=1}^n ε_i g(Z_i + θ) ) ≤ C(2B)^{L−1} (√(log p) + ‖θ‖_∞)/√n. Combining the above inequalities, we get E( sup_{D∈F_L^H(κ,τ,B)} (1/n) Σ_{i=1}^n ε_i log D(Z_i + θ) ) ≤ Cκ( √p (2B)^{L−1} (√(log p) + ‖θ‖_∞)/√n + τ/√n ). This leads to the desired result under the conditions on τ and ‖θ‖_∞. D.2 PROOFS OF MAIN THEOREMS Proof of Theorem 3.1. We first introduce some notation. Define F(P, η) = max_{w,b} F_{w,b}(P, η), where F_{w,b}(P, η) = E_P sigmoid(w^T X + b) − E_{N(η,I_p)} sigmoid(w^T X + b). With this definition, we have θ̂ = argmin_η F(P_n, η), where we use P_n for the empirical distribution (1/n) Σ_{i=1}^n δ_{X_i}. We shorthand N(η, I_p) by P_η, and then F(P_θ, θ̂) ≤ F((1−ε)P_θ + εQ, θ̂) + ε (24) ≤ F(P_n, θ̂) + ε + C( √(p/n) + √(log(1/δ)/n) ) (25) ≤ F(P_n, θ) + ε + C( √(p/n) + √(log(1/δ)/n) ) (26) ≤ F((1−ε)P_θ + εQ, θ) + ε + 2C( √(p/n) + √(log(1/δ)/n) ) (27) ≤ F(P_θ, θ) + 2ε + 2C( √(p/n) + √(log(1/δ)/n) ) (28) = 2ε + 2C( √(p/n) + √(log(1/δ)/n) ). (29) With probability at least 1 − δ, the above inequalities hold. We will explain each inequality. Since F((1−ε)P_θ + εQ, η) = max_{w,b} [ (1−ε)F_{w,b}(P_θ, η) + εF_{w,b}(Q, η) ], we have sup_η | F((1−ε)P_θ + εQ, η) − F(P_θ, η) | ≤ ε, which implies (24) and (28). The inequalities (25) and (27) are implied by Lemma D.1 and the fact that sup_η | F(P_n, η) − F((1−ε)P_θ + εQ, η) | ≤ sup_{w,b} | (1/n) Σ_{i=1}^n sigmoid(w^T X_i + b) − E sigmoid(w^T X + b) |. The inequality (26) is a direct consequence of the definition of θ̂. Finally, it is easy to see that F(P_θ, θ) = 0, which gives (29). In summary, we have derived that, with probability at least 1 − δ, F_{w,b}(P_θ, θ̂) ≤ 2ε + 2C( √(p/n) + √(log(1/δ)/n) ), for all w ∈ R^p and b ∈ R. For any u ∈ R^p such that ‖u‖ = 1, we take w = u and b = −u^T θ, and we have f(0) − f(u^T(θ − θ̂)) ≤ 2ε + 2C( √(p/n) + √(log(1/δ)/n) ), where f(t) = ∫ 1/(1 + e^{z+t}) φ(z) dz, with φ(·) being the probability density function of N(0,1). It is not hard to see that as long as |f(t) − f(0)| ≤ c for some sufficiently small constant c > 0, then |f(t) − f(0)| ≥ c′|t| for some constant c′ > 0. This implies ‖θ̂ − θ‖ = sup_{‖u‖=1} |u^T(θ̂ − θ)| ≤ (1/c′) sup_{‖u‖=1} | f(0) − f(u^T(θ − θ̂)) | ≲ ε + √(p/n) + √(log(1/δ)/n), with probability at least 1 − δ. The proof is complete. Proof of Theorem 3.2. We continue to use P_η to denote N(η, I_p). Define F(P, η) = max_{‖w‖_1≤κ, u, b} F_{w,u,b}(P, η), where F_{w,u,b}(P, η) = E_P log D(X) + E_{N(η,I_p)} log(1 − D(X)) + log 4, with D(x) = sigmoid( Σ_{j≥1} w_j σ(u_j^T x + b_j) ).
Then, F(P_θ, θ̂) ≤ F((1−ε)P_θ + εQ, θ̂) + 2κε (30) ≤ F(P_n, θ̂) + 2κε + Cκ( √(p/n) + √(log(1/δ)/n) ) (31) ≤ F(P_n, θ) + 2κε + Cκ( √(p/n) + √(log(1/δ)/n) ) (32) ≤ F((1−ε)P_θ + εQ, θ) + 2κε + 2Cκ( √(p/n) + √(log(1/δ)/n) ) (33) ≤ F(P_θ, θ) + 4κε + 2Cκ( √(p/n) + √(log(1/δ)/n) ) (34) = 4κε + 2Cκ( √(p/n) + √(log(1/δ)/n) ). The inequalities (30)-(34) follow arguments similar to those for (24)-(28). To be specific, (31) and (33) are implied by Lemma D.2, and (32) is a direct consequence of the definition of θ̂. To see (30) and (34), note that for any w such that ‖w‖_1 ≤ κ, we have |log(2D(X))| ≤ | Σ_{j≥1} w_j σ(u_j^T X + b_j) | ≤ κ. A similar argument gives the same bound for |log(2(1 − D(X)))|. This leads to sup_η | F((1−ε)P_θ + εQ, η) − F(P_θ, η) | ≤ 2κε, (35) which further implies (30) and (34). To summarize, we have derived that, with probability at least 1 − δ, F_{w,u,b}(P_θ, θ̂) ≤ 4κε + 2Cκ( √(p/n) + √(log(1/δ)/n) ), for all ‖w‖_1 ≤ κ, ‖u_j‖ ≤ 1 and b_j. Take w_1 = κ, w_j = 0 for all j > 1, u_1 = u for some unit vector u, and b_1 = −u^T θ, and we get f_{u^T(θ̂−θ)}(κ) ≤ 4κε + 2Cκ( √(p/n) + √(log(1/δ)/n) ), (36) where f_δ(t) = E log( 2/(1 + e^{−tσ(Z)}) ) + E log( 2/(1 + e^{tσ(Z+δ)}) ), (37) with Z ∼ N(0,1). Direct calculations give f′_δ(t) = E[ e^{−tσ(Z)}/(1 + e^{−tσ(Z)}) σ(Z) ] − E[ e^{tσ(Z+δ)}/(1 + e^{tσ(Z+δ)}) σ(Z+δ) ], f″_δ(t) = −E[ σ(Z)² e^{−tσ(Z)}/(1 + e^{−tσ(Z)})² ] − E[ σ(Z+δ)² e^{tσ(Z+δ)}/(1 + e^{tσ(Z+δ)})² ]. (38) Therefore, f_δ(0) = 0, f′_δ(0) = (1/2)( Eσ(Z) − Eσ(Z+δ) ), and f″_δ(t) ≥ −1/2. By the inequality f_δ(κ) ≥ f_δ(0) + κ f′_δ(0) − κ²/4, we have κ f′_δ(0) ≤ f_δ(κ) + κ²/4. In view of (36), we have (κ/2)( ∫ σ(z)φ(z)dz − ∫ σ(z + u^T(θ̂ − θ))φ(z)dz ) ≤ 4κε + 2Cκ( √(p/n) + √(log(1/δ)/n) ) + κ²/4. It is easy to see that, for the choices of σ(·), ∫ σ(z)φ(z)dz − ∫ σ(z+t)φ(z)dz is locally linear with respect to t. This implies that κ‖θ̂ − θ‖ = κ sup_{‖u‖=1} u^T(θ̂ − θ) ≲ κ( ε + √(p/n) + √(log(1/δ)/n) ) + κ². Therefore, with κ of order √(p/n) + ε, the proof is complete. Proof of Theorem 4.1. We use P_{θ,Σ,h} to denote the elliptical distribution EC(θ, Σ, h). Define F(P, (η, Γ, g)) = max_{‖w‖_1≤κ, u, b} F_{w,u,b}(P, (η, Γ, g)), where F_{w,u,b}(P, (η, Γ, g)) = E_P log D(X) + E_{EC(η,Γ,g)} log(1 − D(X)) + log 4, with D(x) = sigmoid( Σ_{j≥1} w_j σ(u_j^T x + b_j) ). Let P be the data-generating process that satisfies TV(P, P_{θ,Σ,h}) ≤ ε; then there exist probability distributions Q_1 and Q_2 such that P + εQ_1 = P_{θ,Σ,h} + εQ_2. The explicit construction of Q_1, Q_2 is given in the proof of Theorem 5.1 of Chen et al. (2018). This implies that | F(P, (η, Γ, g)) − F(P_{θ,Σ,h}, (η, Γ, g)) | ≤ sup_{‖w‖_1≤κ, u, b} | F_{w,u,b}(P, (η, Γ, g)) − F_{w,u,b}(P_{θ,Σ,h}, (η, Γ, g)) | = ε sup_{‖w‖_1≤κ, u, b} | E_{Q_2} log(2D(X)) − E_{Q_1} log(2D(X)) | ≤ 2κε. (39) Then, the same argument as in Theorem 3.2 (with (35) replaced by (39)) leads to the fact that, with probability at least 1 − δ, F_{w,u,b}(P_{θ,Σ,h}, (θ̂, Σ̂, ĥ)) ≤ 4κε + 2Cκ( √(p/n) + √(log(1/δ)/n) ), for all ‖w‖_1 ≤ κ, ‖u_j‖ ≤ 1 and b_j. Take w_1 = κ, w_j = 0 for all j > 1, u_1 = u/√(u^T Σ̂ u) for some unit vector u, and b_1 = −u^T θ/√(u^T Σ̂ u), and we get f_{u^T(θ̂−θ)/√(u^T Σ̂ u)}(κ) ≤ 4κε + 2Cκ( √(p/n) + √(log(1/δ)/n) ), where f_δ(t) = ∫ log( 2/(1 + e^{−tσ(∆s)}) ) h(s) ds + ∫ log( 2/(1 + e^{tσ(δ+s)}) ) ĥ(s) ds, with δ = u^T(θ̂ − θ)/√(u^T Σ̂ u) and ∆ = √(u^T Σ u)/√(u^T Σ̂ u). A similar argument to the proof of Theorem 3.2 gives (κ/2)( ∫ σ(∆s)h(s)ds − ∫ σ(δ + s)ĥ(s)ds ) ≤ 4κε + 2Cκ( √(p/n) + √(log(1/δ)/n) ) + κ²/4. Since ∫ σ(∆s)h(s)ds = 1/2 = ∫ σ(s)ĥ(s)ds, the above bound is equivalent to (κ/2)( H(0) − H(δ) ) ≤ 4κε + 2Cκ( √(p/n) + √(log(1/δ)/n) ) + κ²/4, where H(δ) = ∫ σ(δ + s)ĥ(s)ds.
The above bound also holds for (κ/2)( H(δ) − H(0) ) by a symmetric argument, and therefore the same bound holds for (κ/2)| H(δ) − H(0) |. Since H′(0) = ∫ σ(s)(1 − σ(s))ĥ(s)ds = 1, H(δ) is locally linear at δ = 0, which leads to the desired bound for δ = u^T(θ̂ − θ)/√(u^T Σ̂ u). Finally, since u^T Σ̂ u ≤ M, we get the bound for u^T(θ̂ − θ). The proof is complete by taking the supremum over all unit vectors u. Proof of Theorem A.1. We continue to use P_η to denote N(η, I_p). Define F(P, η) = sup_{D∈F_L^H(κ,τ,B)} F_D(P, η), with F_D(P, η) = E_P log D(X) + E_{N(η,I_p)} log(1 − D(X)) + log 4. Following the same argument as in the proof of Theorem 3.2 and using Lemma D.3, we have F_D(P_θ, θ̂) ≤ Cκ( ε + (2B)^{L−1} √((p log p)/n) + √(log(1/δ)/n) ), uniformly over D ∈ F_L^H(κ, τ, B), with probability at least 1 − δ. Choose w_1 = κ and w_j = 0 for all j > 1. For any unit vector ũ ∈ R^p, take u_{1h} = −u_{1(h+p)} = ũ_h for h = 1, ..., p and b_1 = −ũ^T θ. For h = 1, ..., p, set g_{1h}(x) = max(x_h, 0). For h = p+1, ..., 2p, set g_{1h}(x) = max(−x_{h−p}, 0). It is obvious that such u and b satisfy Σ_h u_{1h}² ≤ 2 and |b_1| ≤ ‖θ‖ ≤ √p ‖θ‖_∞ ≤ √(p log p). We need to show that both of the functions max(x, 0) and max(−x, 0) are elements of G_{L−1}^H(B). This can be proved by induction. It is obvious that max(x_h, 0), max(−x_h, 0) ∈ G_1^H(B) for any h = 1, ..., p. Suppose we have max(x_h, 0), max(−x_h, 0) ∈ G_l^H(B) for any h = 1, ..., p. Then, max( max(x_h, 0) − max(−x_h, 0), 0 ) = max(x_h, 0), max( max(−x_h, 0) − max(x_h, 0), 0 ) = max(−x_h, 0). Therefore, max(x_h, 0), max(−
1. What is the focus of the paper in terms of the robust estimation problem? 2. What are the main approaches in solving the problem, and how does the paper differ from them? 3. What is the interesting connection built by the paper, and how does it help in achieving the optimal rates? 4. Do you have any concerns or questions regarding the applicability of the approach in more general settings? 5. How do you assess the overall quality and contribution of the paper?
Review
Review This paper considers the robust estimation problem under Huber's ε-contamination model. This problem has been a hot topic in the theoretical statistics and theoretical computer science communities in the last 3 years. In the theoretical statistics community, the main approach is through depth functions: solving the robust estimation problem can be reduced to solving a min-max problem. While the formulation is clean and can achieve the optimal statistical rate, solving the min-max problem is computationally intractable in general. On the other hand, approaches from the TCS community are more involved and sometimes cannot achieve the optimal statistical rate (especially for general distributions). This paper tries to make the approach from the theoretical statistics community computationally tractable. The paper builds an interesting connection between f-GANs and depth functions. Importantly, the authors show that by carefully choosing the discriminator's neural network architecture and constraining the norms of the weight matrices, the generator achieves the optimal rates. This is an interesting theoretical discovery. My major question is whether this approach can be used to solve robust estimation problems in more general settings. For example, suppose we want to do robust mean estimation and the only assumption on P is that it is sub-Gaussian. Is it possible to design a generator-discriminator pair to solve this problem? The theorems in this paper only focus on the Gaussian case. Overall, I like this paper. It provides a new angle on a classical statistical problem. The computational issue has not been fully resolved yet. However, given recent progress in optimization for deep learning, it is quite possible that the optimization problem in this paper can be solved (approximately). Therefore, I recommend accepting.
ICLR
Title ROBUST ESTIMATION VIA GENERATIVE ADVERSARIAL NETWORKS Abstract Robust estimation under Huber’s -contamination model has become an important topic in statistics and theoretical computer science. Statistically optimal procedures such as Tukey’s median and other estimators based on depth functions are impractical because of their computational intractability. In this paper, we establish an intriguing connection between f -GANs and various depth functions through the lens of f -Learning. Similar to the derivation of f GANs, we show that these depth functions that lead to statistically optimal robust estimators can all be viewed as variational lower bounds of the total variation distance in the framework of f -Learning. This connection opens the door of computing robust estimators using tools developed for training GANs. In particular, we show in both theory and experiments that some appropriate structures of discriminator networks with hidden layers in GANs lead to statistically optimal robust location estimators for both Gaussian distribution and general elliptical distributions where first moment may not exist. 1 INTRODUCTION In the setting of Huber’s -contamination model (Huber, 1964; 1965), one has i.i.d observations X1, ..., Xn ∼ (1− )Pθ + Q, (1) and the goal is to estimate the model parameter θ. Under the data generating process (1), each observation has a 1 − probability to be drawn from Pθ and the other probability to be drawn from the contamination distributionQ. The presence of an unknown contamination distribution poses both statistical and computational challenges to the problem. For example, consider a normal mean estimation problem with Pθ = N(θ, Ip). Due to the contamination of data, the sample average, which is optimal when = 0, can be arbitrarily far away from the true mean if Q charges a positive probability at infinity. Moreover, even robust estimators such as coordinatewise median and geometric median are proved to be suboptimal under the setting of (1) (Chen et al., 2018; Diakonikolas et al., 2016a; Lai et al., 2016). The search for both statistically optimal and computationally feasible procedures has become a fundamental problem in areas including statistics and computer science. For the normal mean estimation problem, it has been shown in Chen et al. (2018) that the minimax rate with respect to the squared `2 loss is pn ∨ 2, and is achieved by Tukey’s median (Tukey, 1975). Despite the statistical optimality of Tukey’s median, its computation is not tractable. In fact, even an approximate algorithm takes O(eCp) in time (Amenta et al., 2000; Chan, 2004; Rousseeuw & Struyf, 1998). Recent developments in theoretical computer science are focused on the search of computationally tractable algorithms for estimating θ under Huber’s -contamination model (1). The success of the efforts started from two fundamental papers Diakonikolas et al. (2016a); Lai et al. (2016), where two different but related computational strategies “iterative filtering” and “dimension halving” were proposed to robustly estimate the normal mean. These algorithms can provably achieve the minimax rate pn ∨ 2 up to a poly-logarithmic factor in polynomial time. The main idea behind the two methods is a critical fact that a good robust moment estimator can be certified efficiently by higher moments. 
This idea was later further extended (Diakonikolas et al., 2017; Du et al., 2017; Diakonikolas et al., 2016b; 2018a;c;b; Kothari et al., 2018) to develop robust and computable procedures for various other problems. However, many of the computationally feasible procedures for robust mean estimation in the literature rely on knowledge of the covariance matrix, and sometimes on knowledge of the contamination proportion. Even though these assumptions can be relaxed, nontrivial modifications of the algorithms are required for such extensions, and the statistical error rates may also be affected.

Compared with these computationally feasible procedures proposed in the recent literature for robust estimation, Tukey's median (9) and other depth-based estimators (Rousseeuw & Hubert, 1999; Mizera, 2002; Zhang, 2002; Mizera & Müller, 2004; Paindaveine & Van Bever, 2017) have some indispensable advantages in terms of their statistical properties. First, the depth-based estimators have clear objective functions that can be interpreted from the perspective of projection pursuit (Mizera, 2002). Second, the depth-based procedures are adaptive to unknown nuisance parameters in the models, such as covariance structures, contamination proportion, and error distributions (Chen et al., 2018; Gao, 2017). Last but not least, Tukey's depth and other depth functions are mostly designed for robust quantile estimation, while the recent advancements in the theoretical computer science literature are all focused on robust moment estimation. Although this is not an issue when it comes to normal mean estimation, the difference is fundamental for robust estimation under general settings such as elliptical distributions, where moments do not necessarily exist.

Given the desirable statistical properties discussed above, this paper is focused on the development of computational strategies for depth-like procedures. Our key observation is that robust estimators that are maximizers of depth functions, including halfspace depth, regression depth, and covariance matrix depth, can all be derived under the framework of $f$-GAN (Nowozin et al., 2016). As a result, these depth-based estimators can be viewed as minimizers of variational lower bounds of the total variation distance between the empirical measure and the model distribution (Proposition 2.1). This observation allows us to leverage recent developments in the deep learning literature to compute these variational lower bounds through neural network approximations. Our theoretical results give insights on how to choose appropriate neural network classes that lead to minimax optimal robust estimation under Huber's $\epsilon$-contamination model. In particular, Theorems 3.1 and 3.2 characterize the networks that can robustly estimate the Gaussian mean by TV-GAN and JS-GAN, respectively; Theorem 4.1 is an extension to robust location estimation under the class of elliptical distributions, which includes the Cauchy distribution whose mean does not exist. Numerical experiments in Section 5 are provided to show the success of these GANs.

2 ROBUST ESTIMATION AND $f$-GAN

We start with the definition of $f$-divergence (Csiszár, 1964; Ali & Silvey, 1966). Given a strictly convex function $f$ that satisfies $f(1) = 0$, the $f$-divergence between two probability distributions $P$ and $Q$ is defined by

$D_f(P\|Q) = \int f\left(\frac{p}{q}\right) dQ$.  (2)

Here, we use $p(\cdot)$ and $q(\cdot)$ to stand for the density functions of $P$ and $Q$ with respect to some common dominating measure. For a fully rigorous definition, see Polyanskiy & Wu (2017).
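As a sanity check of definition (2), the following sketch numerically evaluates $D_f$ with the total-variation generator $f(x) = (x-1)_+$ (introduced in Section 2.2 below) for two univariate Gaussians, and compares it against $\frac{1}{2}\int |p-q|$; the grid-based quadrature is just an illustrative shortcut.

```python
import numpy as np
from scipy.stats import norm

def f_divergence(p_pdf, q_pdf, grid, f):
    """Grid approximation of D_f(P || Q) = integral f(p/q) dQ, as in (2)."""
    dx = grid[1] - grid[0]
    return np.sum(f(p_pdf(grid) / q_pdf(grid)) * q_pdf(grid)) * dx

grid = np.linspace(-12.0, 12.0, 200001)
p_pdf, q_pdf = norm(0, 1).pdf, norm(1, 1).pdf
f_tv = lambda x: np.maximum(x - 1.0, 0.0)     # f(x) = (x - 1)_+
tv_via_f = f_divergence(p_pdf, q_pdf, grid, f_tv)
tv_direct = 0.5 * np.sum(np.abs(p_pdf(grid) - q_pdf(grid))) * (grid[1] - grid[0])
print(tv_via_f, tv_direct)   # both ~ 2 * Phi(1/2) - 1 ~ 0.3829
```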
Let $f^*$ be the convex conjugate of $f$; that is, $f^*(t) = \sup_{u \in \mathrm{dom}_f}(ut - f(u))$. A variational lower bound of (2) is

$D_f(P\|Q) \ge \sup_{T \in \mathcal{T}} \left[ E_P T(X) - E_Q f^*(T(X)) \right]$.  (3)

Note that the inequality (3) holds for any class $\mathcal{T}$, and it becomes an equality whenever the class $\mathcal{T}$ contains the function $f'(p/q)$ (Nguyen et al., 2010). For notational simplicity, we also use $f'$ for an arbitrary element of the subdifferential when the derivative does not exist. With i.i.d. observations $X_1, \ldots, X_n \sim P$, the variational lower bound (3) naturally leads to the following learning method

$\hat{P} = \mathrm{argmin}_{Q \in \mathcal{Q}} \sup_{T \in \mathcal{T}} \left[ \frac{1}{n}\sum_{i=1}^n T(X_i) - E_Q f^*(T(X)) \right]$.  (4)

The formula (4) is a powerful and general way to learn the distribution $P$ from its i.i.d. observations. It is known as $f$-GAN (Nowozin et al., 2016), an extension of GAN (Goodfellow et al., 2014), which stands for generative adversarial networks. The idea is to find a $\hat{P}$ so that the best discriminator $T$ in the class $\mathcal{T}$ cannot tell the difference between $\hat{P}$ and the empirical distribution $\frac{1}{n}\sum_{i=1}^n \delta_{X_i}$.

2.1 $f$-LEARNING: A UNIFIED FRAMEWORK

Our $f$-Learning framework is based on a special case of the variational lower bound (3). That is,

$D_f(P\|Q) \ge \sup_{\tilde{Q} \in \tilde{\mathcal{Q}}_Q} \left[ E_P f'\left(\frac{\tilde{q}(X)}{q(X)}\right) - E_Q f^*\left( f'\left(\frac{\tilde{q}(X)}{q(X)}\right)\right) \right]$,  (5)

where $\tilde{q}(\cdot)$ stands for the density function of $\tilde{Q}$. Note that here we allow the class $\tilde{\mathcal{Q}}_Q$ to depend on the distribution $Q$ in the second argument of $D_f(P\|Q)$. Comparing (5) with (3), it is easy to realize that (5) is a special case of (3) with

$\mathcal{T} = \mathcal{T}_Q = \left\{ f'\left(\frac{\tilde{q}}{q}\right) : \tilde{q} \in \tilde{\mathcal{Q}}_Q \right\}$.  (6)

Moreover, the inequality (5) becomes an equality as long as $P \in \tilde{\mathcal{Q}}_Q$. The sample version of (5) leads to the following learning method

$\hat{P} = \mathrm{argmin}_{Q \in \mathcal{Q}} \sup_{\tilde{Q} \in \tilde{\mathcal{Q}}_Q} \left[ \frac{1}{n}\sum_{i=1}^n f'\left(\frac{\tilde{q}(X_i)}{q(X_i)}\right) - E_Q f^*\left( f'\left(\frac{\tilde{q}(X)}{q(X)}\right)\right) \right]$.  (7)

The learning method (7) will be referred to as $f$-Learning in the sequel. It is a very general framework that covers many important learning procedures as special cases. For example, consider the special case where $\tilde{\mathcal{Q}}_Q = \tilde{\mathcal{Q}}$ is independent of $Q$, $\mathcal{Q} = \tilde{\mathcal{Q}}$, and $f(x) = x\log x$. Direct calculations give $f'(x) = \log x + 1$ and $f^*(t) = e^{t-1}$. Therefore, (7) becomes

$\hat{P} = \mathrm{argmin}_{Q \in \mathcal{Q}} \sup_{\tilde{Q} \in \mathcal{Q}} \frac{1}{n}\sum_{i=1}^n \log\frac{\tilde{q}(X_i)}{q(X_i)} = \mathrm{argmax}_{q \in \mathcal{Q}} \frac{1}{n}\sum_{i=1}^n \log q(X_i)$,

which is the maximum likelihood estimator (MLE).

2.2 TV-LEARNING AND DEPTH-BASED ESTIMATORS

An important generator $f$ that we will discuss here is $f(x) = (x-1)_+$. This leads to the total variation distance $D_f(P\|Q) = \frac{1}{2}\int |p-q|$. With $f'(x) = \mathbb{I}\{x \ge 1\}$ and $f^*(t) = t\,\mathbb{I}\{0 \le t \le 1\}$, the TV-Learning is given by

$\hat{P} = \mathrm{argmin}_{Q \in \mathcal{Q}} \sup_{\tilde{Q} \in \tilde{\mathcal{Q}}_Q} \left[ \frac{1}{n}\sum_{i=1}^n \mathbb{I}\left\{\frac{\tilde{q}(X_i)}{q(X_i)} \ge 1\right\} - Q\left(\frac{\tilde{q}}{q} \ge 1\right) \right]$.  (8)

A closely related idea was previously explored by Yatracos (1985) and Devroye & Lugosi (2012). The following proposition shows that when $\tilde{\mathcal{Q}}_Q$ approaches $Q$ in some neighborhood, TV-Learning leads to robust estimators that are defined as the maximizers of various depth functions, including Tukey's depth, regression depth, and covariance depth.

Proposition 2.1. The TV-Learning (8) includes the following special cases:

1. Tukey's halfspace depth: Take $\mathcal{Q} = \{N(\eta, I_p) : \eta \in \mathbb{R}^p\}$ and $\tilde{\mathcal{Q}}_\eta = \{N(\tilde{\eta}, I_p) : \|\tilde{\eta} - \eta\| \le r\}$. As $r \to 0$, (8) becomes

$\hat{\theta} = \mathrm{argmax}_{\eta \in \mathbb{R}^p} \inf_{\|u\|=1} \frac{1}{n}\sum_{i=1}^n \mathbb{I}\{u^T(X_i - \eta) \ge 0\}$.  (9)

2. Regression depth: Take $\mathcal{Q} = \{P_{y,X} = P_{y|X}P_X : P_{y|X} = N(X^T\eta, 1), \eta \in \mathbb{R}^p\}$ and $\tilde{\mathcal{Q}}_\eta = \{P_{y,X} = P_{y|X}P_X : P_{y|X} = N(X^T\tilde{\eta}, 1), \|\tilde{\eta} - \eta\| \le r\}$. As $r \to 0$, (8) becomes

$\hat{\theta} = \mathrm{argmax}_{\eta \in \mathbb{R}^p} \inf_{\|u\|=1} \frac{1}{n}\sum_{i=1}^n \mathbb{I}\{u^TX_i(y_i - X_i^T\eta) \ge 0\}$.  (10)

3.
Covariance matrix depth: Take $\mathcal{Q} = \{N(0, \Gamma) : \Gamma \in \mathcal{E}_p\}$, where $\mathcal{E}_p$ stands for the class of $p \times p$ covariance matrices, and $\tilde{\mathcal{Q}}_\Gamma = \{N(0, \tilde{\Gamma}) : \tilde{\Gamma}^{-1} = \Gamma^{-1} + \tilde{r}uu^T \in \mathcal{E}_p, |\tilde{r}| \le r, \|u\| = 1\}$. As $r \to 0$, (8) becomes

$\hat{\Sigma} = \mathrm{argmin}_{\Gamma \in \mathcal{E}_p} \sup_{\|u\|=1} \left[ \left( \frac{1}{n}\sum_{i=1}^n \mathbb{I}\{|u^TX_i|^2 \le u^T\Gamma u\} - \mathbb{P}(\chi^2_1 \le 1) \right) \vee \left( \frac{1}{n}\sum_{i=1}^n \mathbb{I}\{|u^TX_i|^2 > u^T\Gamma u\} - \mathbb{P}(\chi^2_1 > 1) \right) \right]$.  (11)

The formula (9) is recognized as Tukey's median, the maximizer of Tukey's halfspace depth. A traditional understanding of Tukey's median is that (9) maximizes the halfspace depth (Donoho & Gasko, 1992), so that $\hat{\theta}$ is close to the centers of all one-dimensional projections of the data. In the $f$-Learning framework, $N(\hat{\theta}, I_p)$ is understood to be the minimizer of a variational lower bound of the total variation distance. The formula (10) gives the estimator that maximizes the regression depth proposed by Rousseeuw & Hubert (1999). It is worth noting that the derivation of (10) does not depend on the marginal distribution $P_X$ in the linear regression model. Finally, (11) is related to the covariance matrix depth (Zhang, 2002; Chen et al., 2018; Paindaveine & Van Bever, 2017). All of the estimators (9), (10) and (11) are proved to achieve the minimax rate for the corresponding problems under Huber's $\epsilon$-contamination model (Chen et al., 2018; Gao, 2017).

2.3 FROM $f$-LEARNING TO $f$-GAN

The connection to various depth functions shows the importance of TV-Learning in robust estimation. However, it is well known that depth-based estimators are very hard to compute (Amenta et al., 2000; van Kreveld et al., 1999; Rousseeuw & Struyf, 1998), which limits their applications to very low-dimensional problems. On the other hand, the general $f$-GAN framework (4) has been successfully applied to learn complex distributions and images in practice (Goodfellow et al., 2014; Radford et al., 2015; Salimans et al., 2016). The major difference that gives the computational advantage to $f$-GAN is its flexibility in designing the discriminator class $\mathcal{T}$ using neural networks, compared with the pre-specified choice (6) in $f$-Learning. While $f$-Learning provides a unified perspective for understanding various depth-based procedures in robust estimation, we can step back into the more general $f$-GAN for its computational advantages, and design efficient computational strategies.

3 ROBUST MEAN ESTIMATION VIA GAN

In this section, we focus on the problem of robust mean estimation under Huber's $\epsilon$-contamination model. Our goal is to reveal how the choice of the class of discriminators affects robustness and statistical optimality under the simplest possible setting. That is, we have i.i.d. observations $X_1, \ldots, X_n \sim (1-\epsilon)N(\theta, I_p) + \epsilon Q$, and we need to estimate the unknown location $\theta \in \mathbb{R}^p$ from the contaminated data. Our goal is to achieve the minimax rate $\frac{p}{n} \vee \epsilon^2$ with respect to the squared $\ell_2$ loss uniformly over all $\theta \in \mathbb{R}^p$ and all $Q$.

3.1 RESULTS FOR TV-GAN

We start with the total variation GAN (TV-GAN) with $f(x) = (x-1)_+$ in (4). For the Gaussian location family, (4) can be written as

$\hat{\theta} = \mathrm{argmin}_{\eta \in \mathbb{R}^p} \max_{D \in \mathcal{D}} \left[ \frac{1}{n}\sum_{i=1}^n D(X_i) - E_{N(\eta, I_p)} D(X) \right]$,  (12)

with $T(x) = D(x)$ in (4). Now we need to specify the class of discriminators $\mathcal{D}$ to solve the classification problem between $N(\eta, I_p)$ and the empirical distribution $\frac{1}{n}\sum_{i=1}^n \delta_{X_i}$. One of the simplest discriminator classes is logistic regression,

$\mathcal{D} = \{ D(x) = \mathrm{sigmoid}(w^Tx + b) : w \in \mathbb{R}^p, b \in \mathbb{R} \}$.  (13)

With $D(x) = \mathrm{sigmoid}(w^Tx + b) = (1 + e^{-w^Tx - b})^{-1}$ in (13), the procedure (12) can be viewed as a smoothed version of TV-Learning (8).
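Before the detailed analysis, here is a minimal PyTorch sketch of the TV-GAN objective (12) with the logistic-regression discriminator (13); the alternating SGD scheme, learning rates, and step counts are illustrative assumptions, not the paper's tuned procedure.

```python
import torch

def tv_gan_mean(X, steps=2000, lr_d=0.05, lr_g=0.02):
    """Sketch of (12)-(13): generator G_eta(Z) = Z + eta, discriminator
    D(x) = sigmoid(w^T x + b). All hyper-parameters are illustrative."""
    n, p = X.shape
    eta = X.median(dim=0).values.clone().requires_grad_(True)  # robust init
    w = torch.zeros(p, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt_d = torch.optim.SGD([w, b], lr=lr_d)
    opt_g = torch.optim.SGD([eta], lr=lr_g)
    for _ in range(steps):
        fake = torch.randn(n, p) + eta.detach()
        # discriminator ascends the TV objective in (12)
        obj = (torch.sigmoid(X @ w + b).mean()
               - torch.sigmoid(fake @ w + b).mean())
        opt_d.zero_grad(); (-obj).backward(); opt_d.step()
        # generator moves eta so the fake sample becomes indistinguishable
        fake = torch.randn(n, p) + eta
        loss_g = -torch.sigmoid(fake @ w.detach() + b.detach()).mean()
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return eta.detach()

torch.manual_seed(0)
X = torch.randn(2000, 5); X[:400] += 5.0   # 20% shifted contamination
print(tv_gan_mean(X))                       # hopefully close to the zero vector
```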
To be specific, the sigmoid function $\mathrm{sigmoid}(w^Tx + b)$ tends to an indicator function as $\|w\| \to \infty$, which leads to a procedure very similar to (9). In fact, the class (13) is richer than the one used in (9), and thus (12) can be understood as the minimizer of a sharper variational lower bound than that of (9).

Theorem 3.1. Assume $\frac{p}{n} + \epsilon^2 \le c$ for some sufficiently small constant $c > 0$. With i.i.d. observations $X_1, \ldots, X_n \sim (1-\epsilon)N(\theta, I_p) + \epsilon Q$, the estimator $\hat{\theta}$ defined by (12) satisfies

$\|\hat{\theta} - \theta\|^2 \le C\left(\frac{p}{n} \vee \epsilon^2\right)$,

with probability at least $1 - e^{-C'(p + n\epsilon^2)}$ uniformly over all $\theta \in \mathbb{R}^p$ and all $Q$. The constants $C, C' > 0$ are universal.

Though TV-GAN can achieve the minimax rate $\frac{p}{n} \vee \epsilon^2$ under Huber's contamination model, it may suffer from optimization difficulties, especially when the distributions $Q$ and $N(\theta, I_p)$ are far away from each other, as shown in Figure 1.

3.2 RESULTS FOR JS-GAN

Given the intractable optimization property of TV-GAN, we next turn to the Jensen-Shannon GAN (JS-GAN) with $f(x) = x\log x - (x+1)\log\frac{x+1}{2}$. The estimator is defined by

$\hat{\theta} = \mathrm{argmin}_{\eta \in \mathbb{R}^p} \max_{D \in \mathcal{D}} \left[ \frac{1}{n}\sum_{i=1}^n \log D(X_i) + E_{N(\eta, I_p)} \log(1 - D(X)) \right] + \log 4$,  (14)

with $T(x) = \log D(x)$ in (4). This is exactly the original GAN (Goodfellow et al., 2014) specialized to the normal mean estimation problem. The advantages of JS-GAN over other forms of GAN have been studied extensively in the literature (Lucic et al., 2017; Kurach et al., 2018).

Unlike TV-GAN, our experimental results show that (14) with the logistic regression discriminator class (13) is not robust to contamination. However, if we replace (13) by a neural network class with one or more hidden layers, the estimator will be robust and will also work very well numerically. To understand why and how the class of discriminators affects the robustness property of JS-GAN, we introduce a new concept called restricted Jensen-Shannon divergence. Let $g: \mathbb{R}^p \to \mathbb{R}^d$ be a function that maps a $p$-dimensional observation to a $d$-dimensional feature space. The restricted Jensen-Shannon divergence between two probability distributions $P$ and $Q$ with respect to the feature $g$ is defined as

$JS_g(P, Q) = \max_{w \in \mathcal{W}} \left[ E_P \log \mathrm{sigmoid}(w^Tg(X)) + E_Q \log(1 - \mathrm{sigmoid}(w^Tg(X))) \right] + \log 4$.

In other words, $P$ and $Q$ are distinguished by a logistic regression classifier that uses the feature $g(X)$. It is easy to see that $JS_g(P, Q)$ is a variational lower bound of the original Jensen-Shannon divergence. The key property of $JS_g(P, Q)$ is given by the following proposition.

Proposition 3.1. Assume $\mathcal{W}$ is a convex set that contains an open neighborhood of $0$. Then, $JS_g(P, Q) = 0$ if and only if $E_Pg(X) = E_Qg(X)$.

The proposition asserts that $JS_g(\cdot, \cdot)$ cannot distinguish $P$ and $Q$ if the feature $g(X)$ has the same expected value under the two distributions. This generalized moment matching effect has also been studied by Liu et al. (2017) for general $f$-GANs. However, the linear discriminator class considered in Liu et al. (2017) is parameterized in a different way compared with the discriminator class here. When we apply Proposition 3.1 to robust mean estimation, the JS-GAN is trying to match the values of $\frac{1}{n}\sum_{i=1}^n g(X_i)$ and $E_{N(\eta, I_p)}g(X)$ for the feature $g(X)$ used in the logistic regression classifier. This explains what we observed in our numerical experiments. A neural net without any hidden layer is equivalent to a logistic regression with a linear feature $g(X) = (X^T, 1)^T \in \mathbb{R}^{p+1}$. Therefore, whenever $\eta = \frac{1}{n}\sum_{i=1}^n X_i$, we have $JS_g\left(\frac{1}{n}\sum_{i=1}^n \delta_{X_i}, N(\eta, I_p)\right) = 0$, which implies that the sample mean is a global minimizer of (14).
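The following small experiment illustrates Proposition 3.1 numerically (and previews the point made next): with a linear feature, the logistic discriminator cannot distinguish a contaminated sample from a Gaussian centered at its own sample mean, while a nonlinear (here quadratic) feature can. The toy data and feature choices are our illustrative assumptions.

```python
import torch
import torch.nn.functional as Fnn

def restricted_js(xp, xq, g, steps=1000, lr=0.05):
    """Estimate JS_g(P, Q) from samples: logistic regression on feature g,
    trained by gradient ascent in w (the max in the definition above)."""
    fp, fq = g(xp), g(xq)
    w = torch.zeros(fp.shape[1], requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        obj = (Fnn.logsigmoid(fp @ w).mean()         # E_P log sigmoid(w^T g)
               + Fnn.logsigmoid(-(fq @ w)).mean()    # E_Q log(1 - sigmoid(w^T g))
               + torch.log(torch.tensor(4.0)))
        opt.zero_grad(); (-obj).backward(); opt.step()
    return obj.item()

torch.manual_seed(0)
xp = torch.randn(5000, 3); xp[:1000] += 4.0        # contaminated sample ~ P
xq = torch.randn(5000, 3) + xp.mean(0)             # Q = N(sample mean, I_p)
linear = lambda x: torch.cat([x, torch.ones(len(x), 1)], dim=1)
quad = lambda x: torch.cat([x, x ** 2, torch.ones(len(x), 1)], dim=1)
print(restricted_js(xp, xq, linear))   # ~ 0: linear features are matched
print(restricted_js(xp, xq, quad))     # clearly > 0: second moments differ
```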
On the other hand, a neural net with at least one hidden layer involves a nonlinear feature function $g(X)$, which is the key that leads to the robustness of (14). We will show rigorously that a neural net with one hidden layer is sufficient to make (14) robust and optimal. Consider the following class of discriminators:

$\mathcal{D} = \left\{ D(x) = \mathrm{sigmoid}\left(\sum_{j \ge 1} w_j\sigma(u_j^Tx + b_j)\right) : \sum_{j \ge 1} |w_j| \le \kappa, u_j \in \mathbb{R}^p, b_j \in \mathbb{R} \right\}$.  (15)

The class (15) consists of two-layer neural network functions. While the dimension of the input layer is $p$, the dimension of the hidden layer can be arbitrary, as long as the weights have a bounded $\ell_1$ norm. The nonlinear activation function $\sigma(\cdot)$ is allowed to take: 1) indicator: $\sigma(x) = \mathbb{I}\{x \ge 1\}$; 2) sigmoid: $\sigma(x) = \frac{1}{1+e^{-x}}$; 3) ramp: $\sigma(x) = \max(\min(x + 1/2, 1), 0)$. Other bounded activation functions are also possible, but we do not exhaustively list them. The rectified linear unit (ReLU) will be studied in Appendix A.

Theorem 3.2. Consider the estimator $\hat{\theta}$ defined by (14) with $\mathcal{D}$ specified by (15). Assume $\frac{p}{n} + \epsilon^2 \le c$ for some sufficiently small constant $c > 0$, and set $\kappa = O\left(\sqrt{\frac{p}{n}} + \epsilon\right)$. With i.i.d. observations $X_1, \ldots, X_n \sim (1-\epsilon)N(\theta, I_p) + \epsilon Q$, we have

$\|\hat{\theta} - \theta\|^2 \le C\left(\frac{p}{n} \vee \epsilon^2\right)$,

with probability at least $1 - e^{-C'(p + n\epsilon^2)}$ uniformly over all $\theta \in \mathbb{R}^p$ and all $Q$. The constants $C, C' > 0$ are universal.

4 ELLIPTICAL DISTRIBUTIONS

An advantage of Tukey's median (9) is that it leads to optimal robust location estimation under general elliptical distributions such as the Cauchy distribution, whose mean does not exist. In this section, we show that JS-GAN shares the same property. A random vector $X \in \mathbb{R}^p$ follows an elliptical distribution if it admits a representation

$X = \theta + \xi AU$,

where $U$ is uniformly distributed on the unit sphere $\{u \in \mathbb{R}^p : \|u\| = 1\}$ and $\xi \ge 0$ is a random variable independent of $U$ that determines the shape of the elliptical distribution (Fang, 2017). The center and the scatter matrix are $\theta$ and $\Sigma = AA^T$. For a unit vector $v$, let the density function of $\xi v^TU$ be $h$. Note that $h$ is independent of $v$ because of the symmetry of $U$. Then, there is a one-to-one relation between the distribution of $\xi$ and $h$, and thus the triplet $(\theta, \Sigma, h)$ fully parametrizes an elliptical distribution. Note that $h$ and $\Sigma = AA^T$ are not identifiable, because $\xi A = (c\xi)(c^{-1}A)$ for any $c > 0$. Therefore, without loss of generality, we can restrict $h$ to be a member of the following class:

$\mathcal{H} = \left\{ h : h(t) = h(-t), h \ge 0, \int h = 1, \int \sigma(t)(1 - \sigma(t))h(t)\,dt = 1 \right\}$.

This makes the parametrization $(\theta, \Sigma, h)$ of an elliptical distribution fully identifiable, and we use $EC(\theta, \Sigma, h)$ to denote an elliptical distribution parametrized in this way. The JS-GAN estimator is defined as

$(\hat{\theta}, \hat{\Sigma}, \hat{h}) = \mathrm{argmin}_{\eta \in \mathbb{R}^p, \Gamma \in \mathcal{E}_p(M), g \in \mathcal{H}} \max_{D \in \mathcal{D}} \left[ \frac{1}{n}\sum_{i=1}^n \log D(X_i) + E_{EC(\eta, \Gamma, g)}\log(1 - D(X)) \right] + \log 4$,  (16)

where $\mathcal{E}_p(M)$ is the set of all positive semi-definite matrices with spectral norm bounded by $M$.

Theorem 4.1. Consider the estimator $\hat{\theta}$ defined above with $\mathcal{D}$ specified by (15). Assume $M = O(1)$, $\frac{p}{n} + \epsilon^2 \le c$ for some sufficiently small constant $c > 0$, and set $\kappa = O\left(\sqrt{\frac{p}{n}} + \epsilon\right)$. With i.i.d. observations $X_1, \ldots, X_n \sim (1-\epsilon)EC(\theta, \Sigma, h) + \epsilon Q$, we have

$\|\hat{\theta} - \theta\|^2 \le C\left(\frac{p}{n} \vee \epsilon^2\right)$,

with probability at least $1 - e^{-C'(p + n\epsilon^2)}$ uniformly over all $\theta \in \mathbb{R}^p$, $\Sigma \in \mathcal{E}_p(M)$ and all $Q$. The constants $C, C' > 0$ are universal.

Remark 4.1. The result of Theorem 4.1 also holds (and is proved) under the strong contamination model (Diakonikolas et al., 2016a). That is, we have i.i.d. observations $X_1, \ldots, X_n \sim P$ for some $P$ satisfying $TV(P, EC(\theta, \Sigma, h)) \le \epsilon$. See its proof in Appendix D.2.
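As a concrete reading of the class (15), here is one way to implement such a discriminator in PyTorch, with the $\ell_1$ constraint on the output weights enforced by projection after each gradient step; the projection-based handling of the constraint is our assumption, not a prescription from the paper.

```python
import torch
import torch.nn as nn

class L1Discriminator(nn.Module):
    """One-hidden-layer discriminator in the spirit of (15):
    D(x) = sigmoid(sum_j w_j * sigmoid(u_j^T x + b_j)), with ||w||_1 <= kappa.
    Rescaling onto the l1 ball after each step is an illustrative choice."""
    def __init__(self, p, hidden, kappa):
        super().__init__()
        self.hidden = nn.Linear(p, hidden)   # holds the (u_j, b_j)
        self.w = nn.Parameter(torch.zeros(hidden))
        self.kappa = kappa

    def forward(self, x):
        return torch.sigmoid(torch.sigmoid(self.hidden(x)) @ self.w)

    @torch.no_grad()
    def project(self):
        norm = self.w.abs().sum()
        if norm > self.kappa:                # project w back onto the l1 ball
            self.w.mul_(self.kappa / norm)
```

A training loop would call `project()` right after each discriminator update so that the iterates stay inside the class (15).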
Note that Theorem 4.1 guarantees the same convergence rate as in the Gaussian case for all elliptical distributions. This even includes the multivariate Cauchy, whose mean does not exist. Therefore, the location estimator (16) is fundamentally different from Diakonikolas et al. (2016a); Lai et al. (2016), which are designed only for robust mean estimation. We will show such a difference in our numerical results. To achieve rate-optimality for robust location estimation under general elliptical distributions, the estimator (16) differs from (14) only in the generator class. They share the same discriminator class (15). This underlines an important principle for designing GAN estimators: the overall statistical complexity of the estimator is determined only by the discriminator class. The estimator (16) also outputs $(\hat{\Sigma}, \hat{h})$, but we do not claim any theoretical property for $(\hat{\Sigma}, \hat{h})$ in this paper. This will be systematically studied in a future project.

5 NUMERICAL EXPERIMENTS

In this section, we give extensive numerical studies of robust mean estimation via GAN. After introducing the implementation details in Section 5.1, we verify our theoretical results on minimax estimation with both TV-GAN and JS-GAN in Section 5.2. A comparison with other methods for robust mean estimation in the literature is given in Section 5.3. The effects of various network structures are studied in Section 5.4. Adaptation to unknown covariance is studied in Section 5.5. In all these cases, we assume i.i.d. observations are drawn from $(1-\epsilon)N(0_p, I_p) + \epsilon Q$ with $\epsilon$ and $Q$ to be specified. Finally, adaptation to elliptical distributions is studied in Section 5.6.

5.1 IMPLEMENTATIONS

We adopt the standard algorithmic framework of $f$-GANs (Nowozin et al., 2016) for the implementation of JS-GAN and TV-GAN for robust mean estimation. In particular, the generator for mean estimation is $G_\eta(Z) = Z + \eta$ with $Z \sim N(0_p, I_p)$; the discriminator $D$ is a multilayer perceptron (MLP), with each layer consisting of a linear map and a sigmoid activation function; the number of nodes will vary across experiments, as specified below. Details related to algorithms, tuning, critical hyper-parameters, structures of the discriminator networks, and other training tricks for stabilization and acceleration are discussed in Appendix B.1. A PyTorch implementation is available at https://github.com/zhuwzh/Robust-GAN-Center.

5.2 NUMERICAL SUPPORT FOR THE MINIMAX RATES

We verify the minimax rates achieved by TV-GAN (Theorem 3.1) and JS-GAN (Theorem 3.2) via numerical experiments. The two main scenarios we consider here are $\sqrt{p/n} < \epsilon$ and $\sqrt{p/n} > \epsilon$, where in both cases various types of contamination distributions $Q$ are considered. Specifically, the choice of contamination distributions $Q$ includes $N(\mu * 1_p, I_p)$ with $\mu$ ranging in $\{0.2, 0.5, 1, 5\}$, $N(0.5 * 1_p, \Sigma)$, and $\mathrm{Cauchy}(\tau * 1_p)$. Details of the construction of the covariance matrix $\Sigma$ are given in Appendix B.2. The distribution $\mathrm{Cauchy}(\tau * 1_p)$ is obtained by combining $p$ independent one-dimensional standard Cauchy distributions with location parameter $\tau_j = 0.5$. The main experimental results are summarized in Figure 2, where the $\ell_2$ error we present is the maximum error among all choices of $Q$; detailed numerical results can be found in Tables 7, 8 and 9 in the Appendix. We separately explore the relation between the error and each of $\epsilon$, $\sqrt{p}$, and $1/\sqrt{n}$, with the other two parameters fixed. The study of the relation between the $\ell_2$ error and $\epsilon$ is in the regime $\sqrt{p/n} < \epsilon$, so that $\epsilon$ dominates the minimax rate.
The scenario $\sqrt{p/n} > \epsilon$ is considered in the study of the effects of $\sqrt{p}$ and $1/\sqrt{n}$. As shown in Figure 2, the errors are approximately linear in the corresponding parameters in all cases, which empirically verifies the conclusions of Theorems 3.1 and 3.2.

5.3 COMPARISONS WITH OTHER METHODS

We perform additional experiments to compare with other methods, including dimension halving (Lai et al., 2016) and iterative filtering (Diakonikolas et al., 2017), under various settings. We emphasize that our method does not require any knowledge of the nuisance parameters, such as the contamination proportion $\epsilon$. Tuning GAN is only a matter of optimization, and one can tune parameters based on the objective function alone. Table 1 shows the performances of JS-GAN, TV-GAN, dimension halving, and iterative filtering. The network structure, for both JS-GAN and TV-GAN, has one hidden layer with 20 hidden units when the sample size is 50,000 and 2 hidden units when the sample size is 5,000. The critical hyper-parameters we apply are given in the Appendix, and it turns out that the choice of hyper-parameters is robust across different models when the network structures are the same. To summarize, our method outperforms the other algorithms in most cases. TV-GAN is good at cases where $Q$ and $N(0_p, I_p)$ are non-separable, but fails when $Q$ is far away from $N(0_p, I_p)$ due to the optimization issues discussed in Section 3.1 (Figure 1). On the other hand, JS-GAN stably achieves the lowest error in separable cases and also shows competitive performance in non-separable ones.

5.4 NETWORK STRUCTURES

We further study the performance of JS-GAN with various structures of neural networks. The main observation is that tuning networks with one hidden layer becomes difficult as the dimension grows (e.g., $p \ge 200$), while a deeper network can significantly improve the situation, perhaps by improving the optimization landscape. Some experimental results are given in Table 2. On the other hand, one hidden layer performs no worse than deeper networks when the dimension is not very large (e.g., $p \le 100$). More experiments are given in Appendix B.4. Additional theoretical results for deep neural nets are given in Appendix A.

5.5 ADAPTATION TO UNKNOWN COVARIANCE

The robust mean estimator constructed through JS-GAN can easily be made adaptive to an unknown covariance structure, which is a special case of (16). We define

$(\hat{\theta}, \hat{\Sigma}) = \mathrm{argmin}_{\eta \in \mathbb{R}^p, \Gamma \in \mathcal{E}_p} \max_{D \in \mathcal{D}} \left[ \frac{1}{n}\sum_{i=1}^n \log D(X_i) + E_{N(\eta, \Gamma)}\log(1 - D(X)) \right] + \log 4$.

The estimator $\hat{\theta}$, as a result, is rate-optimal even when the true covariance matrix is not necessarily the identity and is unknown (see Theorem 4.1). Below, we present some numerical evidence of the optimality of $\hat{\theta}$, as well as the error of $\hat{\Sigma}$, in Table 3.

5.6 ADAPTATION TO ELLIPTICAL DISTRIBUTIONS

We consider the estimation of the location parameter $\theta$ of an elliptical distribution $EC(\theta, \Sigma, h)$ by the JS-GAN defined in (16). In particular, we study the case with i.i.d. observations $X_1, \ldots, X_n \sim (1-\epsilon)\mathrm{Cauchy}(\theta, I_p) + \epsilon Q$. The density function of $\mathrm{Cauchy}(\theta, \Sigma)$ is given by

$p(x; \theta, \Sigma) \propto |\Sigma|^{-1/2}\left(1 + (x - \theta)^T\Sigma^{-1}(x - \theta)\right)^{-(1+p)/2}$.

Compared with Algorithm 1, the difference lies in the choice of the generator. We consider the generator $G_1(\xi, U) = g_\omega(\xi)U + \theta$, where $g_\omega(\xi)$ is a non-negative neural network parametrized by $\omega$ and some random variable $\xi$. The random vector $U$ is sampled from the uniform distribution on $\{u \in \mathbb{R}^p : \|u\| = 1\}$. If the scatter matrix is unknown, we will use the generator $G_2(\xi, U) = g_\omega(\xi)AU + \theta$, with $AA^T$ modeling the scatter matrix.
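Pulling together the pieces of Sections 5.1-5.5, here is a compact PyTorch sketch of the JS-GAN estimator (14) with the generator $G_\eta(Z) = Z + \eta$ and a one-hidden-layer sigmoid MLP discriminator, trained by alternating gradient steps and averaged over the last epochs; every hyper-parameter below is an illustrative guess, not a tuned value from the paper.

```python
import torch
import torch.nn as nn

def js_gan_mean(X, hidden=20, epochs=150, K=5, m=200, lr_d=0.1, lr_g=0.02, T0=25):
    """Sketch of JS-GAN (14) for robust mean estimation via alternating SGD."""
    n, p = X.shape
    D = nn.Sequential(nn.Linear(p, hidden), nn.Sigmoid(),
                      nn.Linear(hidden, 1), nn.Sigmoid())
    eta = X.median(dim=0).values.clone().requires_grad_(True)  # robust init
    opt_d = torch.optim.SGD(D.parameters(), lr=lr_d)
    opt_g = torch.optim.SGD([eta], lr=lr_g)
    clamp = lambda t: t.clamp(1e-6, 1 - 1e-6)   # keep the logs finite
    trace = []
    for _ in range(epochs):
        for _ in range(K):                      # K discriminator ascent steps
            xb = X[torch.randint(n, (m,))]
            fake = torch.randn(m, p) + eta.detach()
            obj = (torch.log(clamp(D(xb))).mean()
                   + torch.log(1 - clamp(D(fake))).mean())
            opt_d.zero_grad(); (-obj).backward(); opt_d.step()
        fake = torch.randn(m, p) + eta          # one generator descent step
        loss_g = torch.log(1 - clamp(D(fake))).mean()
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        trace.append(eta.detach().clone())
    return torch.stack(trace[-T0:]).mean(dim=0)  # average over last T0 epochs

torch.manual_seed(0)
X = torch.randn(5000, 10); X[:1000] += 5.0      # eps = 0.2 shift contamination
print(js_gan_mean(X).norm())                     # hopefully small; true mean is 0
```

And for the elliptical setting of Section 5.6, a sketch of sampling the multivariate Cauchy through the representation $X = \theta + \xi AU$ used by the generator; writing $\xi U = G/|W|$ with $G \sim N(0, I_p)$ and $W \sim N(0, 1)$ (the Cauchy-as-$t_1$ identity) is our addition for illustration.

```python
import numpy as np

def sample_cauchy_elliptical(n, theta, A, rng):
    """Multivariate Cauchy(theta, A A^T) via X = theta + xi * A * U:
    U uniform on the sphere, xi = ||G|| / |W|, with G ~ N(0, I_p), W ~ N(0,1)."""
    p = len(theta)
    G = rng.standard_normal((n, p))
    U = G / np.linalg.norm(G, axis=1, keepdims=True)   # uniform on the sphere
    xi = np.linalg.norm(G, axis=1) / np.abs(rng.standard_normal(n))
    return theta + (xi[:, None] * U) @ A.T

rng = np.random.default_rng(0)
X = sample_cauchy_elliptical(10000, np.zeros(3), np.eye(3), rng)
print(np.median(X, axis=0))   # coordinatewise medians near 0; the mean does not exist
```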
Table 4 shows the comparison with other methods. Our method still works well under the Cauchy distribution, while the performance of other methods that rely on moment conditions deteriorates in this setting.

ACKNOWLEDGEMENT

The research of Chao Gao was supported in part by NSF grant DMS-1712957 and NSF Career Award DMS-1847590. The research of Yuan Yao was supported in part by Hong Kong Research Grant Council (HKRGC) grant 16303817, National Basic Research Program of China (No. 2015CB85600), National Natural Science Foundation of China (No. 61370004, 11421110001), as well as awards from Tencent AI Lab, Si Family Foundation, Baidu Big Data Institute, and Microsoft Research-Asia.

A ADDITIONAL THEORETICAL RESULTS

In this section, we investigate the performance of discriminator classes of deep neural nets with the ReLU activation function. Since our goal is to learn a $p$-dimensional mean vector, a deep neural network discriminator without any regularization will certainly lead to overfitting. Therefore, it is crucial to design a network class with appropriate regularization. Inspired by the work of Bartlett (1997) and Bartlett & Mendelson (2002), we consider a network class with $\ell_1$ regularization on all layers except for the second-to-last layer, which carries an $\ell_2$ regularization. With

$\mathcal{G}_1^H(B) = \{ g(x) = \mathrm{ReLU}(v^Tx) : \|v\|_1 \le B \}$,

a neural network class with $l+1$ layers is defined as

$\mathcal{G}_{l+1}^H(B) = \left\{ g(x) = \mathrm{ReLU}\left(\sum_{h=1}^H v_hg_h(x)\right) : \sum_{h=1}^H |v_h| \le B, g_h \in \mathcal{G}_l^H(B) \right\}$.

Combining with the last sigmoid layer, we obtain the following discriminator class,

$\mathcal{F}_L^H(\kappa, \tau, B) = \left\{ D(x) = \mathrm{sigmoid}\left(\sum_{j \ge 1} w_j\,\mathrm{sigmoid}\left(\sum_{h=1}^{2p} u_{jh}g_{jh}(x) + b_j\right)\right) : \sum_{j \ge 1} |w_j| \le \kappa, \sum_{h=1}^{2p} u_{jh}^2 \le 2, |b_j| \le \tau, g_{jh} \in \mathcal{G}_{L-1}^H(B) \right\}$.

Note that all the activation functions are $\mathrm{ReLU}(\cdot)$, except that we use $\mathrm{sigmoid}(\cdot)$ in the last layer of the feature map $g(\cdot)$. A theoretical guarantee for the class defined above is given by the following theorem.

Theorem A.1. Assume $\frac{p\log p}{n} \vee \epsilon^2 \le c$ for some sufficiently small constant $c > 0$. Consider i.i.d. observations $X_1, \ldots, X_n \sim (1-\epsilon)N(\theta, I_p) + \epsilon Q$ and the estimator $\hat{\theta}$ defined by (14) with $\mathcal{D} = \mathcal{F}_L^H(\kappa, \tau, B)$, where $H \ge 2p$, $2 \le L = O(1)$, $2 \le B = O(1)$, and $\tau = \sqrt{p\log p}$. We set $\kappa = O\left(\sqrt{\frac{p\log p}{n}} + \epsilon\right)$. Then we have

$\|\hat{\theta} - \theta\|^2 \le C\left(\frac{p\log p}{n} \vee \epsilon^2\right)$,

with probability at least $1 - e^{-C'(p\log p + n\epsilon^2)}$ uniformly over all $\theta \in \mathbb{R}^p$ such that $\|\theta\|_\infty \le \sqrt{\log p}$ and all $Q$.

The theorem shows that JS-GAN with a deep ReLU network can achieve the error rate $\frac{p\log p}{n} \vee \epsilon^2$ with respect to the squared $\ell_2$ loss. The condition $\|\theta\|_\infty \le \sqrt{\log p}$ for the ReLU network can easily be satisfied with a simple preprocessing step. We split the data into two parts, whose sizes are $\log n$ and $n - \log n$, respectively. Then, we calculate the coordinatewise median $\tilde{\theta}$ using the smaller part. It is easy to show that $\|\tilde{\theta} - \theta\|_\infty \le \sqrt{\frac{\log p}{\log n}} \vee \epsilon$ with high probability. Then, for each $X_i$ from the second part, the conditional distribution of $X_i - \tilde{\theta}$ given the first part is $(1-\epsilon)N(\theta - \tilde{\theta}, I_p) + \epsilon\tilde{Q}$. Since $\sqrt{\frac{\log p}{\log n}} \vee \epsilon \le \sqrt{\log p}$, the condition $\|\theta - \tilde{\theta}\|_\infty \le \sqrt{\log p}$ is satisfied, and thus we can apply the estimator (14) using the shifted data $X_i - \tilde{\theta}$ from the second part. The theoretical guarantee of Theorem A.1 then becomes

$\|\hat{\theta} - (\theta - \tilde{\theta})\|^2 \le C\left(\frac{p\log p}{n} \vee \epsilon^2\right)$,

with high probability. Hence, we can use $\hat{\theta} + \tilde{\theta}$ as the final estimator to achieve the same rate as in Theorem A.1. On the other hand, our experiments show that this preprocessing step is not needed.
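For concreteness, a minimal sketch of the split-and-center preprocessing just described; the exact split size $\lceil \log n \rceil$ and the torch-based implementation are our illustrative choices.

```python
import math
import torch

def split_and_center(X):
    """Coarse centering step described above: coordinatewise median of a
    log(n)-sized part, then shift the remaining n - log(n) points."""
    n = X.shape[0]
    k = max(1, math.ceil(math.log(n)))
    theta_tilde = X[:k].median(dim=0).values   # coarse estimate from small part
    return X[k:] - theta_tilde, theta_tilde    # shifted data and the shift

torch.manual_seed(0)
X = torch.randn(5000, 8) + 3.0
X_shift, theta_tilde = split_and_center(X)
# run the GAN estimator on X_shift, then report its output plus theta_tilde
print(theta_tilde[:3])
```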
We believe that the assumption $\|\theta\|_\infty \le \sqrt{\log p}$ is a technical artifact of the analysis of the Rademacher complexity. It can probably be dropped by a more careful analysis.

B DETAILS OF EXPERIMENTS

B.1 TRAINING DETAILS

The implementation of JS-GAN is given in Algorithm 1, and a simple modification of the objective function leads to that of TV-GAN.

Algorithm 1 JS-GAN: $\mathrm{argmin}_\eta \max_w \left[\frac{1}{n}\sum_{i=1}^n \log D_w(X_i) + E\log(1 - D_w(G_\eta(Z)))\right]$
Input: observation set $S = \{X_1, \ldots, X_n\} \subset \mathbb{R}^p$, discriminator network $D_w(x)$, generator network $G_\eta(z) = z + \eta$, learning rates $\gamma_d$ and $\gamma_g$ for the discriminator and the generator, batch size $m$, discriminator steps per iteration $K$, total epochs $T$, averaging epochs $T_0$.
Initialization: initialize $\eta$ with the coordinatewise median of $S$; initialize $w$ with $N(0, 0.05)$ independently on each element, or with Xavier initialization (Glorot & Bengio, 2010).
1: for $t = 1, \ldots, T$ do
2:   for $k = 1, \ldots, K$ do
3:     Sample a mini-batch $\{X_1, \ldots, X_m\}$ from $S$; sample $\{Z_1, \ldots, Z_m\}$ from $N(0, I_p)$
4:     $g_w \leftarrow \nabla_w\left[\frac{1}{m}\sum_{i=1}^m \log D_w(X_i) + \frac{1}{m}\sum_{i=1}^m \log(1 - D_w(G_\eta(Z_i)))\right]$
5:     $w \leftarrow w + \gamma_d g_w$
6:   end for
7:   Sample $\{Z_1, \ldots, Z_m\}$ from $N(0, I_p)$
8:   $g_\eta \leftarrow \nabla_\eta\left[\frac{1}{m}\sum_{i=1}^m \log(1 - D_w(G_\eta(Z_i)))\right]$
9:   $\eta \leftarrow \eta - \gamma_g g_\eta$
10: end for
Return: the average estimate $\eta$ over the last $T_0$ epochs.

Several important implementation details are discussed below.

• How to tune parameters? The choice of learning rates is crucial to the convergence rate, but the minimax game is hard to evaluate. We propose a simple strategy to tune hyper-parameters, including the learning rates. Suppose we have estimators $\hat{\theta}_1, \ldots, \hat{\theta}_M$ with corresponding discriminator networks $D_{\hat{w}_1}, \ldots, D_{\hat{w}_M}$. Fixing $\eta = \hat{\theta}$, we further apply gradient descent to $D_w$ for a few more epochs (but not many, in order to prevent overfitting; for example, 10 epochs), and select the $\hat{\theta}$ with the smallest value of the objective function (14) (JS-GAN) or (12) (TV-GAN). We note that training the discriminator and the generator alternately will usually not suffer from overfitting, since the objective function for either the discriminator or the generator is always changing. However, we must be careful about overfitting when training the discriminator alone with a fixed $\eta$, which is why we apply an early stopping strategy here. Fortunately, the experiments show that if the structures of the networks are the same (and then, of course, the dimensions of the inputs are the same), the choices of hyper-parameters are robust across different models; we present the critical parameters in Table 5 needed to reproduce the experimental results in Tables 1 and 2.

• When to stop training? Judging convergence is a difficult task in GAN training, since oscillation may sometimes occur. In computer vision, people often use a task-related measure and stop training once the requirement based on that measure is achieved. In our experiments below, we simply use a sufficiently large $T$, which works well, but it remains interesting to explore an efficient early stopping rule in future work.

• How to design the network structure? Although Theorem 3.1 and Theorem 3.2 guarantee the minimax rates of TV-GAN without a hidden layer and JS-GAN with one hidden layer, one may wonder whether deeper network structures perform better. From our preliminary experiments, TV-GAN with one hidden layer is significantly better than TV-GAN without any hidden layer. Moreover, JS-GAN [text truncated here; a fragment of Table 5 follows: JS-GAN, network 200-200-100-1, JS, 0.005, 0.1, 2, 200, 25, 0]

B.2 SETTINGS OF CONTAMINATION Q

We introduce the contamination distributions $Q$ used in the experiments.
We first consider $Q = N(\mu * 1_p, I_p)$ with $\mu$ ranging in $\{0.2, 0.5, 1, 5\}$. Note that the total variation distance between $N(0_p, I_p)$ and $N(\mu * 1_p, I_p)$ is of order $\|0_p - \mu * 1_p\| = \|\mu * 1_p\|$. We hope to use different levels of $\|\mu * 1_p\|$ to test the algorithm and verify the error rate in the worst case. Second, we consider $Q = N(1.5 * 1_p, \Sigma)$, a Gaussian distribution with a non-trivial covariance matrix $\Sigma$. The covariance matrix is generated according to the following steps. First, generate a sparse precision matrix $\Gamma = (\gamma_{ij})$ with each entry $\gamma_{ij} = z_{ij} * \tau_{ij}$, $i \le j$, where $z_{ij}$ and $\tau_{ij}$ are independently generated from $\mathrm{Uniform}(0.4, 0.8)$ and $\mathrm{Bernoulli}(0.1)$. We then define $\gamma_{ij} = \gamma_{ji}$ for all $i > j$, and set $\bar{\Gamma} = \Gamma + (|\lambda_{\min}(\Gamma)| + 0.05)I_p$ to make the precision matrix symmetric and positive definite, where $\lambda_{\min}(\Gamma)$ is the smallest eigenvalue of $\Gamma$. The covariance matrix is $\Sigma = \bar{\Gamma}^{-1}$. Finally, we consider $Q$ to be a Cauchy distribution with independent components, where the $j$th component is a standard Cauchy with location parameter $\tau_j = 0.5$.

B.3 COMPARISON DETAILS

In Section 5.3, we compare GANs with dimension halving (Lai et al., 2016) and iterative filtering (Diakonikolas et al., 2017).

• Dimension halving. The experiments conducted are based on the code from https://github.com/kal2000/AgnosticMeanAndCovarianceCode. The only hyper-parameter is the threshold in the outlier removal step, and we take $C = 2$ as suggested in the file outRemSperical.m.

• Iterative filtering. The experiments conducted are based on the code from https://github.com/hoonose/robust-filter. We assume $\epsilon$ is known and take the other hyper-parameters as suggested in the file filterGaussianMean.m.

B.4 SUPPLEMENTARY EXPERIMENTS FOR NETWORK STRUCTURES

The experiments are conducted with i.i.d. observations drawn from $(1-\epsilon)N(0_p, I_p) + \epsilon N(0.5 * 1_p, I_p)$ with $\epsilon = 0.2$. Table 6 summarizes the results for $p = 100$, $n \in \{5000, 50000\}$, and various network structures. We observe that TV-GAN with neural nets with one hidden layer improves over the performance of TV-GAN without any hidden layer. This indicates that the landscape of TV-GAN might be improved by a more complicated network structure. However, adding one more layer does not improve the results. For JS-GAN, we omit the results without a hidden layer because of its lack of robustness (Proposition 3.1). Deeper networks sometimes improve over shallow networks, but this is not always true. We also observe that the optimal choice of the width of the hidden layer depends on the sample size.

B.5 TABLES FOR TESTING THE MINIMAX RATES

Tables 7, 8 and 9 show the numerical results corresponding to Figure 2.

C PROOFS OF PROPOSITION 2.1 AND PROPOSITION 3.1

In the first example, consider

$\mathcal{Q} = \{N(\eta, I_p) : \eta \in \mathbb{R}^p\}$, $\tilde{\mathcal{Q}}_\eta = \{N(\tilde{\eta}, I_p) : \|\tilde{\eta} - \eta\| \le r\}$.

In other words, $\mathcal{Q}$ is the Gaussian location family, and $\tilde{\mathcal{Q}}_\eta$ is taken to be a subset in a local neighborhood of $N(\eta, I_p)$. Then, with $Q = N(\eta, I_p)$ and $\tilde{Q} = N(\tilde{\eta}, I_p)$, the event $\tilde{q}(X)/q(X) \ge 1$ is equivalent to $\|X - \tilde{\eta}\|^2 \le \|X - \eta\|^2$. Since $\|\tilde{\eta} - \eta\| \le r$, we can write $\tilde{\eta} = \eta + \tilde{r}u$ for some $\tilde{r} \in \mathbb{R}$ and $u \in \mathbb{R}^p$ that satisfy $0 \le \tilde{r} \le r$ and $\|u\| = 1$. Then, (8) becomes

$\hat{\theta} = \mathrm{argmin}_{\eta \in \mathbb{R}^p} \sup_{\|u\|=1,\, 0 \le \tilde{r} \le r} \left[ \frac{1}{n}\sum_{i=1}^n \mathbb{I}\left\{u^T(X_i - \eta) \ge \frac{\tilde{r}}{2}\right\} - \mathbb{P}\left(N(0,1) \ge \frac{\tilde{r}}{2}\right) \right]$.  (18)

Letting $r \to 0$, we obtain (9), the exact formula of Tukey's median. The next example is a linear model $y|X \sim N(X^T\theta, 1)$. Consider the following classes:

$\mathcal{Q} = \{P_{y,X} = P_{y|X}P_X : P_{y|X} = N(X^T\eta, 1), \eta \in \mathbb{R}^p\}$,
$\tilde{\mathcal{Q}}_\eta = \{P_{y,X} = P_{y|X}P_X : P_{y|X} = N(X^T\tilde{\eta}, 1), \|\tilde{\eta} - \eta\| \le r\}$.

Here, $P_{y,X}$ stands for the joint distribution of $y$ and $X$.
The two classes $\mathcal{Q}$ and $\tilde{\mathcal{Q}}_\eta$ share the same marginal distribution $P_X$, and the conditional distributions are specified by $N(X^T\eta, 1)$ and $N(X^T\tilde{\eta}, 1)$, respectively. Following the same derivation as for Tukey's median and letting $r \to 0$, we obtain the exact formula of the regression depth (10). It is worth noting that the derivation of (10) does not depend on the marginal distribution $P_X$.

The last example is on covariance/scatter matrix estimation. For this task, we set $\mathcal{Q} = \{N(0, \Gamma) : \Gamma \in \mathcal{E}_p\}$, where $\mathcal{E}_p$ is the class of all $p \times p$ covariance matrices. Inspired by the derivations of Tukey depth and regression depth, it is tempting to choose $\tilde{\mathcal{Q}}$ in a neighborhood of $N(0, \Gamma)$. However, a naive choice would lead to a definition that is not even Fisher consistent. We propose a rank-one neighborhood, given by

$\tilde{\mathcal{Q}}_\Gamma = \{N(0, \tilde{\Gamma}) : \tilde{\Gamma}^{-1} = \Gamma^{-1} + \tilde{r}uu^T \in \mathcal{E}_p, |\tilde{r}| \le r, \|u\| = 1\}$.  (19)

Then, a direct calculation gives

$\mathbb{I}\left\{\frac{dN(0, \tilde{\Gamma})}{dN(0, \Gamma)}(X) \ge 1\right\} = \mathbb{I}\{\tilde{r}|u^TX|^2 \le \log(1 + \tilde{r}u^T\Gamma u)\}$.  (20)

Since $\lim_{\tilde{r} \to 0}\frac{\log(1 + \tilde{r}u^T\Gamma u)}{\tilde{r}u^T\Gamma u} = 1$, the limiting event of (20) is either $\mathbb{I}\{|u^TX|^2 \le u^T\Gamma u\}$ or $\mathbb{I}\{|u^TX|^2 \ge u^T\Gamma u\}$, depending on whether $\tilde{r}$ tends to zero from the left or from the right. Therefore, with the above $\mathcal{Q}$ and $\tilde{\mathcal{Q}}_\Gamma$, (8) becomes (11) in the limit $r \to 0$. Even though the definition (19) is given by a rank-one neighborhood of the inverse covariance matrix, the formula (11) can also be derived with $\tilde{\Gamma}^{-1} = \Gamma^{-1} + \tilde{r}uu^T$ in (19) replaced by $\tilde{\Gamma} = \Gamma + \tilde{r}uu^T$, by applying the Sherman-Morrison formula. A similar formula to (11) in the literature is given by

$\hat{\Sigma} = \mathrm{argmax}_{\Gamma \in \mathcal{E}_p} \inf_{\|u\|=1} \left[ \frac{1}{n}\sum_{i=1}^n \mathbb{I}\{|u^TX_i|^2 \le \beta u^T\Gamma u\} \wedge \frac{1}{n}\sum_{i=1}^n \mathbb{I}\{|u^TX_i|^2 \ge \beta u^T\Gamma u\} \right]$,  (21)

which is recognized as the maximizer of what is known as the matrix depth function (Zhang, 2002; Chen et al., 2018; Paindaveine & Van Bever, 2017). The $\beta$ in (21) is a scalar defined through the equation $\mathbb{P}(N(0,1) \le \sqrt{\beta}) = 3/4$. It is proved in Chen et al. (2018) that $\hat{\Sigma}$ achieves the minimax rate under Huber's $\epsilon$-contamination model. While the formula (11) can be derived from TV-Learning with discriminators of the form $\mathbb{I}\left\{\frac{dN(0,\tilde{\Gamma})}{dN(0,\Gamma)}(X) \ge 1\right\}$, a special case of (6), the formula (21) can be derived directly from TV-GAN with discriminators of the form $\mathbb{I}\left\{\frac{dN(0,\beta\tilde{\Gamma})}{dN(0,\beta\Gamma)}(X) \ge 1\right\}$, by following a similar rank-one neighborhood argument. This completes the derivation of Proposition 2.1.

To prove Proposition 3.1, we define

$F(w) = E_P\log\mathrm{sigmoid}(w^Tg(X)) + E_Q\log(1 - \mathrm{sigmoid}(w^Tg(X))) + \log 4$,

so that $JS_g(P, Q) = \max_{w \in \mathcal{W}} F(w)$. The gradient and Hessian of $F(w)$ are given by

$\nabla F(w) = E_P\frac{e^{-w^Tg(X)}}{1 + e^{-w^Tg(X)}}g(X) - E_Q\frac{e^{w^Tg(X)}}{1 + e^{w^Tg(X)}}g(X)$,

$\nabla^2F(w) = -E_P\frac{e^{w^Tg(X)}}{(1 + e^{w^Tg(X)})^2}g(X)g(X)^T - E_Q\frac{e^{-w^Tg(X)}}{(1 + e^{-w^Tg(X)})^2}g(X)g(X)^T$.

Therefore, $F(w)$ is concave in $w$, and $\max_{w \in \mathcal{W}} F(w)$ is a convex optimization problem over a convex $\mathcal{W}$. Suppose $JS_g(P, Q) = 0$. Then $\max_{w \in \mathcal{W}} F(w) = 0 = F(0)$, which implies $\nabla F(0) = 0$, and thus we have $E_Pg(X) = E_Qg(X)$. Now suppose $E_Pg(X) = E_Qg(X)$, which is equivalent to $\nabla F(0) = 0$. Therefore, $w = 0$ is a stationary point of a concave function, and we have $JS_g(P, Q) = \max_{w \in \mathcal{W}} F(w) = F(0) = 0$.

D PROOFS OF MAIN RESULTS

In this section, we present the proofs of all main theorems of the paper. We first establish some useful lemmas in Section D.1, and then the proofs of the main theorems are given in Section D.2.

D.1 SOME AUXILIARY LEMMAS

Lemma D.1. Given i.i.d.
observations $X_1, \ldots, X_n \sim P$ and the function class $\mathcal{D}$ defined in (13), we have for any $\delta > 0$,

$\sup_{D \in \mathcal{D}}\left|\frac{1}{n}\sum_{i=1}^n D(X_i) - ED(X)\right| \le C\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$,

with probability at least $1 - \delta$, for some universal constant $C > 0$.

Proof. Let $f(X_1, \ldots, X_n) = \sup_{D \in \mathcal{D}}\left|\frac{1}{n}\sum_{i=1}^n D(X_i) - ED(X)\right|$. It is clear that $f(X_1, \ldots, X_n)$ satisfies the bounded difference condition. By McDiarmid's inequality (McDiarmid, 1989), we have

$f(X_1, \ldots, X_n) \le Ef(X_1, \ldots, X_n) + \sqrt{\frac{\log(1/\delta)}{2n}}$,

with probability at least $1 - \delta$. Using a standard symmetrization technique (Pollard, 2012), we obtain the following bound involving the Rademacher complexity,

$Ef(X_1, \ldots, X_n) \le 2E\sup_{D \in \mathcal{D}}\left|\frac{1}{n}\sum_{i=1}^n \epsilon_iD(X_i)\right|$,  (22)

where $\epsilon_1, \ldots, \epsilon_n$ are independent Rademacher random variables. The Rademacher complexity can be bounded by Dudley's integral entropy bound, which gives

$E\sup_{D \in \mathcal{D}}\left|\frac{1}{n}\sum_{i=1}^n \epsilon_iD(X_i)\right| \lesssim E\frac{1}{\sqrt{n}}\int_0^2 \sqrt{\log \mathcal{N}(\delta, \mathcal{D}, \|\cdot\|_n)}\,d\delta$,

where $\mathcal{N}(\delta, \mathcal{D}, \|\cdot\|_n)$ is the $\delta$-covering number of $\mathcal{D}$ with respect to the empirical $\ell_2$ distance $\|f - g\|_n = \sqrt{\frac{1}{n}\sum_{i=1}^n(f(X_i) - g(X_i))^2}$. Since the VC-dimension of $\mathcal{D}$ is $O(p)$, we have $\mathcal{N}(\delta, \mathcal{D}, \|\cdot\|_n) \lesssim p(16e/\delta)^{O(p)}$ (see Theorem 2.6.7 of Van Der Vaart & Wellner (1996)). This leads to the bound

$\frac{1}{\sqrt{n}}\int_0^2 \sqrt{\log \mathcal{N}(\delta, \mathcal{D}, \|\cdot\|_n)}\,d\delta \lesssim \sqrt{\frac{p}{n}}$,

which gives the desired result.

Lemma D.2. Given i.i.d. observations $X_1, \ldots, X_n \sim P$ and the function class $\mathcal{D}$ defined in (15), we have for any $\delta > 0$,

$\sup_{D \in \mathcal{D}}\left|\frac{1}{n}\sum_{i=1}^n \log D(X_i) - E\log D(X)\right| \le C\kappa\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$,

with probability at least $1 - \delta$, for some universal constant $C > 0$.

Proof. Let $f(X_1, \ldots, X_n) = \sup_{D \in \mathcal{D}}\left|\frac{1}{n}\sum_{i=1}^n \log D(X_i) - E\log D(X)\right|$. Since $\sup_{D \in \mathcal{D}}\sup_x |\log(2D(x))| \le \kappa$, we have

$\sup_{x_1, \ldots, x_n, x_i'}|f(x_1, \ldots, x_n) - f(x_1, \ldots, x_{i-1}, x_i', x_{i+1}, \ldots, x_n)| \le \frac{2\kappa}{n}$.

Therefore, by McDiarmid's inequality (McDiarmid, 1989), we have

$f(X_1, \ldots, X_n) \le Ef(X_1, \ldots, X_n) + \kappa\sqrt{\frac{2\log(1/\delta)}{n}}$,  (23)

with probability at least $1 - \delta$. By the same argument as for (22), it is sufficient to bound the Rademacher complexity $E\sup_{D \in \mathcal{D}}\left|\frac{1}{n}\sum_{i=1}^n \epsilon_i\log(2D(X_i))\right|$. Since the function $\psi(x) = \log(2\,\mathrm{sigmoid}(x))$ has Lipschitz constant $1$ and satisfies $\psi(0) = 0$, we have

$E\sup_{D \in \mathcal{D}}\left|\frac{1}{n}\sum_{i=1}^n \epsilon_i\log(2D(X_i))\right| \le 2E\sup_{\sum_{j\ge1}|w_j| \le \kappa,\, u_j \in \mathbb{R}^p,\, b_j \in \mathbb{R}}\left|\frac{1}{n}\sum_{i=1}^n \epsilon_i\sum_{j\ge1}w_j\sigma(u_j^TX_i + b_j)\right|$,

which uses Theorem 12 of Bartlett & Mendelson (2002). By Hölder's inequality, we further have

$E\sup_{\sum_{j\ge1}|w_j| \le \kappa,\, u_j \in \mathbb{R}^p,\, b_j \in \mathbb{R}}\left|\frac{1}{n}\sum_{i=1}^n \epsilon_i\sum_{j\ge1}w_j\sigma(u_j^TX_i + b_j)\right| \le \kappa E\max_{j\ge1}\sup_{u_j \in \mathbb{R}^p, b_j \in \mathbb{R}}\left|\frac{1}{n}\sum_{i=1}^n \epsilon_i\sigma(u_j^TX_i + b_j)\right| = \kappa E\sup_{u \in \mathbb{R}^p, b \in \mathbb{R}}\left|\frac{1}{n}\sum_{i=1}^n \epsilon_i\sigma(u^TX_i + b)\right|$.

Note that for a monotone function $\sigma: \mathbb{R} \to [0, 1]$, the VC-dimension of the class $\{\sigma(u^Tx + b) : u \in \mathbb{R}^p, b \in \mathbb{R}\}$ is $O(p)$. Therefore, by the same Dudley integral entropy bound argument as in the proof of Lemma D.1, we have

$E\sup_{u \in \mathbb{R}^p, b \in \mathbb{R}}\left|\frac{1}{n}\sum_{i=1}^n \epsilon_i\sigma(u^TX_i + b)\right| \lesssim \sqrt{\frac{p}{n}}$,

which leads to the desired result.

Lemma D.3. Given i.i.d. observations $X_1, \ldots, X_n \sim N(\theta, I_p)$ and the function class $\mathcal{F}_L^H(\kappa, \tau, B)$, assume $\|\theta\|_\infty \le \sqrt{\log p}$ and set $\tau = \sqrt{p\log p}$. We have for any $\delta > 0$,

$\sup_{D \in \mathcal{F}_L^H(\kappa, \tau, B)}\left|\frac{1}{n}\sum_{i=1}^n \log D(X_i) - E\log D(X)\right| \le C\kappa\left((2B)^{L-1}\sqrt{\frac{p\log p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$,

with probability at least $1 - \delta$, for some universal constant $C > 0$.

Proof. Write $f(X_1, \ldots, X_n) = \sup_{D \in \mathcal{F}_L^H(\kappa, \tau, B)}\left|\frac{1}{n}\sum_{i=1}^n \log D(X_i) - E\log D(X)\right|$. Then, the inequality (23) holds with probability at least $1 - \delta$. It is sufficient to analyze the Rademacher complexity.
Using the fact that the function $\log(2\,\mathrm{sigmoid}(x))$ is Lipschitz, together with Hölder's inequality, we have

$E\sup_{D \in \mathcal{F}_L^H(\kappa, \tau, B)}\left|\frac{1}{n}\sum_{i=1}^n \epsilon_i\log(2D(X_i))\right|$
$\le 2E\sup_{\|w\|_1 \le \kappa,\, \|u_{j\cdot}\|^2 \le 2,\, |b_j| \le \tau,\, g_{jh} \in \mathcal{G}_{L-1}^H(B)}\left|\frac{1}{n}\sum_{i=1}^n \epsilon_i\sum_{j\ge1}w_j\,\mathrm{sigmoid}\left(\sum_{h=1}^{2p}u_{jh}g_{jh}(X_i) + b_j\right)\right|$
$\le 2\kappa E\sup_{\|u\|^2 \le 2,\, |b| \le \tau,\, g_h \in \mathcal{G}_{L-1}^H(B)}\left|\frac{1}{n}\sum_{i=1}^n \epsilon_i\,\mathrm{sigmoid}\left(\sum_{h=1}^{2p}u_hg_h(X_i) + b\right)\right|$
$\le 4\kappa E\sup_{\|u\|^2 \le 2,\, |b| \le \tau,\, g_h \in \mathcal{G}_{L-1}^H(B)}\left|\frac{1}{n}\sum_{i=1}^n \epsilon_i\left(\sum_{h=1}^{2p}u_hg_h(X_i) + b\right)\right|$
$\le 8\sqrt{p}\,\kappa E\sup_{g \in \mathcal{G}_{L-1}^H(B)}\left|\frac{1}{n}\sum_{i=1}^n \epsilon_ig(X_i)\right| + 4\kappa\tau E\left|\frac{1}{n}\sum_{i=1}^n \epsilon_i\right|$.

Now we use the notation $Z_i = X_i - \theta \sim N(0, I_p)$ for $i = 1, \ldots, n$. We bound $E\sup_{g \in \mathcal{G}_{L-1}^H(B)}\left|\frac{1}{n}\sum_{i=1}^n \epsilon_ig(Z_i + \theta)\right|$ by induction. Since

$E\left(\sup_{g \in \mathcal{G}_1^H(B)}\frac{1}{n}\sum_{i=1}^n \epsilon_ig(Z_i + \theta)\right) \le E\left(\sup_{\|v\|_1 \le B}\frac{1}{n}\sum_{i=1}^n \epsilon_iv^T(Z_i + \theta)\right) \le B\left(E\left\|\frac{1}{n}\sum_{i=1}^n \epsilon_iZ_i\right\|_\infty + \|\theta\|_\infty E\left|\frac{1}{n}\sum_{i=1}^n \epsilon_i\right|\right) \le CB\frac{\sqrt{\log p} + \|\theta\|_\infty}{\sqrt{n}}$,

and

$E\left(\sup_{g \in \mathcal{G}_{l+1}^H(B)}\frac{1}{n}\sum_{i=1}^n \epsilon_ig(Z_i + \theta)\right) \le E\left(\sup_{\|v\|_1 \le B,\, g_h \in \mathcal{G}_l^H(B)}\frac{1}{n}\sum_{i=1}^n \epsilon_i\sum_{h=1}^H v_hg_h(Z_i + \theta)\right) \le BE\left(\sup_{g \in \mathcal{G}_l^H(B)}\left|\frac{1}{n}\sum_{i=1}^n \epsilon_ig(Z_i + \theta)\right|\right) \le 2BE\left(\sup_{g \in \mathcal{G}_l^H(B)}\frac{1}{n}\sum_{i=1}^n \epsilon_ig(Z_i + \theta)\right)$,

we have

$E\left(\sup_{g \in \mathcal{G}_{L-1}^H(B)}\frac{1}{n}\sum_{i=1}^n \epsilon_ig(Z_i + \theta)\right) \le C(2B)^{L-1}\frac{\sqrt{\log p} + \|\theta\|_\infty}{\sqrt{n}}$.

Combining the above inequalities, we get

$E\left(\sup_{D \in \mathcal{F}_L^H(\kappa, \tau, B)}\frac{1}{n}\sum_{i=1}^n \epsilon_i\log D(Z_i + \theta)\right) \le C\kappa\left(\sqrt{p}(2B)^{L-1}\frac{\sqrt{\log p} + \|\theta\|_\infty}{\sqrt{n}} + \frac{\tau}{\sqrt{n}}\right)$.

This leads to the desired result under the conditions on $\tau$ and $\|\theta\|_\infty$.

D.2 PROOFS OF MAIN THEOREMS

Proof of Theorem 3.1. We first introduce some notation. Define $F(P, \eta) = \max_{w,b}F_{w,b}(P, \eta)$, where

$F_{w,b}(P, \eta) = E_P\,\mathrm{sigmoid}(w^TX + b) - E_{N(\eta, I_p)}\,\mathrm{sigmoid}(w^TX + b)$.

With this definition, we have $\hat{\theta} = \mathrm{argmin}_\eta F(\mathbb{P}_n, \eta)$, where we write $\mathbb{P}_n$ for the empirical distribution $\frac{1}{n}\sum_{i=1}^n \delta_{X_i}$. We shorthand $N(\eta, I_p)$ by $P_\eta$, and then

$F(P_\theta, \hat{\theta}) \le F((1-\epsilon)P_\theta + \epsilon Q, \hat{\theta}) + \epsilon$  (24)
$\le F(\mathbb{P}_n, \hat{\theta}) + \epsilon + C\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$  (25)
$\le F(\mathbb{P}_n, \theta) + \epsilon + C\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$  (26)
$\le F((1-\epsilon)P_\theta + \epsilon Q, \theta) + \epsilon + 2C\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$  (27)
$\le F(P_\theta, \theta) + 2\epsilon + 2C\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$  (28)
$= 2\epsilon + 2C\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$.  (29)

The above inequalities hold with probability at least $1 - \delta$. We now explain each inequality. Since

$F((1-\epsilon)P_\theta + \epsilon Q, \eta) = \max_{w,b}\left[(1-\epsilon)F_{w,b}(P_\theta, \eta) + \epsilon F_{w,b}(Q, \eta)\right]$,

we have $\sup_\eta|F((1-\epsilon)P_\theta + \epsilon Q, \eta) - F(P_\theta, \eta)| \le \epsilon$, which implies (24) and (28). The inequalities (25) and (27) are implied by Lemma D.1 and the fact that

$\sup_\eta|F(\mathbb{P}_n, \eta) - F((1-\epsilon)P_\theta + \epsilon Q, \eta)| \le \sup_{w,b}\left|\frac{1}{n}\sum_{i=1}^n \mathrm{sigmoid}(w^TX_i + b) - E\,\mathrm{sigmoid}(w^TX + b)\right|$.

The inequality (26) is a direct consequence of the definition of $\hat{\theta}$. Finally, it is easy to see that $F(P_\theta, \theta) = 0$, which gives (29). In summary, we have derived that with probability at least $1 - \delta$,

$F_{w,b}(P_\theta, \hat{\theta}) \le 2\epsilon + 2C\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$,

for all $w \in \mathbb{R}^p$ and $b \in \mathbb{R}$. For any $u \in \mathbb{R}^p$ such that $\|u\| = 1$, we take $w = u$ and $b = -u^T\theta$, and we have

$f(0) - f(u^T(\theta - \hat{\theta})) \le 2\epsilon + 2C\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$,

where $f(t) = \int \frac{1}{1 + e^{z+t}}\phi(z)\,dz$, with $\phi(\cdot)$ being the probability density function of $N(0, 1)$. It is not hard to see that as long as $|f(t) - f(0)| \le c$ for some sufficiently small constant $c > 0$, then $|f(t) - f(0)| \ge c'|t|$ for some constant $c' > 0$. This implies

$\|\hat{\theta} - \theta\| = \sup_{\|u\|=1}|u^T(\hat{\theta} - \theta)| \le \frac{1}{c'}\sup_{\|u\|=1}\left|f(0) - f(u^T(\theta - \hat{\theta}))\right| \lesssim \epsilon + \sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}$,

with probability at least $1 - \delta$. The proof is complete.

Proof of Theorem 3.2. We continue to use $P_\eta$ to denote $N(\eta, I_p)$. Define $F(P, \eta) = \max_{\|w\|_1 \le \kappa, u, b}F_{w,u,b}(P, \eta)$, where

$F_{w,u,b}(P, \eta) = E_P\log D(X) + E_{N(\eta, I_p)}\log(1 - D(X)) + \log 4$,

with $D(x) = \mathrm{sigmoid}\left(\sum_{j\ge1}w_j\sigma(u_j^Tx + b_j)\right)$.
Then,

$F(P_\theta, \hat{\theta}) \le F((1-\epsilon)P_\theta + \epsilon Q, \hat{\theta}) + 2\kappa\epsilon$  (30)
$\le F(\mathbb{P}_n, \hat{\theta}) + 2\kappa\epsilon + C\kappa\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$  (31)
$\le F(\mathbb{P}_n, \theta) + 2\kappa\epsilon + C\kappa\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$  (32)
$\le F((1-\epsilon)P_\theta + \epsilon Q, \theta) + 2\kappa\epsilon + 2C\kappa\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$  (33)
$\le F(P_\theta, \theta) + 4\kappa\epsilon + 2C\kappa\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$  (34)
$= 4\kappa\epsilon + 2C\kappa\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$.

The inequalities (30)-(34) follow arguments similar to those for (24)-(28). To be specific, (31) and (33) are implied by Lemma D.2, and (32) is a direct consequence of the definition of $\hat{\theta}$. To see (30) and (34), note that for any $w$ such that $\|w\|_1 \le \kappa$, we have

$|\log(2D(X))| \le \left|\sum_{j\ge1}w_j\sigma(u_j^TX + b_j)\right| \le \kappa$.

A similar argument gives the same bound for $|\log(2(1 - D(X)))|$. This leads to

$\sup_\eta|F((1-\epsilon)P_\theta + \epsilon Q, \eta) - F(P_\theta, \eta)| \le 2\kappa\epsilon$,  (35)

which further implies (30) and (34). To summarize, we have derived that with probability at least $1 - \delta$,

$F_{w,u,b}(P_\theta, \hat{\theta}) \le 4\kappa\epsilon + 2C\kappa\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$,

for all $\|w\|_1 \le \kappa$, $\|u_j\| \le 1$ and $b_j$. Take $w_1 = \kappa$, $w_j = 0$ for all $j > 1$, $u_1 = u$ for some unit vector $u$, and $b_1 = -u^T\theta$, and we get

$f_{u^T(\hat{\theta} - \theta)}(\kappa) \le 4\kappa\epsilon + 2C\kappa\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$,  (36)

where

$f_\delta(t) = E\log\frac{2}{1 + e^{-t\sigma(Z)}} + E\log\frac{2}{1 + e^{t\sigma(Z + \delta)}}$,  (37)

with $Z \sim N(0, 1)$. Direct calculations give

$f_\delta'(t) = E\frac{e^{-t\sigma(Z)}}{1 + e^{-t\sigma(Z)}}\sigma(Z) - E\frac{e^{t\sigma(Z + \delta)}}{1 + e^{t\sigma(Z + \delta)}}\sigma(Z + \delta)$,

$f_\delta''(t) = -E\sigma(Z)^2\frac{e^{-t\sigma(Z)}}{(1 + e^{-t\sigma(Z)})^2} - E\sigma(Z + \delta)^2\frac{e^{t\sigma(Z + \delta)}}{(1 + e^{t\sigma(Z + \delta)})^2}$.  (38)

Therefore, $f_\delta(0) = 0$, $f_\delta'(0) = \frac{1}{2}(E\sigma(Z) - E\sigma(Z + \delta))$, and $f_\delta''(t) \ge -\frac{1}{2}$. By the inequality $f_\delta(\kappa) \ge f_\delta(0) + \kappa f_\delta'(0) - \frac{1}{4}\kappa^2$, we have $\kappa f_\delta'(0) \le f_\delta(\kappa) + \kappa^2/4$. In view of (36), we have

$\frac{\kappa}{2}\left(\int\sigma(z)\phi(z)\,dz - \int\sigma(z + u^T(\hat{\theta} - \theta))\phi(z)\,dz\right) \le 4\kappa\epsilon + 2C\kappa\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right) + \frac{\kappa^2}{4}$.

It is easy to see that for the stated choices of $\sigma(\cdot)$, $\int\sigma(z)\phi(z)\,dz - \int\sigma(z + t)\phi(z)\,dz$ is locally linear in $t$. This implies that

$\kappa\|\hat{\theta} - \theta\| = \kappa\sup_{\|u\|=1}u^T(\hat{\theta} - \theta) \lesssim \kappa\left(\epsilon + \sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right) + \kappa^2$.

Therefore, with $\kappa \lesssim \sqrt{\frac{p}{n}} + \epsilon$, the proof is complete.

Proof of Theorem 4.1. We use $P_{\theta,\Sigma,h}$ to denote the elliptical distribution $EC(\theta, \Sigma, h)$. Define $F(P, (\eta, \Gamma, g)) = \max_{\|w\|_1 \le \kappa, u, b}F_{w,u,b}(P, (\eta, \Gamma, g))$, where

$F_{w,u,b}(P, (\eta, \Gamma, g)) = E_P\log D(X) + E_{EC(\eta,\Gamma,g)}\log(1 - D(X)) + \log 4$,

with $D(x) = \mathrm{sigmoid}\left(\sum_{j\ge1}w_j\sigma(u_j^Tx + b_j)\right)$. Let $P$ be a data generating process that satisfies $TV(P, P_{\theta,\Sigma,h}) \le \epsilon$; then there exist probability distributions $Q_1$ and $Q_2$ such that $P + \epsilon Q_1 = P_{\theta,\Sigma,h} + \epsilon Q_2$. The explicit construction of $Q_1, Q_2$ is given in the proof of Theorem 5.1 of Chen et al. (2018). This implies that

$|F(P, (\eta, \Gamma, g)) - F(P_{\theta,\Sigma,h}, (\eta, \Gamma, g))| \le \sup_{\|w\|_1 \le \kappa, u, b}|F_{w,u,b}(P, (\eta, \Gamma, g)) - F_{w,u,b}(P_{\theta,\Sigma,h}, (\eta, \Gamma, g))| = \sup_{\|w\|_1 \le \kappa, u, b}\epsilon|E_{Q_2}\log(2D(X)) - E_{Q_1}\log(2D(X))| \le 2\kappa\epsilon$.  (39)

Then, the same argument as in Theorem 3.2 (with (35) replaced by (39)) leads to the fact that with probability at least $1 - \delta$,

$F_{w,u,b}(P_{\theta,\Sigma,h}, (\hat{\theta}, \hat{\Sigma}, \hat{h})) \le 4\kappa\epsilon + 2C\kappa\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$,

for all $\|w\|_1 \le \kappa$, $\|u_j\| \le 1$ and $b_j$. Take $w_1 = \kappa$, $w_j = 0$ for all $j > 1$, $u_1 = u/\sqrt{u^T\hat{\Sigma}u}$ for some unit vector $u$, and $b_1 = -u^T\theta/\sqrt{u^T\hat{\Sigma}u}$, and we get

$f_{\frac{u^T(\hat{\theta} - \theta)}{\sqrt{u^T\hat{\Sigma}u}}}(\kappa) \le 4\kappa\epsilon + 2C\kappa\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$,

where

$f_\delta(t) = \int\log\left(\frac{2}{1 + e^{-t\sigma(\Delta s)}}\right)h(s)\,ds + \int\log\left(\frac{2}{1 + e^{t\sigma(\delta + s)}}\right)\hat{h}(s)\,ds$,

with $\delta = \frac{u^T(\hat{\theta} - \theta)}{\sqrt{u^T\hat{\Sigma}u}}$ and $\Delta = \frac{\sqrt{u^T\Sigma u}}{\sqrt{u^T\hat{\Sigma}u}}$. An argument similar to the proof of Theorem 3.2 gives

$\frac{\kappa}{2}\left(\int\sigma(\Delta s)h(s)\,ds - \int\sigma(\delta + s)\hat{h}(s)\,ds\right) \le 4\kappa\epsilon + 2C\kappa\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right) + \frac{\kappa^2}{4}$.

Since $\int\sigma(\Delta s)h(s)\,ds = \frac{1}{2} = \int\sigma(s)\hat{h}(s)\,ds$, the above bound is equivalent to

$\frac{\kappa}{2}(H(0) - H(\delta)) \le 4\kappa\epsilon + 2C\kappa\left(\sqrt{\frac{p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right) + \frac{\kappa^2}{4}$,

where $H(\delta) = \int\sigma(\delta + s)\hat{h}(s)\,ds$.
The above bound also holds for $\frac{\kappa}{2}(H(\delta) - H(0))$ by a symmetric argument, and therefore the same bound holds for $\frac{\kappa}{2}|H(\delta) - H(0)|$. Since $H'(0) = \int\sigma(s)(1 - \sigma(s))\hat{h}(s)\,ds = 1$, $H(\delta)$ is locally linear at $\delta = 0$, which leads to the desired bound for $\delta = \frac{u^T(\hat{\theta} - \theta)}{\sqrt{u^T\hat{\Sigma}u}}$. Finally, since $u^T\hat{\Sigma}u \le M$, we get the bound for $u^T(\hat{\theta} - \theta)$. The proof is complete by taking the supremum over all unit vectors $u$.

Proof of Theorem A.1. We continue to use $P_\eta$ to denote $N(\eta, I_p)$. Define $F(P, \eta) = \sup_{D \in \mathcal{F}_L^H(\kappa,\tau,B)}F_D(P, \eta)$, with

$F_D(P, \eta) = E_P\log D(X) + E_{N(\eta, I_p)}\log(1 - D(X)) + \log 4$.

Following the same argument as in the proof of Theorem 3.2 and using Lemma D.3, we have

$F_D(P_\theta, \hat{\theta}) \le C\kappa\left(\epsilon + (2B)^{L-1}\sqrt{\frac{p\log p}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}\right)$,

uniformly over $D \in \mathcal{F}_L^H(\kappa, \tau, B)$ with probability at least $1 - \delta$. Choose $w_1 = \kappa$ and $w_j = 0$ for all $j > 1$. For any unit vector $\tilde{u} \in \mathbb{R}^p$, take $u_{1h} = -u_{1(h+p)} = \tilde{u}_h$ for $h = 1, \ldots, p$ and $b_1 = -\tilde{u}^T\theta$. For $h = 1, \ldots, p$, set $g_{1h}(x) = \max(x_h, 0)$. For $h = p+1, \ldots, 2p$, set $g_{1h}(x) = \max(-x_{h-p}, 0)$. It is obvious that such $u$ and $b$ satisfy $\sum_h u_{1h}^2 \le 2$ and $|b_1| \le \|\theta\| \le \sqrt{p}\|\theta\|_\infty \le \sqrt{p\log p}$. We need to show that both the functions $\max(x, 0)$ and $\max(-x, 0)$ are elements of $\mathcal{G}_{L-1}^H(B)$. This can be proved by induction. It is obvious that $\max(x_h, 0), \max(-x_h, 0) \in \mathcal{G}_1^H(B)$ for any $h = 1, \ldots, p$. Suppose we have $\max(x_h, 0), \max(-x_h, 0) \in \mathcal{G}_l^H(B)$ for any $h = 1, \ldots, p$. Then,

$\max(\max(x_h, 0) - \max(-x_h, 0), 0) = \max(x_h, 0)$,
$\max(\max(-x_h, 0) - \max(x_h, 0), 0) = \max(-x_h, 0)$.

Therefore, $\max(x_h, 0), \max(-x_h, 0) \in \mathcal{G}_{l+1}^H(B)$, which completes the induction.
1. What is the main contribution of the paper regarding robust high-dimensional estimation? 2. What are the strengths and weaknesses of the proposed approach compared to previous works? 3. How does the reviewer assess the novelty and theoretical appeal of the paper's framework? 4. What are some concerns regarding the lack of provable guarantees for the proposed algorithms? 5. Can the proposed methods handle stronger notions of corruption considered in related works? 6. Are there any questions about the training time of the GANs or their performance in specific scenarios?
Review
Review
The paper considers the problem of robust high-dimensional estimation in Huber's contamination model. The algorithm is given samples from a distribution (1 - eps) * P + eps * Q, where P is a "nice" distribution (e.g. a Gaussian), eps is the fraction of contaminated points, and Q is some unconstrained noise distribution. The goal is then to estimate the parameters of P as well as possible, given this noise. The settings they primarily consider in this paper are when P is a Gaussian with unknown mean and identity covariance, or when it is a Gaussian with unknown covariance. Classical estimators such as Tukey depth or matrix depth for these problems achieve optimal minimax rates, but are computationally expensive to compute. However, recent work of [1, 2] proposes efficient estimators for this problem that (nearly) achieve these rates. This paper considers a different approach to this problem. They observe that in the case when P is a Gaussian, these classical depth functions (or minor variations thereof) can be written as the asymptotic limits of certain types of GANs. They then demonstrate that for specific choices of the architecture and regularization of the discriminator, the global optima of this GAN objective achieve minimax optimal error and rates in Huber's contamination model. Unfortunately, they do not prove that their algorithm achieves these global optima. As a result, they do not have any provable guarantees for their algorithms. However, they show experimentally that against many choices of noise distribution, their algorithms obtain good error, both for mean estimation and covariance estimation (at least, the JS-GAN seems to consistently succeed; they acknowledge that the TV-GAN seems to be unstable in certain regimes).

Pros:
- I think the question of finding algorithmic equivalents of the Tukey median is a very interesting question, and this is an interesting attempt.
- I did not replicate their experiments on GANs, but the experimental numbers seem promising. However, I have some mixed feelings about this (see below).

Cons:
- A clear disadvantage of the approach relative to prior algorithmic work is that the algorithms proposed in the paper do not have provable guarantees. For settings such as secure machine learning, the lack of such guarantees is problematic. Given that previous works give efficient (i.e. practical) algorithms for these problems with provable guarantees, I am unclear how much impact this will have in practice.
- Given that TV-GAN is known to fail (as shown in Table 6), it is unclear how useful the numbers for it are in Table 1. Without these numbers, it then appears that JS-GAN and the filtering algorithm often achieve comparable results, although it is very interesting that JS-GAN is consistently slightly better.
- I feel that the authors fall short of their goal to make a good algorithmic analog of these depth-based estimators. This is a subtle but important point, so let me justify this. As the authors explain, the major advantage of such estimators would be that they are model-free: they should give robustness for a number of settings, not just Gaussians, but also elliptical distributions, sub-gaussian distributions, etc. However, the correspondence that the authors derive to their GAN formulation of depth heavily leverages the Gaussianity of the underlying distribution. Specifically, it leverages the fact that the Scheffe set between two Gaussians is a half-plane, which clearly fails for more general distributions.
As a result, it appears to me that this variational formulation of depth succeeds only in a very model-specific setting. Consequently, from a theoretical perspective it is unclear what advantage this formulation has.

Questions:
- How long does it take to train the GANs? Is it comparable to the runtime of the other algorithms?
- Can these algorithms work in the stronger notions of corruption considered in [1, 2]?

Overall conclusion: The paper proposes a novel framework for robust estimation. However, in light of the previous provable and much simpler algorithms for robust estimation, in the end it seems to me that deep learning is an unnecessarily complicated approach to this problem. While the authors demonstrate some experimental improvement in the test cases they tried, the lack of provable guarantees for their approach limits the theoretical appeal of their paper. More conceptually, I am unconvinced that their approach is the correct approach to understanding algorithmic notions of depth, for the reasons described above.

[1] Kevin A. Lai, Anup B. Rao, and Santosh Vempala. Agnostic estimation of mean and covariance. In Foundations of Computer Science (FOCS), 2016 IEEE 57th Annual Symposium on, pp. 665-674. IEEE, 2016.
[2] Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, and Alistair Stewart. Being robust (in high dimensions) can be practical. arXiv preprint arXiv:1703.00893, 2017.
ICLR
Title ROBUST ESTIMATION VIA GENERATIVE ADVERSARIAL NETWORKS Abstract Robust estimation under Huber’s -contamination model has become an important topic in statistics and theoretical computer science. Statistically optimal procedures such as Tukey’s median and other estimators based on depth functions are impractical because of their computational intractability. In this paper, we establish an intriguing connection between f -GANs and various depth functions through the lens of f -Learning. Similar to the derivation of f GANs, we show that these depth functions that lead to statistically optimal robust estimators can all be viewed as variational lower bounds of the total variation distance in the framework of f -Learning. This connection opens the door of computing robust estimators using tools developed for training GANs. In particular, we show in both theory and experiments that some appropriate structures of discriminator networks with hidden layers in GANs lead to statistically optimal robust location estimators for both Gaussian distribution and general elliptical distributions where first moment may not exist. 1 INTRODUCTION In the setting of Huber’s -contamination model (Huber, 1964; 1965), one has i.i.d observations X1, ..., Xn ∼ (1− )Pθ + Q, (1) and the goal is to estimate the model parameter θ. Under the data generating process (1), each observation has a 1 − probability to be drawn from Pθ and the other probability to be drawn from the contamination distributionQ. The presence of an unknown contamination distribution poses both statistical and computational challenges to the problem. For example, consider a normal mean estimation problem with Pθ = N(θ, Ip). Due to the contamination of data, the sample average, which is optimal when = 0, can be arbitrarily far away from the true mean if Q charges a positive probability at infinity. Moreover, even robust estimators such as coordinatewise median and geometric median are proved to be suboptimal under the setting of (1) (Chen et al., 2018; Diakonikolas et al., 2016a; Lai et al., 2016). The search for both statistically optimal and computationally feasible procedures has become a fundamental problem in areas including statistics and computer science. For the normal mean estimation problem, it has been shown in Chen et al. (2018) that the minimax rate with respect to the squared `2 loss is pn ∨ 2, and is achieved by Tukey’s median (Tukey, 1975). Despite the statistical optimality of Tukey’s median, its computation is not tractable. In fact, even an approximate algorithm takes O(eCp) in time (Amenta et al., 2000; Chan, 2004; Rousseeuw & Struyf, 1998). Recent developments in theoretical computer science are focused on the search of computationally tractable algorithms for estimating θ under Huber’s -contamination model (1). The success of the efforts started from two fundamental papers Diakonikolas et al. (2016a); Lai et al. (2016), where two different but related computational strategies “iterative filtering” and “dimension halving” were proposed to robustly estimate the normal mean. These algorithms can provably achieve the minimax rate pn ∨ 2 up to a poly-logarithmic factor in polynomial time. The main idea behind the two methods is a critical fact that a good robust moment estimator can be certified efficiently by higher moments. 
This idea was later further extended (Diakonikolas et al., 2017; Du et al., 2017; Diakonikolas et al., 2016b; 2018a;c;b; Kothari et al., 2018) to develop robust and computable procedures for various other problems. However, many of the computationally feasible procedures for robust mean estimation in the literature rely on knowledge of the covariance matrix and sometimes of the contamination proportion. Even though these assumptions can be relaxed, nontrivial modifications of the algorithms are required for such extensions, and the statistical error rates may also be affected.
Compared with these computationally feasible procedures proposed in the recent literature for robust estimation, Tukey's median (9) and other depth-based estimators (Rousseeuw & Hubert, 1999; Mizera, 2002; Zhang, 2002; Mizera & Müller, 2004; Paindaveine & Van Bever, 2017) have some indispensable advantages in terms of their statistical properties. First, the depth-based estimators have clear objective functions that can be interpreted from the perspective of projection pursuit (Mizera, 2002). Second, the depth-based procedures are adaptive to unknown nuisance parameters in the models, such as covariance structures, the contamination proportion, and error distributions (Chen et al., 2018; Gao, 2017). Last but not least, Tukey's depth and other depth functions are mostly designed for robust quantile estimation, while the recent advancements in the theoretical computer science literature all focus on robust moment estimation. Although this is not an issue when it comes to normal mean estimation, the difference is fundamental for robust estimation under general settings such as elliptical distributions, where moments do not necessarily exist.
Given the desirable statistical properties discussed above, this paper focuses on the development of computational strategies for depth-like procedures. Our key observation is that robust estimators that are maximizers of depth functions, including halfspace depth, regression depth, and covariance matrix depth, can all be derived under the framework of f-GAN (Nowozin et al., 2016). As a result, these depth-based estimators can be viewed as minimizers of variational lower bounds of the total variation distance between the empirical measure and the model distribution (Proposition 2.1). This observation allows us to leverage recent developments in the deep learning literature to compute these variational lower bounds through neural network approximations. Our theoretical results give insights into how to choose appropriate neural network classes that lead to minimax optimal robust estimation under Huber's ε-contamination model. In particular, Theorems 3.1 and 3.2 characterize the networks that can robustly estimate the Gaussian mean by TV-GAN and JS-GAN, respectively; Theorem 4.1 is an extension to robust location estimation under the class of elliptical distributions, which includes the Cauchy distribution, whose mean does not exist. Numerical experiments in Section 5 are provided to show the success of these GANs.
2 ROBUST ESTIMATION AND f-GAN
We start with the definition of f-divergence (Csiszár, 1964; Ali & Silvey, 1966). Given a strictly convex function f that satisfies f(1) = 0, the f-divergence between two probability distributions P and Q is defined by
D_f(P‖Q) = ∫ f(p/q) dQ. (2)
Here, we use p(·) and q(·) to stand for the density functions of P and Q with respect to some common dominating measure. For a fully rigorous definition, see Polyanskiy & Wu (2017).
Let f* be the convex conjugate of f, that is, f*(t) = sup_{u∈dom f}(ut − f(u)). A variational lower bound of (2) is
D_f(P‖Q) ≥ sup_{T∈𝒯} [E_P T(X) − E_Q f*(T(X))]. (3)
Note that the inequality (3) holds for any class 𝒯, and it becomes an equality whenever the class 𝒯 contains the function f′(p/q) (Nguyen et al., 2010). For notational simplicity, we also use f′ for an arbitrary element of the subdifferential when the derivative does not exist. With i.i.d. observations X_1, ..., X_n ∼ P, the variational lower bound (3) naturally leads to the following learning method:
P̂ = argmin_{Q∈𝒬} sup_{T∈𝒯} [(1/n) Σ_{i=1}^n T(X_i) − E_Q f*(T(X))]. (4)
The formula (4) is a powerful and general way to learn the distribution P from its i.i.d. observations. It is known as f-GAN (Nowozin et al., 2016), an extension of GAN (Goodfellow et al., 2014), which stands for generative adversarial networks. The idea is to find a P̂ so that the best discriminator T in the class 𝒯 cannot tell the difference between P̂ and the empirical distribution (1/n) Σ_{i=1}^n δ_{X_i}.
2.1 f-LEARNING: A UNIFIED FRAMEWORK
Our f-Learning framework is based on a special case of the variational lower bound (3). That is,
D_f(P‖Q) ≥ sup_{Q̃∈𝒬̃_Q} [E_P f′(q̃(X)/q(X)) − E_Q f*(f′(q̃(X)/q(X)))], (5)
where q̃(·) stands for the density function of Q̃. Note that here we allow the class 𝒬̃_Q to depend on the distribution Q in the second argument of D_f(P‖Q). Comparing (5) with (3), it is easy to see that (5) is a special case of (3) with
𝒯_Q = {f′(q̃/q) : q̃ ∈ 𝒬̃_Q}. (6)
Moreover, the inequality (5) becomes an equality as long as P ∈ 𝒬̃_Q. The sample version of (5) leads to the following learning method:
P̂ = argmin_{Q∈𝒬} sup_{Q̃∈𝒬̃_Q} [(1/n) Σ_{i=1}^n f′(q̃(X_i)/q(X_i)) − E_Q f*(f′(q̃(X)/q(X)))]. (7)
The learning method (7) will be referred to as f-Learning in the sequel. It is a very general framework that covers many important learning procedures as special cases. For example, consider the special case where 𝒬̃_Q = 𝒬̃ is independent of Q, 𝒬 = 𝒬̃, and f(x) = x log x. Direct calculations give f′(x) = log x + 1 and f*(t) = e^{t−1}. The conjugate term is then constant, since E_Q f*(f′(q̃(X)/q(X))) = E_Q[q̃(X)/q(X)] = 1 for any pair of densities. Therefore, (7) becomes
P̂ = argmin_{Q∈𝒬} sup_{Q̃∈𝒬} (1/n) Σ_{i=1}^n log(q̃(X_i)/q(X_i)) = argmax_{q∈𝒬} (1/n) Σ_{i=1}^n log q(X_i),
which is the maximum likelihood estimator (MLE).
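To make the MLE reduction concrete, the following small numerical check (our illustration, not part of the paper) verifies by Monte Carlo that the conjugate term E_Q f*(f′(q̃(X)/q(X))) = E_Q[q̃(X)/q(X)] equals 1, so that only the average log-likelihood ratio matters in (7). The two Gaussians chosen here are arbitrary members of a location family.

```python
import numpy as np
from scipy.stats import norm

# For f(x) = x log x: f'(x) = log x + 1 and f*(t) = e^{t-1}, so
# f*(f'(q_tilde/q)) = q_tilde/q, whose expectation under Q is always 1.
q, q_tilde = norm(loc=0.0), norm(loc=0.7)       # two members of the location family
Xq = q.rvs(size=200000, random_state=0)         # samples from Q
ratio = q_tilde.pdf(Xq) / q.pdf(Xq)
conjugate_term = np.exp((np.log(ratio) + 1) - 1)  # f*(f'(ratio)), written out
print(conjugate_term.mean())                     # ~ 1.0 up to Monte Carlo error
```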
2.2 TV-LEARNING AND DEPTH-BASED ESTIMATORS
An important generator f that we will discuss here is f(x) = (x − 1)_+. This leads to the total variation distance D_f(P‖Q) = (1/2)∫|p − q|. With f′(x) = I{x ≥ 1} and f*(t) = t·I{0 ≤ t ≤ 1}, TV-Learning is given by
P̂ = argmin_{Q∈𝒬} sup_{Q̃∈𝒬̃_Q} [(1/n) Σ_{i=1}^n I{q̃(X_i)/q(X_i) ≥ 1} − Q(q̃/q ≥ 1)]. (8)
A closely related idea was previously explored by Yatracos (1985) and Devroye & Lugosi (2012). The following proposition shows that when 𝒬̃_Q approaches 𝒬 in some neighborhood, TV-Learning leads to robust estimators that are defined as the maximizers of various depth functions, including Tukey's depth, regression depth, and covariance depth.
Proposition 2.1. TV-Learning (8) includes the following special cases:
1. Tukey's halfspace depth: Take 𝒬 = {N(η, I_p) : η ∈ R^p} and 𝒬̃_η = {N(η̃, I_p) : ‖η̃ − η‖ ≤ r}. As r → 0, (8) becomes
θ̂ = argmax_{η∈R^p} inf_{‖u‖=1} (1/n) Σ_{i=1}^n I{u^T(X_i − η) ≥ 0}. (9)
2. Regression depth: Take 𝒬 = {P_{y,X} = P_{y|X} P_X : P_{y|X} = N(X^T η, 1), η ∈ R^p} and 𝒬̃_η = {P_{y,X} = P_{y|X} P_X : P_{y|X} = N(X^T η̃, 1), ‖η̃ − η‖ ≤ r}. As r → 0, (8) becomes
θ̂ = argmax_{η∈R^p} inf_{‖u‖=1} (1/n) Σ_{i=1}^n I{u^T X_i (y_i − X_i^T η) ≥ 0}. (10)
3. Covariance matrix depth: Take 𝒬 = {N(0, Γ) : Γ ∈ E_p}, where E_p stands for the class of p × p covariance matrices, and 𝒬̃_Γ = {N(0, Γ̃) : Γ̃^{−1} = Γ^{−1} + r̃uu^T ∈ E_p, |r̃| ≤ r, ‖u‖ = 1}. As r → 0, (8) becomes
Σ̂ = argmin_{Γ∈E_p} sup_{‖u‖=1} [((1/n) Σ_{i=1}^n I{|u^T X_i|² ≤ u^T Γu} − P(χ²₁ ≤ 1)) ∨ ((1/n) Σ_{i=1}^n I{|u^T X_i|² > u^T Γu} − P(χ²₁ > 1))]. (11)
The formula (9) is recognized as Tukey's median, the maximizer of Tukey's halfspace depth. A traditional understanding of Tukey's median is that (9) maximizes the halfspace depth (Donoho & Gasko, 1992), so that θ̂ is close to the centers of all one-dimensional projections of the data. In the f-Learning framework, N(θ̂, I_p) is understood to be the minimizer of a variational lower bound of the total variation distance. The formula (10) gives the estimator that maximizes the regression depth proposed by Rousseeuw & Hubert (1999). It is worth noting that the derivation of (10) does not depend on the marginal distribution P_X in the linear regression model. Finally, (11) is related to the covariance matrix depth (Zhang, 2002; Chen et al., 2018; Paindaveine & Van Bever, 2017). All of the estimators (9), (10) and (11) are proved to achieve the minimax rate for the corresponding problems under Huber's ε-contamination model (Chen et al., 2018; Gao, 2017).
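As a concrete illustration of (9), here is a minimal NumPy sketch (not from the paper) that approximates Tukey's median by Monte Carlo: the inner infimum over directions u is replaced by a minimum over randomly sampled unit vectors, and the outer maximization is restricted to candidate locations taken from the data. This brute-force approach is only practical in very low dimensions, which is exactly the computational bottleneck motivating the GAN formulation.

```python
import numpy as np

def halfspace_depth(eta, X, n_dirs=500, rng=None):
    # Monte Carlo approximation of Tukey's halfspace depth at eta:
    # min over unit directions u of the fraction with u^T (X_i - eta) >= 0.
    rng = np.random.default_rng(rng)
    U = rng.normal(size=(n_dirs, X.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    side = (X - eta) @ U.T >= 0            # (n, n_dirs) indicator matrix
    return side.mean(axis=0).min()

def tukey_median(X, **kw):
    # Crude maximizer of (9): restrict candidates to the data points themselves.
    depths = np.array([halfspace_depth(x, X, **kw) for x in X])
    return X[depths.argmax()]

# demo under epsilon-contamination
rng = np.random.default_rng(0)
n, p, eps = 500, 2, 0.2
X = rng.normal(size=(n, p))
X[: int(eps * n)] += 10.0                  # contamination far from the truth
print(tukey_median(X, rng=1))              # close to the true mean 0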
2.3 FROM f-LEARNING TO f-GAN
The connection to various depth functions shows the importance of TV-Learning in robust estimation. However, it is well known that depth-based estimators are very hard to compute (Amenta et al., 2000; van Kreveld et al., 1999; Rousseeuw & Struyf, 1998), which limits their application to very low-dimensional problems. On the other hand, the general f-GAN framework (4) has been successfully applied to learn complex distributions and images in practice (Goodfellow et al., 2014; Radford et al., 2015; Salimans et al., 2016). The major difference that gives the computational advantage to f-GAN is its flexibility in designing the discriminator class 𝒯 using neural networks, compared with the pre-specified choice (6) in f-Learning. While f-Learning provides a unified perspective for understanding various depth-based procedures in robust estimation, we can step back to the more general f-GAN for its computational advantages and design efficient computational strategies.
3 ROBUST MEAN ESTIMATION VIA GAN
In this section, we focus on the problem of robust mean estimation under Huber's ε-contamination model. Our goal is to reveal how the choice of the class of discriminators affects robustness and statistical optimality under the simplest possible setting. That is, we have i.i.d. observations X_1, ..., X_n ∼ (1 − ε)N(θ, I_p) + εQ, and we need to estimate the unknown location θ ∈ R^p from the contaminated data. Our goal is to achieve the minimax rate p/n ∨ ε² with respect to the squared ℓ2 loss uniformly over all θ ∈ R^p and all Q.
3.1 RESULTS FOR TV-GAN
We start with the total variation GAN (TV-GAN), with f(x) = (x − 1)_+ in (4). For the Gaussian location family, (4) can be written as
θ̂ = argmin_{η∈R^p} max_{D∈𝒟} [(1/n) Σ_{i=1}^n D(X_i) − E_{N(η,I_p)} D(X)], (12)
with T(x) = D(x) in (4). Now we need to specify the class of discriminators 𝒟 to solve the classification problem between N(η, I_p) and the empirical distribution (1/n) Σ_{i=1}^n δ_{X_i}. One of the simplest discriminator classes is the logistic regression class,
𝒟 = {D(x) = sigmoid(w^T x + b) : w ∈ R^p, b ∈ R}. (13)
With D(x) = sigmoid(w^T x + b) = (1 + e^{−w^T x − b})^{−1} in (13), the procedure (12) can be viewed as a smoothed version of TV-Learning (8). To be specific, the sigmoid function sigmoid(w^T x + b) tends to an indicator function as ‖w‖ → ∞, which leads to a procedure very similar to (9). In fact, the class (13) is richer than the one used in (9), and thus (12) can be understood as the minimizer of a sharper variational lower bound than that of (9).
Theorem 3.1. Assume p/n + ε² ≤ c for some sufficiently small constant c > 0. With i.i.d. observations X_1, ..., X_n ∼ (1 − ε)N(θ, I_p) + εQ, the estimator θ̂ defined by (12) satisfies
‖θ̂ − θ‖² ≤ C (p/n ∨ ε²),
with probability at least 1 − e^{−C′(p + nε²)} uniformly over all θ ∈ R^p and all Q. The constants C, C′ > 0 are universal.
Though TV-GAN can achieve the minimax rate p/n ∨ ε² under Huber's contamination model, it may suffer from optimization difficulties, especially when the distributions Q and N(θ, I_p) are far away from each other, as shown in Figure 1.
3.2 RESULTS FOR JS-GAN
Given the intractable optimization properties of TV-GAN, we next turn to the Jensen-Shannon GAN (JS-GAN), with f(x) = x log x − (x + 1) log((x + 1)/2). The estimator is defined by
θ̂ = argmin_{η∈R^p} max_{D∈𝒟} [(1/n) Σ_{i=1}^n log D(X_i) + E_{N(η,I_p)} log(1 − D(X))] + log 4, (14)
with T(x) = log D(x) in (4). This is exactly the original GAN (Goodfellow et al., 2014) specialized to the normal mean estimation problem. The advantages of JS-GAN over other forms of GAN have been studied extensively in the literature (Lucic et al., 2017; Kurach et al., 2018).
Unlike TV-GAN, our experimental results show that (14) with the logistic regression discriminator class (13) is not robust to contamination. However, if we replace (13) by a neural network class with one or more hidden layers, the estimator becomes robust and also works very well numerically. To understand why and how the class of discriminators affects the robustness property of JS-GAN, we introduce a new concept called the restricted Jensen-Shannon divergence. Let g : R^p → R^d be a function that maps a p-dimensional observation to a d-dimensional feature space. The restricted Jensen-Shannon divergence between two probability distributions P and Q with respect to the feature g is defined as
JS_g(P, Q) = max_{w∈W} [E_P log sigmoid(w^T g(X)) + E_Q log(1 − sigmoid(w^T g(X)))] + log 4.
In other words, P and Q are distinguished by a logistic regression classifier that uses the feature g(X). It is easy to see that JS_g(P, Q) is a variational lower bound of the original Jensen-Shannon divergence. The key property of JS_g(P, Q) is given by the following proposition.
Proposition 3.1. Assume W is a convex set that contains an open neighborhood of 0. Then JS_g(P, Q) = 0 if and only if E_P g(X) = E_Q g(X).
The proposition asserts that JS_g(·, ·) cannot distinguish P and Q if the feature g(X) has the same expected value under the two distributions. This generalized moment-matching effect has also been studied by Liu et al. (2017) for general f-GANs. However, the linear discriminator class considered in Liu et al. (2017) is parametrized in a different way from the discriminator class here. When we apply Proposition 3.1 to robust mean estimation, JS-GAN tries to match the values of (1/n) Σ_{i=1}^n g(X_i) and E_{N(η,I_p)} g(X) for the feature g(X) used in the logistic regression classifier. This explains what we observed in our numerical experiments. A neural net without any hidden layer is equivalent to a logistic regression with the linear feature g(X) = (X^T, 1)^T ∈ R^{p+1}. Therefore, whenever η = (1/n) Σ_{i=1}^n X_i, we have JS_g((1/n) Σ_{i=1}^n δ_{X_i}, N(η, I_p)) = 0, which implies that the sample mean is a global minimizer of (14), and hence JS-GAN with a linear discriminator is not robust.
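The following sketch (our illustration, assuming scikit-learn is available) demonstrates this failure mode numerically. With the linear feature g(X) = (X^T, 1)^T, the empirical restricted JS divergence between contaminated data and N(η, I_p) is estimated by fitting a logistic regression that labels real versus generated samples; at η equal to the non-robust sample mean, the estimate is close to zero, so the linear discriminator cannot flag the contamination.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def restricted_js(real, fake):
    # Empirical JS_g with linear feature g(X) = (X, 1): a logistic regression
    # (with intercept) plays the role of the best discriminator over W.
    X = np.vstack([real, fake])
    y = np.r_[np.ones(len(real)), np.zeros(len(fake))]
    clf = LogisticRegression(C=1e4, max_iter=2000).fit(X, y)  # ~ unregularized
    prob = clf.predict_proba(X)[:, 1]
    return (np.log(prob[y == 1]).mean()
            + np.log(1 - prob[y == 0]).mean() + np.log(4))

rng = np.random.default_rng(0)
n, p, eps = 2000, 5, 0.2
data = rng.normal(size=(n, p))
data[: int(eps * n)] += 5.0                 # contamination N(5 * 1_p, I_p)
eta = data.mean(axis=0)                     # non-robust sample mean
fake = eta + rng.normal(size=(n, p))        # samples from N(eta, I_p)
print(restricted_js(data, fake))            # ~ 0: contamination is invisible
```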
On the other hand, a neural net with at least one hidden layer involves a nonlinear feature function g(X), which is the key that leads to the robustness of (14). We will show rigorously that a neural net with one hidden layer is sufficient to make (14) robust and optimal. Consider the following class of discriminators:
𝒟 = {D(x) = sigmoid(Σ_{j≥1} w_j σ(u_j^T x + b_j)) : Σ_{j≥1} |w_j| ≤ κ, u_j ∈ R^p, b_j ∈ R}. (15)
The class (15) consists of two-layer neural network functions. While the dimension of the input layer is p, the dimension of the hidden layer can be arbitrary, as long as the weights have a bounded ℓ1 norm. The nonlinear activation function σ(·) is allowed to be: 1) indicator: σ(x) = I{x ≥ 1}; 2) sigmoid: σ(x) = 1/(1 + e^{−x}); 3) ramp: σ(x) = max(min(x + 1/2, 1), 0). Other bounded activation functions are also possible, but we do not list them exhaustively. The rectified linear unit (ReLU) will be studied in Appendix A.
Theorem 3.2. Consider the estimator θ̂ defined by (14) with 𝒟 specified by (15). Assume p/n + ε² ≤ c for some sufficiently small constant c > 0, and set κ = O(√(p/n) + ε). With i.i.d. observations X_1, ..., X_n ∼ (1 − ε)N(θ, I_p) + εQ, we have
‖θ̂ − θ‖² ≤ C (p/n ∨ ε²),
with probability at least 1 − e^{−C′(p + nε²)} uniformly over all θ ∈ R^p and all Q. The constants C, C′ > 0 are universal.
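Below is a minimal PyTorch sketch (our illustration, not the authors' released code) of the discriminator class (15): a single hidden layer with a bounded activation, followed by an output layer whose weights are rescaled after each gradient step so that the constraint ‖w‖₁ ≤ κ holds. A proper projection onto the ℓ1 ball would also work; simple rescaling keeps the sketch short.

```python
import torch
import torch.nn as nn

class DepthDiscriminator(nn.Module):
    # One-hidden-layer discriminator from class (15):
    # D(x) = sigmoid(sum_j w_j * sigma(u_j^T x + b_j)),  with sum_j |w_j| <= kappa.
    def __init__(self, p, hidden=20, kappa=1.0):
        super().__init__()
        self.hidden = nn.Linear(p, hidden)   # rows are u_j, biases are b_j
        self.w = nn.Parameter(torch.randn(hidden) * 0.05)
        self.kappa = kappa

    def forward(self, x):
        h = torch.sigmoid(self.hidden(x))    # sigmoid activation from the allowed list
        return torch.sigmoid(h @ self.w)

    @torch.no_grad()
    def project(self):
        # Call after each optimizer step to enforce ||w||_1 <= kappa.
        norm = self.w.abs().sum()
        if norm > self.kappa:
            self.w.mul_(self.kappa / norm)
```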
4 ELLIPTICAL DISTRIBUTIONS
An advantage of Tukey's median (9) is that it leads to optimal robust location estimation under general elliptical distributions, such as the Cauchy distribution, whose mean does not exist. In this section, we show that JS-GAN shares the same property. A random vector X ∈ R^p follows an elliptical distribution if it admits the representation
X = θ + ξAU,
where U is uniformly distributed on the unit sphere {u ∈ R^p : ‖u‖ = 1} and ξ ≥ 0 is a random variable independent of U that determines the shape of the elliptical distribution (Fang, 2017). The center and the scatter matrix are θ and Σ = AA^T. For a unit vector v, let the density function of ξv^T U be h. Note that h is independent of v because of the symmetry of U. Then there is a one-to-one relation between the distribution of ξ and h, and thus the triplet (θ, Σ, h) fully parametrizes an elliptical distribution. Note that h and Σ = AA^T are not identifiable, because ξA = (cξ)(c^{−1}A) for any c > 0. Therefore, without loss of generality, we can restrict h to be a member of the following class:
H = {h : h(t) = h(−t), h ≥ 0, ∫h = 1, ∫σ(t)(1 − σ(t))h(t)dt = 1}.
This makes the parametrization (θ, Σ, h) of an elliptical distribution fully identifiable, and we use EC(θ, Σ, h) to denote an elliptical distribution parametrized in this way. The JS-GAN estimator is defined as
(θ̂, Σ̂, ĥ) = argmin_{η∈R^p, Γ∈E_p(M), g∈H} max_{D∈𝒟} [(1/n) Σ_{i=1}^n log D(X_i) + E_{EC(η,Γ,g)} log(1 − D(X))] + log 4, (16)
where E_p(M) is the set of all positive semi-definite matrices with spectral norm bounded by M.
Theorem 4.1. Consider the estimator θ̂ defined above with 𝒟 specified by (15). Assume M = O(1), p/n + ε² ≤ c for some sufficiently small constant c > 0, and set κ = O(√(p/n) + ε). With i.i.d. observations X_1, ..., X_n ∼ (1 − ε)EC(θ, Σ, h) + εQ, we have
‖θ̂ − θ‖² ≤ C (p/n ∨ ε²),
with probability at least 1 − e^{−C′(p + nε²)} uniformly over all θ ∈ R^p, Σ ∈ E_p(M), and all Q. The constants C, C′ > 0 are universal.
Remark 4.1. The result of Theorem 4.1 also holds (and is proved) under the strong contamination model (Diakonikolas et al., 2016a). That is, we have i.i.d. observations X_1, ..., X_n ∼ P for some P satisfying TV(P, EC(θ, Σ, h)) ≤ ε. See its proof in Appendix D.2.
Note that Theorem 4.1 guarantees the same convergence rate as in the Gaussian case for all elliptical distributions. This even includes the multivariate Cauchy, whose mean does not exist. Therefore, the location estimator (16) is fundamentally different from Diakonikolas et al. (2016a); Lai et al. (2016), which are designed only for robust mean estimation. We will show this difference in our numerical results. To achieve rate-optimality for robust location estimation under general elliptical distributions, the estimator (16) differs from (14) only in the generator class; the two share the same discriminator class (15). This highlights an important principle for designing GAN estimators: the overall statistical complexity of the estimator is determined only by the discriminator class. The estimator (16) also outputs (Σ̂, ĥ), but we do not claim any theoretical property for (Σ̂, ĥ) in this paper. This will be systematically studied in a future project.
5 NUMERICAL EXPERIMENTS
In this section, we give extensive numerical studies of robust mean estimation via GAN. After introducing the implementation details in Section 5.1, we verify our theoretical results on minimax estimation with both TV-GAN and JS-GAN in Section 5.2. A comparison with other methods for robust mean estimation in the literature is given in Section 5.3. The effects of various network structures are studied in Section 5.4. Adaptation to unknown covariance is studied in Section 5.5. In all these cases, we assume i.i.d. observations are drawn from (1 − ε)N(0_p, I_p) + εQ, with ε and Q to be specified. Finally, adaptation to elliptical distributions is studied in Section 5.6.
5.1 IMPLEMENTATIONS
We adopt the standard algorithmic framework of f-GANs (Nowozin et al., 2016) for the implementation of JS-GAN and TV-GAN for robust mean estimation. In particular, the generator for mean estimation is G_η(Z) = Z + η with Z ∼ N(0_p, I_p); the discriminator D is a multilayer perceptron (MLP), where each layer consists of a linear map and a sigmoid activation function; the number of nodes varies across the experiments specified below. Details related to the algorithms, tuning, critical hyper-parameters, structures of the discriminator networks, and other training tricks for stabilization and acceleration are discussed in Appendix B.1. A PyTorch implementation is available at https://github.com/zhuwzh/Robust-GAN-Center.
5.2 NUMERICAL SUPPORT FOR THE MINIMAX RATES
We verify the minimax rates achieved by TV-GAN (Theorem 3.1) and JS-GAN (Theorem 3.2) via numerical experiments. The two main scenarios we consider are √(p/n) < ε and √(p/n) > ε, where in both cases various types of contamination distributions Q are considered. Specifically, the choices of contamination distribution Q include N(µ·1_p, I_p) with µ ranging over {0.2, 0.5, 1, 5}, N(0.5·1_p, Σ), and Cauchy(τ·1_p). Details of the construction of the covariance matrix Σ are given in Appendix B.2. The distribution Cauchy(τ·1_p) is obtained by combining p independent one-dimensional standard Cauchy components with location parameter τ_j = 0.5. The main experimental results are summarized in Figure 2, where the ℓ2 error we present is the maximum error among all choices of Q; detailed numerical results can be found in Tables 7, 8 and 9 in the Appendix. We separately explore the relation between the error and each of ε, √p, and 1/√n, with the other two parameters fixed. The study of the relation between the ℓ2 error and ε is in the regime √(p/n) < ε, so that ε dominates the minimax rate.
The scenario √(p/n) > ε is considered in the study of the effects of √p and 1/√n. As shown in Figure 2, the errors are approximately linear in the corresponding parameters in all cases, which empirically verifies the conclusions of Theorems 3.1 and 3.2.
5.3 COMPARISONS WITH OTHER METHODS
We perform additional experiments to compare with other methods, including dimension halving (Lai et al., 2016) and iterative filtering (Diakonikolas et al., 2017), under various settings. We emphasize that our method does not require any knowledge of the nuisance parameters, such as the contamination proportion ε. Tuning a GAN is only a matter of optimization, and one can tune parameters based on the objective function alone. Table 1 shows the performance of JS-GAN, TV-GAN, dimension halving, and iterative filtering. The network structure, for both JS-GAN and TV-GAN, has one hidden layer with 20 hidden units when the sample size is 50,000 and 2 hidden units when the sample size is 5,000. The critical hyper-parameters we use are given in the Appendix, and it turns out that the choice of hyper-parameters is robust across different models when the network structures are the same. To summarize, our method outperforms the other algorithms in most cases. TV-GAN does well when Q and N(0_p, I_p) are non-separable but fails when Q is far away from N(0_p, I_p), due to the optimization issues discussed in Section 3.1 (Figure 1). On the other hand, JS-GAN stably achieves the lowest error in separable cases and also shows competitive performance in non-separable ones.
5.4 NETWORK STRUCTURES
We further study the performance of JS-GAN with various neural network structures. The main observation is that tuning networks with one hidden layer becomes difficult as the dimension grows (e.g., p ≥ 200), while a deeper network can significantly improve the situation, perhaps by improving the optimization landscape. Some experimental results are given in Table 2. On the other hand, a one-hidden-layer network performs no worse than deeper networks when the dimension is not very large (e.g., p ≤ 100). More experiments are given in Appendix B.4. Additional theoretical results for deep neural nets are given in Appendix A.
5.5 ADAPTATION TO UNKNOWN COVARIANCE
The robust mean estimator constructed through JS-GAN can easily be made adaptive to an unknown covariance structure, which is a special case of (16). We define
(θ̂, Σ̂) = argmin_{η∈R^p, Γ∈E_p} max_{D∈𝒟} [(1/n) Σ_{i=1}^n log D(X_i) + E_{N(η,Γ)} log(1 − D(X))] + log 4.
The estimator θ̂, as a result, is rate-optimal even when the true covariance matrix is not the identity and is unknown (see Theorem 4.1). Below, we present numerical evidence of the optimality of θ̂, as well as the error of Σ̂, in Table 3.
5.6 ADAPTATION TO ELLIPTICAL DISTRIBUTIONS
We consider the estimation of the location parameter θ in the elliptical distribution EC(θ, Σ, h) by the JS-GAN defined in (16). In particular, we study the case with i.i.d. observations X_1, ..., X_n ∼ (1 − ε)Cauchy(θ, I_p) + εQ. The density function of Cauchy(θ, Σ) is given by
p(x; θ, Σ) ∝ |Σ|^{−1/2} (1 + (x − θ)^T Σ^{−1} (x − θ))^{−(1+p)/2}.
Compared with Algorithm 1, the difference lies in the choice of the generator. We consider the generator G1(ξ, U) = g_ω(ξ)U + θ, where g_ω(ξ) is a non-negative neural network parametrized by ω and fed some random variable ξ. The random vector U is sampled from the uniform distribution on {u ∈ R^p : ‖u‖ = 1}. If the scatter matrix is unknown, we use the generator G2(ξ, U) = g_ω(ξ)AU + θ, with AA^T modeling the scatter matrix.
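A minimal PyTorch sketch of the generator G1 follows (our illustration, under our own architectural choices: the radial network g_ω is a small Softplus-headed MLP fed standard Gaussian noise ξ). Replacing U with AU for a learnable matrix A yields G2.

```python
import torch
import torch.nn as nn

class EllipticalGenerator(nn.Module):
    # G1(xi, U) = g_omega(xi) * U + theta: U uniform on the unit sphere,
    # g_omega a non-negative scalar network modeling the radial part.
    def __init__(self, p, hidden=16):
        super().__init__()
        self.theta = nn.Parameter(torch.zeros(p))
        self.g = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1), nn.Softplus())
        self.p = p

    def forward(self, m):
        U = torch.randn(m, self.p)
        U = U / U.norm(dim=1, keepdim=True)   # uniform direction on the sphere
        xi = torch.randn(m, 1)                # base randomness for the radius
        return self.g(xi) * U + self.theta
```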
Table 4 shows the comparison with other methods. Our method still works well under the Cauchy distribution, while the performance of the other methods, which rely on moment conditions, deteriorates in this setting.
ACKNOWLEDGEMENT
The research of Chao Gao was supported in part by NSF grant DMS-1712957 and NSF Career Award DMS-1847590. The research of Yuan Yao was supported in part by Hong Kong Research Grant Council (HKRGC) grant 16303817, National Basic Research Program of China (No. 2015CB85600), National Natural Science Foundation of China (No. 61370004, 11421110001), as well as awards from Tencent AI Lab, Si Family Foundation, Baidu Big Data Institute, and Microsoft Research-Asia.
A ADDITIONAL THEORETICAL RESULTS
In this section, we investigate the performance of discriminator classes of deep neural nets with the ReLU activation function. Since our goal is to learn a p-dimensional mean vector, a deep neural network discriminator without any regularization will certainly lead to overfitting. Therefore, it is crucial to design a network class with appropriate regularization. Inspired by the work of Bartlett (1997) and Bartlett & Mendelson (2002), we consider a network class with ℓ1 regularization on all layers except for the second-to-last layer, which carries an ℓ2 regularization. With
G^H_1(B) = {g(x) = ReLU(v^T x) : ‖v‖₁ ≤ B},
a neural network class with l + 1 layers is defined recursively as
G^H_{l+1}(B) = {g(x) = ReLU(Σ_{h=1}^H v_h g_h(x)) : Σ_{h=1}^H |v_h| ≤ B, g_h ∈ G^H_l(B)}.
Combining with the last sigmoid layer, we obtain the following discriminator class:
F^H_L(κ, τ, B) = {D(x) = sigmoid(Σ_{j≥1} w_j sigmoid(Σ_{h=1}^{2p} u_{jh} g_{jh}(x) + b_j)) : Σ_{j≥1} |w_j| ≤ κ, Σ_{h=1}^{2p} u²_{jh} ≤ 2, |b_j| ≤ τ, g_{jh} ∈ G^H_{L−1}(B)}.
Note that all the activation functions are ReLU(·), except that we use sigmoid(·) in the last layer of the feature map g(·). A theoretical guarantee for the class defined above is given by the following theorem.
Theorem A.1. Assume (p log p)/n ∨ ε² ≤ c for some sufficiently small constant c > 0. Consider i.i.d. observations X_1, ..., X_n ∼ (1 − ε)N(θ, I_p) + εQ and the estimator θ̂ defined by (14) with 𝒟 = F^H_L(κ, τ, B), where H ≥ 2p, 2 ≤ L = O(1), 2 ≤ B = O(1), and τ = √(p log p). We set κ = O(√((p log p)/n) + ε). Then, for the estimator θ̂ defined by (14) with 𝒟 = F^H_L(κ, τ, B), we have
‖θ̂ − θ‖² ≤ C ((p log p)/n ∨ ε²),
with probability at least 1 − e^{−C′(p log p + nε²)} uniformly over all θ ∈ R^p such that ‖θ‖∞ ≤ √(log p) and all Q.
The theorem shows that JS-GAN with a deep ReLU network can achieve the error rate (p log p)/n ∨ ε² with respect to the squared ℓ2 loss. The condition ‖θ‖∞ ≤ √(log p) for the ReLU network can easily be satisfied with a simple preprocessing step. We split the data into two parts, of sizes log n and n − log n, respectively. Then we calculate the coordinatewise median θ̃ using the smaller part. It is easy to show that ‖θ̃ − θ‖∞ ≲ √((log p)/(log n)) ∨ ε with high probability. Then, for each X_i from the second part, the conditional distribution of X_i − θ̃ given the first part is (1 − ε)N(θ − θ̃, I_p) + εQ̃. Since √((log p)/(log n)) ∨ ε ≤ √(log p), the condition ‖θ − θ̃‖∞ ≤ √(log p) is satisfied, and thus we can apply the estimator (14) using the shifted data X_i − θ̃ from the second part. The theoretical guarantee of Theorem A.1 then becomes
‖θ̂ − (θ − θ̃)‖² ≤ C ((p log p)/n ∨ ε²),
with high probability. Hence, we can use θ̂ + θ̃ as the final estimator to achieve the same rate as in Theorem A.1. On the other hand, our experiments show that this preprocessing step is not needed.
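For completeness, here is a short NumPy sketch (not from the paper) of the preprocessing step just described: a coordinatewise median computed on a small split of the data centers the remaining observations before running the GAN, and is added back to the final estimate.

```python
import numpy as np

def median_shift_preprocess(X, rng=None):
    # Estimate a crude coordinatewise median on a split of size ~ log n,
    # then center the remaining data so that the shifted location satisfies
    # ||theta - theta_tilde||_inf <= sqrt(log p) with high probability.
    rng = np.random.default_rng(rng)
    n = len(X)
    m = max(int(np.log(n)), 1)
    idx = rng.permutation(n)
    theta_tilde = np.median(X[idx[:m]], axis=0)
    # Run the GAN on X_shifted, then report (theta_hat + theta_tilde).
    return X[idx[m:]] - theta_tilde, theta_tilde
```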
We believe that the assumption ‖θ‖∞ ≤ √(log p) is a technical artifact of the analysis of the Rademacher complexity. It can probably be dropped by a more careful analysis.
B DETAILS OF EXPERIMENTS
B.1 TRAINING DETAILS
The implementation of JS-GAN is given in Algorithm 1; a simple modification of the objective function leads to that of TV-GAN.
Algorithm 1 JS-GAN: argmin_η max_w [(1/n) Σ_{i=1}^n log D_w(X_i) + E log(1 − D_w(G_η(Z)))]
Input: observation set S = {X_1, ..., X_n} ⊂ R^p, discriminator network D_w(x), generator network G_η(z) = z + η, learning rates γ_d and γ_g for the discriminator and the generator, batch size m, discriminator steps per iteration K, total epochs T, average epochs T_0.
Initialization: Initialize η with the coordinatewise median of S. Initialize w with N(0, 0.05) independently on each element, or with Xavier initialization (Glorot & Bengio, 2010).
1: for t = 1, ..., T do
2:   for k = 1, ..., K do
3:     Sample a mini-batch {X_1, ..., X_m} from S. Sample {Z_1, ..., Z_m} from N(0, I_p).
4:     g_w ← ∇_w [(1/m) Σ_{i=1}^m log D_w(X_i) + (1/m) Σ_{i=1}^m log(1 − D_w(G_η(Z_i)))]
5:     w ← w + γ_d g_w
6:   end for
7:   Sample {Z_1, ..., Z_m} from N(0, I_p).
8:   g_η ← ∇_η [(1/m) Σ_{i=1}^m log(1 − D_w(G_η(Z_i)))]
9:   η ← η − γ_g g_η
10: end for
Return: the average of η over the last T_0 epochs.
Several important implementation details are discussed below.
• How to tune parameters? The choice of learning rates is crucial to the convergence rate, but the minimax game is hard to evaluate. We propose a simple strategy for tuning hyper-parameters, including the learning rates. Suppose we have estimators θ̂_1, ..., θ̂_M with corresponding discriminator networks D_{ŵ_1}, ..., D_{ŵ_M}. Fixing η = θ̂, we further apply gradient descent to D_w for a few more epochs (but not many, in order to prevent overfitting; for example, 10 epochs) and select the θ̂ with the smallest value of the objective function (14) (JS-GAN) or (12) (TV-GAN). We note that training the discriminator and the generator alternately usually does not suffer from overfitting, since the objective function for either the discriminator or the generator is always changing. However, we must be careful about overfitting when training the discriminator alone with a fixed η, which is why we apply an early stopping strategy here. Fortunately, the experiments show that if the structures of the networks are the same (and then, of course, the dimensions of the inputs are the same), the choices of hyper-parameters are robust across different models; we present the critical parameters in Table 5 to reproduce the experimental results in Tables 1 and 2.
• When to stop training? Judging convergence is a difficult task in GAN training, since oscillation may occur. In computer vision, people often use a task-related measure and stop training once the requirement based on that measure is achieved. In our experiments below, we simply use a sufficiently large T, which works well, but it would still be interesting to explore an efficient early stopping rule in future work.
• How to design the network structure? Although Theorem 3.1 and Theorem 3.2 guarantee the minimax rates of TV-GAN without a hidden layer and JS-GAN with one hidden layer, one may wonder whether deeper network structures perform better. From our preliminary experiments, TV-GAN with one hidden layer is significantly better than TV-GAN without any hidden layer. Moreover, JS-GAN …
(Table 5 excerpt: JS-GAN | 200-200-100-1 | JS | 0.005 | 0.1 | 2 | 200 | 25 | 0.)
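A compact PyTorch sketch of Algorithm 1 follows. It is an illustrative re-implementation rather than the released code at the repository above; the default hyper-parameters are placeholders, not the tuned values of Table 5, and the ℓ1 constraint on the output weights is omitted for brevity.

```python
import torch
import torch.nn as nn

def js_gan_mean(X, hidden=20, K=2, T=200, T0=25, m=256, lr_d=5e-3, lr_g=0.1):
    # Minimal sketch of Algorithm 1 for robust mean estimation via JS-GAN.
    n, p = X.shape
    D = nn.Sequential(nn.Linear(p, hidden), nn.Sigmoid(),
                      nn.Linear(hidden, 1), nn.Sigmoid())
    eta = X.median(dim=0).values.clone().requires_grad_(True)  # median init
    opt_d = torch.optim.SGD(D.parameters(), lr=lr_d)
    opt_g = torch.optim.SGD([eta], lr=lr_g)
    trace = []
    for _ in range(T):
        for _ in range(K):                                  # K discriminator steps
            Xb = X[torch.randint(n, (m,))]
            Z = torch.randn(m, p)
            loss_d = -(torch.log(D(Xb)).mean()
                       + torch.log(1 - D(eta + Z)).mean())  # ascend JS objective
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        Z = torch.randn(m, p)                               # one generator step
        loss_g = torch.log(1 - D(eta + Z)).mean()
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        trace.append(eta.detach().clone())
    return torch.stack(trace[-T0:]).mean(dim=0)             # average last T0 epochs
```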
B.2 SETTINGS OF CONTAMINATION Q
We introduce the contamination distributions Q used in the experiments. We first consider Q = N(µ, I_p) with µ ranging over {0.2, 0.5, 1, 5}. Note that the total variation distance between N(0_p, I_p) and N(µ, I_p) is of order ‖0_p − µ‖ = ‖µ‖. We use different levels of ‖µ‖ to test the algorithm and verify the error rate in the worst case. Second, we consider Q = N(1.5·1_p, Σ), a Gaussian distribution with a non-trivial covariance matrix Σ. The covariance matrix is generated according to the following steps. First, generate a sparse precision matrix Γ = (γ_ij) with each entry γ_ij = z_ij · τ_ij, i ≤ j, where z_ij and τ_ij are independently generated from Uniform(0.4, 0.8) and Bernoulli(0.1). We then set γ_ij = γ_ji for all i > j and Γ̄ = Γ + (|min eig(Γ)| + 0.05)I_p to make the precision matrix symmetric and positive definite, where min eig(Γ) is the smallest eigenvalue of Γ. The covariance matrix is Σ = Γ̄^{−1}. Finally, we consider Q to be a Cauchy distribution with independent components, whose jth component follows a standard Cauchy distribution with location parameter τ_j = 0.5.
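The covariance construction above is mechanical enough that a short NumPy sketch may help; this is our own rendering of the described steps.

```python
import numpy as np

def contamination_cov(p, rng=None):
    # Sparse symmetric precision matrix, shifted to be positive definite,
    # then inverted to obtain the covariance Sigma of the contamination Q.
    rng = np.random.default_rng(rng)
    z = rng.uniform(0.4, 0.8, size=(p, p))
    tau = rng.binomial(1, 0.1, size=(p, p))
    gamma = np.triu(z * tau)                 # gamma_ij = z_ij * tau_ij for i <= j
    gamma = gamma + np.triu(gamma, 1).T      # symmetrize: gamma_ij = gamma_ji
    lam_min = np.linalg.eigvalsh(gamma).min()
    gamma_bar = gamma + (abs(lam_min) + 0.05) * np.eye(p)
    return np.linalg.inv(gamma_bar)          # Sigma = inverse precision matrix
```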
B.3 COMPARISON DETAILS
In Section 5.3, we compare GANs with dimension halving (Lai et al., 2016) and iterative filtering (Diakonikolas et al., 2017).
• Dimension halving. Experiments are based on the code from https://github.com/kal2000/AgnosticMeanAndCovarianceCode. The only hyper-parameter is the threshold in the outlier removal step, and we take C = 2, as suggested in the file outRemSperical.m.
• Iterative filtering. Experiments are based on the code from https://github.com/hoonose/robust-filter. We assume ε is known and take the other hyper-parameters as suggested in the file filterGaussianMean.m.
B.4 SUPPLEMENTARY EXPERIMENTS FOR NETWORK STRUCTURES
The experiments are conducted with i.i.d. observations drawn from (1 − ε)N(0_p, I_p) + εN(0.5·1_p, I_p) with ε = 0.2. Table 6 summarizes the results for p = 100, n ∈ {5000, 50000}, and various network structures. We observe that TV-GAN with a one-hidden-layer neural net improves over the performance of TV-GAN without any hidden layer. This indicates that the landscape of TV-GAN might be improved by a more complicated network structure. However, adding one more layer does not improve the results. For JS-GAN, we omit the results without a hidden layer because of its lack of robustness (Proposition 3.1). Deeper networks sometimes improve over shallow networks, but this is not always true. We also observe that the optimal choice of the width of the hidden layer depends on the sample size.
B.5 TABLES FOR TESTING THE MINIMAX RATES
Tables 7, 8 and 9 show the numerical results corresponding to Figure 2.
C PROOFS OF PROPOSITION 2.1 AND PROPOSITION 3.1
In the first example, consider 𝒬 = {N(η, I_p) : η ∈ R^p} and 𝒬̃_η = {N(η̃, I_p) : ‖η̃ − η‖ ≤ r}. In other words, 𝒬 is the Gaussian location family, and 𝒬̃_η is taken to be a subset of a local neighborhood of N(η, I_p). Then, with Q = N(η, I_p) and Q̃ = N(η̃, I_p), the event q̃(X)/q(X) ≥ 1 is equivalent to ‖X − η̃‖² ≤ ‖X − η‖². Since ‖η̃ − η‖ ≤ r, we can write η̃ = η + r̃u for some r̃ ∈ R and u ∈ R^p satisfying 0 ≤ r̃ ≤ r and ‖u‖ = 1. Then, (8) becomes
θ̂ = argmin_{η∈R^p} sup_{‖u‖=1, 0≤r̃≤r} [(1/n) Σ_{i=1}^n I{u^T(X_i − η) ≥ r̃/2} − P(N(0,1) ≥ r̃/2)]. (18)
Letting r → 0, we obtain (9), the exact formula of Tukey's median.
The next example is the linear model y|X ∼ N(X^T θ, 1). Consider the classes
𝒬 = {P_{y,X} = P_{y|X} P_X : P_{y|X} = N(X^T η, 1), η ∈ R^p}, 𝒬̃_η = {P_{y,X} = P_{y|X} P_X : P_{y|X} = N(X^T η̃, 1), ‖η̃ − η‖ ≤ r}.
Here, P_{y,X} stands for the joint distribution of y and X. The two classes 𝒬 and 𝒬̃_η share the same marginal distribution P_X, and the conditional distributions are specified by N(X^T η, 1) and N(X^T η̃, 1), respectively. Following the same derivation as for Tukey's median and letting r → 0, we obtain the exact formula of the regression depth (10). It is worth noting that the derivation of (10) does not depend on the marginal distribution P_X.
The last example is on covariance/scatter matrix estimation. For this task, we set 𝒬 = {N(0, Γ) : Γ ∈ E_p}, where E_p is the class of all p × p covariance matrices. Inspired by the derivations of Tukey's depth and regression depth, it is tempting to choose 𝒬̃ in a neighborhood of N(0, Γ). However, a naive choice would lead to a definition that is not even Fisher consistent. We propose a rank-one neighborhood, given by
𝒬̃_Γ = {N(0, Γ̃) : Γ̃^{−1} = Γ^{−1} + r̃uu^T ∈ E_p, |r̃| ≤ r, ‖u‖ = 1}. (19)
Then, a direct calculation gives
I{(dN(0, Γ̃)/dN(0, Γ))(X) ≥ 1} = I{r̃|u^T X|² ≤ log(1 + r̃u^T Γu)}. (20)
Since lim_{r̃→0} log(1 + r̃u^T Γu)/(r̃u^T Γu) = 1, the limiting event of (20) is either I{|u^T X|² ≤ u^T Γu} or I{|u^T X|² ≥ u^T Γu}, depending on whether r̃ tends to zero from the left or from the right. Therefore, with the above 𝒬 and 𝒬̃_Γ, (8) becomes (11) in the limit r → 0. Even though the definition (19) is given by a rank-one neighborhood of the inverse covariance matrix, the formula (11) can also be derived with Γ̃^{−1} = Γ^{−1} + r̃uu^T in (19) replaced by Γ̃ = Γ + r̃uu^T, by applying the Sherman-Morrison formula. A similar formula to (11) in the literature is given by
Σ̂ = argmax_{Γ∈E_p} inf_{‖u‖=1} [(1/n) Σ_{i=1}^n I{|u^T X_i|² ≤ βu^T Γu} ∧ (1/n) Σ_{i=1}^n I{|u^T X_i|² ≥ βu^T Γu}], (21)
which is recognized as the maximizer of what is known as the matrix depth function (Zhang, 2002; Chen et al., 2018; Paindaveine & Van Bever, 2017). The β in (21) is a scalar defined through the equation P(N(0,1) ≤ √β) = 3/4. It is proved in Chen et al. (2018) that Σ̂ achieves the minimax rate under Huber's ε-contamination model. While the formula (11) can be derived from TV-Learning with discriminators of the form I{(dN(0,Γ̃)/dN(0,Γ))(X) ≥ 1}, a special case of (6), the formula (21) can be derived directly from TV-GAN with discriminators of the form I{(dN(0,βΓ̃)/dN(0,βΓ))(X) ≥ 1}, by following a similar rank-one neighborhood argument. This completes the derivation of Proposition 2.1.
To prove Proposition 3.1, we define F(w) = E_P log sigmoid(w^T g(X)) + E_Q log(1 − sigmoid(w^T g(X))) + log 4, so that JS_g(P, Q) = max_{w∈W} F(w). The gradient and Hessian of F(w) are given by
∇F(w) = E_P [e^{−w^T g(X)}/(1 + e^{−w^T g(X)})] g(X) − E_Q [e^{w^T g(X)}/(1 + e^{w^T g(X)})] g(X),
∇²F(w) = −E_P [e^{w^T g(X)}/(1 + e^{w^T g(X)})²] g(X)g(X)^T − E_Q [e^{−w^T g(X)}/(1 + e^{−w^T g(X)})²] g(X)g(X)^T.
Therefore, F(w) is concave in w, and max_{w∈W} F(w) is a convex optimization problem over a convex W. Suppose JS_g(P, Q) = 0. Then max_{w∈W} F(w) = 0 = F(0), which implies ∇F(0) = 0, and thus we have E_P g(X) = E_Q g(X). Now suppose E_P g(X) = E_Q g(X), which is equivalent to ∇F(0) = 0. Therefore, w = 0 is a stationary point of a concave function, and we have JS_g(P, Q) = max_{w∈W} F(w) = F(0) = 0.
D PROOFS OF MAIN RESULTS
In this section, we present the proofs of all main theorems in the paper. We first establish some useful lemmas in Section D.1, and the proofs of the main theorems are given in Section D.2.
D.1 SOME AUXILIARY LEMMAS
Lemma D.1. Given i.i.d.
observations X1, ..., Xn ∼ P and the function class D defined in (13), we have for any δ > 0, sup D∈D ∣∣∣∣∣ 1n n∑ i=1 D(Xi)− ED(X) ∣∣∣∣∣ ≤ C (√ p n + √ log(1/δ) n ) , with probability at least 1− δ for some universal constant C > 0. Proof. Let f(X1, ..., Xn) = supD∈D ∣∣ 1 n ∑n i=1D(Xi)− ED(X) ∣∣. It is clear that f(X1, ..., Xn) satisfies the bounded difference condition. By McDiarmid’s inequality (McDiarmid, 1989), we have f(X1, ..., Xn) ≤ Ef(X1, ..., Xn) + √ log(1/δ) 2n , with probability at least 1 − δ. Using a standard symmetrization technique (Pollard, 2012), we obtain the following bound that involves Rademacher complexity, Ef(X1, ..., Xn) ≤ 2E sup D∈D ∣∣∣∣∣ 1n n∑ i=1 iD(Xi) ∣∣∣∣∣ , (22) where 1, ..., n are independent Rademacher random variables. The Rademacher complexity can be bounded by Dudley’s integral entropy bound, which gives E sup D∈D ∣∣∣∣∣ 1n n∑ i=1 iD(Xi) ∣∣∣∣∣ . E 1√n ∫ 2 0 √ logN (δ,D, ‖ · ‖n)dδ, where N (δ,D, ‖ · ‖n) is the δ-covering number of D with respect to the empirical `2 distance ‖f − g‖n = √ 1 n ∑n i=1(f(Xi)− g(Xi))2. Since the VC-dimension of D is O(p), we have N (δ,D, ‖ · ‖n) . p (16e/δ)O(p) (see Theorem 2.6.7 of Van Der Vaart & Wellner (1996)). This leads to the bound 1√ n ∫ 2 0 √ logN (δ,D, ‖ · ‖n)dδ . √ p n , which gives the desired result. Lemma D.2. Given i.i.d. observations X1, ..., Xn ∼ P, and the function class D defined in (15), we have for any δ > 0, sup D∈D ∣∣∣∣∣ 1n n∑ i=1 logD(Xi)− E logD(X) ∣∣∣∣∣ ≤ Cκ (√ p n + √ log(1/δ) n ) , with probability at least 1− δ for some universal constant C > 0. Proof. Let f(X1, ..., Xn) = supD∈D ∣∣ 1 n ∑n i=1 logD(Xi)− E logD(X) ∣∣. Since sup D∈D sup x | log(2D(x))| ≤ κ, we have sup x1,...,xn,x′i |f(x1, ..., xn)− f(x1, ..., xi−1, x′i, xi+1, ..., xn)| ≤ 2κ n . Therefore, by McDiarmid’s inequality (McDiarmid, 1989), we have f(X1, ..., Xn) ≤ Ef(X1, ..., Xn) + κ √ 2 log(1/δ) n , (23) with probability at least 1−δ. By the same argument of (22), it is sufficient to bound the Rademacher complexity E supD∈D ∣∣ 1 n ∑n i=1 i log(2D(Xi)) ∣∣. Since the function ψ(x) = log(2sigmoid(x)) has Lipschitz constant 1 and satisfies ψ(0) = 0, we have E sup D∈D ∣∣∣∣∣ 1n n∑ i=1 i log(2D(Xi)) ∣∣∣∣∣ ≤ 2E sup∑ j≥1 |wj |≤κ,uj∈Rp,bj∈R ∣∣∣∣∣∣ 1n n∑ i=1 i ∑ j≥1 wjσ(u T j Xi + bj) ∣∣∣∣∣∣ , which uses Theorem 12 of Bartlett & Mendelson (2002). By Hölder’s inequality, we further have E sup∑ j≥1 |wj |≤κ,uj∈Rp,bj∈R ∣∣∣∣∣∣ 1n n∑ i=1 i ∑ j≥1 wjσ(u T j Xi + bj) ∣∣∣∣∣∣ ≤ κEmax j≥1 sup uj∈Rp,bj∈R ∣∣∣∣∣ 1n n∑ i=1 iσ(u T j Xi + bj) ∣∣∣∣∣ = κE sup u∈Rp,b∈R ∣∣∣∣∣ 1n n∑ i=1 iσ(u TXi + b) ∣∣∣∣∣ . Note that for a monotone function σ : R→ [0, 1], the VC-dimension of the class {σ(uTx+ b) : u ∈ R, b ∈ R} is O(p). Therefore, by using the same argument of Dudley’s integral entropy bound in the proof Lemma D.1, we have E sup u∈Rp,b∈R ∣∣∣∣∣ 1n n∑ i=1 iσ(u TXi + b) ∣∣∣∣∣ . √ p n , which leads to the desired result. Lemma D.3. Given i.i.d. observations X1, .., Xn ∼ N(θ, Ip) and the function class FHL (κ, τ,B). Assume ‖θ‖∞ ≤ √ log p and set τ = √ p log p. We have for any δ > 0, sup D∈FHL (κ,τ,B) ∣∣∣∣∣ 1n n∑ i=1 logD(Xi)− E logD(X) ∣∣∣∣∣ ≤ Cκ ( (2B)L−1 √ p log p n + √ log(1/δ) n ) , with probability at least 1− δ for some universal constants C > 0. Proof. Write f(X1, ..., Xn) = supD∈FHL (κ,τ,B) ∣∣ 1 n ∑n i=1 logD(Xi)− E logD(X) ∣∣. Then, the inequality (23) holds with probability at least 1 − δ. It is sufficient to analyze the Rademacher complexity. 
Using the fact that the function log(2sigmoid(x)) is Lipschitz and Hölder’s inequality, we have E sup D∈FHL (κ,τ,B) ∣∣∣∣∣ 1n n∑ i=1 i log(2D(Xi)) ∣∣∣∣∣ ≤ 2E sup ‖w‖1≤κ,‖uj∗‖2≤2,|bj |≤τ,gjh∈GHL−1(B) ∣∣∣∣∣∣ 1n n∑ i=1 i ∑ j≥1 wjsigmoid ( 2p∑ h=1 ujhgjh(Xi) + bj )∣∣∣∣∣∣ ≤ 2κE sup ‖u‖2≤2,|b|≤τ,gh∈GHL−1(B) ∣∣∣∣∣ 1n n∑ i=1 isigmoid ( 2p∑ h=1 uhgh(Xi) + b )∣∣∣∣∣ ≤ 4κE sup ‖u‖2≤2,|b|≤τ,gh∈GHL−1(B) ∣∣∣∣∣ 1n n∑ i=1 i ( 2p∑ h=1 uhgh(Xi) + b )∣∣∣∣∣ ≤ 8√pκE sup g∈GHL−1(B) ∣∣∣∣∣ 1n n∑ i=1 ig(Xi) ∣∣∣∣∣+ 4κτE ∣∣∣∣∣ 1n n∑ i=1 i ∣∣∣∣∣ . Now we use the notation Zi = Xi − θ ∼ N(0, Ip) for i = 1, ..., n. We bound E supg∈GHL−1(B) ∣∣ 1 n ∑n i=1 ig(Zi + θ) ∣∣ by induction. Since E ( sup g∈GH1 (B) 1 n n∑ i=1 ig(Zi + θ) ) ≤ E ( sup ‖v‖1≤B 1 n n∑ i=1 iv T (Zi + θ) ) ≤ B ( E ∣∣∣∣∣ 1n n∑ i=1 iZi ∣∣∣∣∣ ∞ + ‖θ‖∞E ∣∣∣∣∣ 1n n∑ i=1 i ∣∣∣∣∣ ) ≤ CB √ log p+ ‖θ‖∞√ n , and E ( sup g∈GHl+1(B) 1 n n∑ i=1 ig(Zi + θ) ) ≤ E ( sup ‖v‖1≤B,gh∈GHl (B) 1 n n∑ i=1 i H∑ h=1 vhgh(Zi + θ) ) ≤ BE ( sup g∈GHl (B) ∣∣∣∣∣ 1n n∑ i=1 ig(Zi + θ) ∣∣∣∣∣ ) ≤ 2BE ( sup g∈GHl (B) 1 n n∑ i=1 ig(Zi + θ) ) , we have E ( sup g∈GHL−1(B) 1 n n∑ i=1 ig(Zi + θ) ) ≤ C(2B)L−1 √ log p+ ‖θ‖∞√ n . Combining the above inequalities, we get E ( sup D∈FHL (κ,τ,B) 1 n n∑ i=1 i logD(Zi + θ) ) ≤ Cκ ( √ p(2B)L−1 √ log p+ ‖θ‖∞√ n + τ√ n ) . This leads to the desired result under the conditions on τ and ‖θ‖∞. D.2 PROOFS OF MAIN THEOREMS Proof of Theorem 3.1. We first introduce some notations. Define F (P, η) = maxw,b Fw,b(P, η), where Fw,b(P, η) = EP sigmoid(w TX + b)− EN(η,Ip)sigmoid(w TX + b). With this definition, we have θ̂ = argminη F (Pn, η), where we use Pn for the empirical distribution 1 n ∑n i=1 δXi . We shorthand N(η, Ip) by Pη , and then F (Pθ, θ̂) ≤ F ((1− )Pθ + Q, θ̂) + (24) ≤ F (Pn, θ̂) + + C (√ p n + √ log(1/δ) n ) (25) ≤ F (Pn, θ) + + C (√ p n + √ log(1/δ) n ) (26) ≤ F ((1− )Pθ + Q, θ) + + 2C (√ p n + √ log(1/δ) n ) (27) ≤ F (Pθ, θ) + 2 + 2C (√ p n + √ log(1/δ) n ) (28) = 2 + 2C (√ p n + √ log(1/δ) n ) . (29) With probability at least 1− δ, the above inequalities hold. We will explain each inequality. Since F ((1− )Pθ + Q, η) = max w,b [(1− )Fw,b(Pθ, η) + Fw,b(Q, η)] , we have sup η |F ((1− )Pθ + Q, η)− F (Pθ, η)| ≤ , which implies (24) and (28). The inequalities (25) and (27) are implied by Lemma D.1 and the fact that sup η |F (Pn, η)− F ((1− )Pθ + Q, η)| ≤ sup w,b ∣∣∣∣∣ 1n n∑ i=1 sigmoid(wTXi + b)− Esigmoid(wTX + b) ∣∣∣∣∣ . The inequality (26) is a direct consequence of the definition of θ̂. Finally, it is easy to see that F (Pθ, θ) = 0, which gives (29). In summary, we have derived that with probability at least 1− δ, Fw,b(Pθ, θ̂) ≤ 2 + 2C (√ p n + √ log(1/δ) n ) , for all w ∈ Rp and b ∈ R. For any u ∈ Rp such that ‖u‖ = 1, we take w = u and b = −uT θ, and we have f(0)− f(uT (θ − θ̂)) ≤ 2 + 2C (√ p n + √ log(1/δ) n ) , where f(t) = ∫ 1 1+ez+tφ(z)dz, with φ(·) being the probability density function of N(0, 1). It is not hard to see that as long as |f(t)− f(0)| ≤ c for some sufficiently small constant c > 0, then |f(t)− f(0)| ≥ c′|t| for some constant c′ > 0. This implies ‖θ̂ − θ‖ = sup ‖u‖=1 |uT (θ̂ − θ)| ≤ 1 c′ sup ‖u‖=1 ∣∣∣f(0)− f(uT (θ − θ̂))∣∣∣ . + √ p n + √ log(1/δ) n , with probability at least 1− δ. The proof is complete. Proof of Theorem 3.2. We continue to use Pη to denote N(η, Ip). Define F (P, η) = max ‖w‖1≤κ,u,b Fw,u,b(P, η), where Fw,u,b(P, η) = EP logD(X) + EN(η,Ip) log (1−D(X)) + log 4, with D(x) = sigmoid (∑ j≥1 wjσ(u T j x+ bj) ) . 
Then, F (Pθ, θ̂) ≤ F ((1− )Pθ + Q, θ̂) + 2κ (30) ≤ F (Pn, θ̂) + 2κ + Cκ (√ p n + √ log(1/δ) n ) (31) ≤ F (Pn, θ) + 2κ + Cκ (√ p n + √ log(1/δ) n ) (32) ≤ F ((1− )Pθ + Q, θ) + 2κ + 2Cκ (√ p n + √ log(1/δ) n ) (33) ≤ F (Pθ, θ) + 4κ + 2Cκ (√ p n + √ log(1/δ) n ) (34) = 4κ + 2Cκ (√ p n + √ log(1/δ) n ) . The inequalities (30)-(34) follow similar arguments for (24)-(28). To be specific, (31) and (33) are implied by Lemma D.2, and (32) is a direct consequence of the definition of θ̂. To see (30) and (34), note that for any w such that ‖w‖1 ≤ κ, we have | log(2D(X))| ≤ ∣∣∣∣∣∣ ∑ j≥1 wjσ(u T j X + bj) ∣∣∣∣∣∣ ≤ κ. A similar argument gives the same bound for | log(2(1−D(X)))|. This leads to sup η |F ((1− )Pθ + Q, η)− F (Pθ, η)| ≤ 2κ , (35) which further implies (30) and (34). To summarize, we have derived that with probability at least 1− δ, Fw,u,b(Pθ, θ̂) ≤ 4κ + 2Cκ (√ p n + √ log(1/δ) n ) , for all ‖w‖1 ≤ κ, ‖uj‖ ≤ 1 and bj . Take w1 = κ, wj = 0 for all j > 1, u1 = u for some unit vector u and b1 = −uT θ, and we get fuT (θ̂−θ)(κ) ≤ 4κ + 2Cκ (√ p n + √ log(1/δ) n ) , (36) where fδ(t) = E log 2 1 + e−tσ(Z) + E log 2 1 + etσ(Z+δ) , (37) with Z ∼ N(0, 1). Direct calculations give f ′δ(t) = E e−tσ(Z) 1 + e−tσ(Z) σ(Z)− E e tσ(Z+δ) 1 + etσ(Z+δ) σ(Z + δ), f ′′δ (t) = −Eσ(Z)2 e−tσ(Z) (1 + e−tσ(Z))2 − Eσ(Z + δ)2 e tσ(Z+δ) (1 + etσ(Z+δ))2 . (38) Therefore, fδ(0) = 0, f ′δ(0) = 1 2 (Eσ(Z)− Eσ(Z + δ)), and f ′′ δ (t) ≥ − 12 . By the inequality fδ(κ) ≥ fδ(0) + κf ′δ(0)− 1 4 κ2, we have κf ′δ(0) ≤ fδ(κ) + κ2/4. In view of (36), we have κ 2 (∫ σ(z)φ(z)dz − ∫ σ(z + uT (θ̂ − θ))φ(z)dz ) ≤ 4κ + 2Cκ (√ p n + √ log(1/δ) n ) + κ2 4 . It is easy to see that for the choices of σ(·), ∫ σ(z)φ(z)dz − ∫ σ(z + t)φ(z)dz is locally linear with respect to t. This implies that κ‖θ̂ − θ‖ = κ sup ‖u‖=1 uT (θ̂ − θ) . κ ( + √ p n + √ log(1/δ) n ) + κ2. Therefore, with a κ . √ p n + , the proof is complete. Proof of Theorem 4.1. We use Pθ,Σ,h to denote the elliptical distribution EC(θ,Σ, h). Define F (P, (η,Γ, g)) = max ‖w‖1≤κ,u,b Fw,u,b(P, (η,Γ, g)), where Fw,u,b(P, (η,Γ, g)) = EP logD(X) + EEC(η,Γ,g) log (1−D(X)) + log 4, with D(x) = sigmoid (∑ j≥1 wjσ(u T j x+ bj) ) . Let P be the data generating process that satisfies TV(P, Pθ,Σ,h) ≤ , and then there exist probability distributions Q1 and Q2, such that P + Q1 = Pθ,Σ,h + Q2. The explicit construction of Q1, Q2 is given in the proof of Theorem 5.1 of Chen et al. (2018). This implies that |F (P, (η,Γ, g))− F (Pθ,Σ,h, (η,Γ, g))| ≤ sup ‖w‖1≤κ,u,b |Fw,u,b(P, (η,Γ, g))− Fw,u,b(Pθ,Σ,h, (η,Γ, g))| = sup ‖w‖1≤κ,u,b |EQ2 log(2D(X))− EQ1 log(2D(X))| ≤ 2κ . (39) Then, the same argument in Theorem 3.2 (with (35) replaced by (39)) leads to the fact that with probability at least 1− δ, Fw,u,b(Pθ,Σ,h, (θ̂, Σ̂, ĥ)) ≤ 4κ + 2Cκ (√ p n + √ log(1/δ) n ) , for all ‖w‖1 ≤ κ, ‖uj‖ ≤ 1 and bj . Take w1 = κ, wj = 0 for all j > 1, u1 = u/ √ uT Σ̂u for some unit vector u and b1 = −uT θ/ √ uT Σ̂u, and we get fuT (θ̂−θ)√ uT Σ̂u (κ) ≤ 4κ + 2Cκ (√ p n + √ log(1/δ) n ) , where fδ(t) = ∫ log ( 2 1 + e−tσ(∆s) ) h(s)ds+ ∫ log ( 2 1 + etσ(δ+s) ) ĥ(s)ds, where δ = u T (θ̂−θ)√ uT Σ̂u and ∆ = √ uTΣu√ uT Σ̂u . A similar argument to the proof of Theorem 3.2 gives κ 2 (∫ σ(∆s)h(s)ds− ∫ σ(δ + s)ĥ(s)ds ) ≤ 4κ + 2Cκ (√ p n + √ log(1/δ) n ) + κ2 4 . Since ∫ σ(∆s)h(s)ds = 1 2 = ∫ σ(s)ĥ(s)ds, the above bound is equivalent to κ 2 (H(0)−H(δ)) ≤ 4κ + 2Cκ (√ p n + √ log(1/δ) n ) + κ2 4 , where H(δ) = ∫ σ(δ+ s)ĥ(s)ds. 
The above bound also holds for (κ/2)(H(δ) − H(0)) by a symmetric argument, and therefore the same bound holds for (κ/2)|H(δ) − H(0)|. Since H′(0) = ∫σ(s)(1 − σ(s))ĥ(s)ds = 1, H(δ) is locally linear at δ = 0, which leads to the desired bound for δ = u^T(θ̂ − θ)/√(u^T Σ̂u). Finally, since u^T Σ̂u ≤ M, we get the bound for u^T(θ̂ − θ). The proof is complete by taking the supremum over all unit vectors u.
Proof of Theorem A.1. We continue to use P_η to denote N(η, I_p). Define F(P, η) = sup_{D∈F^H_L(κ,τ,B)} F_D(P, η), with F_D(P, η) = E_P log D(X) + E_{N(η,I_p)} log(1 − D(X)) + log 4. Following the same argument as in the proof of Theorem 3.2 and using Lemma D.3, we have
F_D(P_θ, θ̂) ≤ Cκ (ε + (2B)^{L−1} √((p log p)/n) + √(log(1/δ)/n)),
uniformly over D ∈ F^H_L(κ, τ, B) with probability at least 1 − δ. Choose w_1 = κ and w_j = 0 for all j > 1. For any unit vector ũ ∈ R^p, take u_{1h} = −u_{1(h+p)} = ũ_h for h = 1, ..., p and b_1 = −ũ^T θ. For h = 1, ..., p, set g_{1h}(x) = max(x_h, 0). For h = p + 1, ..., 2p, set g_{1h}(x) = max(−x_{h−p}, 0). It is obvious that such u and b satisfy Σ_h u²_{1h} ≤ 2 and |b_1| ≤ ‖θ‖ ≤ √p‖θ‖∞ ≤ √(p log p). We need to show that both the functions max(x, 0) and max(−x, 0) are elements of G^H_{L−1}(B). This can be proved by induction. It is obvious that max(x_h, 0), max(−x_h, 0) ∈ G^H_1(B) for any h = 1, ..., p. Suppose we have max(x_h, 0), max(−x_h, 0) ∈ G^H_l(B) for any h = 1, ..., p. Then,
max(max(x_h, 0) − max(−x_h, 0), 0) = max(x_h, 0), max(max(−x_h, 0) − max(x_h, 0), 0) = max(−x_h, 0).
Therefore, max(x_h, 0), max(−
1. What is the main contribution of the paper regarding Huber contamination model? 2. What are the different criteria for probability distribution function estimation obtained by the authors using f-divergence? 3. How does the minimax rate of the robust estimate compare to the optimal estimate for JS-divergence with a one-layer neural network discriminator? 4. What are the suggestions for improving the paper's clarity and readability? 5. Can you provide any insights into why TV-GAN performs better for some problems while JS-GAN is better for others? 6. Why does JS-GAN achieve lower errors in separable cases and show competitive performances for non-separable ones?
Review
Review The authors considered the Huber contamination model. They use the f-divergence and its variational lower bound to obtain a criterion for estimating a probability distribution function. They showed that, under different choices of the function f in the f-divergence, they can recover different criteria used in robust depth-based estimation of a mean and/or covariance matrix. For f corresponding to the total variation divergence and a discriminator given by logistic regression, they proved that the robust estimate can achieve the minimax rate, although the criterion can be difficult to optimize. The authors then showed that for the JS-divergence with a discriminator in the form of a one-layer neural network, one obtains a robust and optimal estimate, while the criterion itself can be efficiently optimized.
Comments
- it could be good to define what $\mathcal{T}$ is right after formula (3). Analogously for the class of probability distributions $\mathcal{Q}$ in (4), and for $\tilde{\mathcal{Q}}$ in (5)
- page 3, line 12 from above: "and f'(t) = e^{t-1}." In fact, here we should use $f^*(t)$
- page 3, proposition 2.1, subsection 1 of the proposition: $\tilde{\mathcal{Q}}$ instead of $\tilde{Q}$ should be used as the notation for a class of probability distributions
- in (12) the authors unexpectedly introduce the new notation $D$. They should specify right after formula (12) what $D$ is
- theorem 3.1: if possible, it could be good at least to speculate on how $C, C'$ depend on $c$ in the displayed formula
- the axis labels in figure 2 are almost impossible to read; this should be improved
- in table 1 we clearly see that TV-GAN is better on some of the problems, and JS-GAN is better on others. Why? Any comments? At least intuition?
- page 8, "On the other hand, JS-GAN stably achieves the lowest error in separable cases and also shows competitive performances for non-separable ones." Why? Any comments?
Conclusion
- in general, the paper is well written
- it contains a sufficient number of experiments to show that the proposed approach is reasonable
- the connection between GANs based on the f-divergence and robust estimation seems to be important; thus I'd like to propose accepting this paper
ICLR
Title Interpretations are useful: penalizing explanations to align neural networks with prior knowledge
Abstract For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective. Too often, the litany of proposed explainable deep learning methods stops at the first step, providing practitioners with insight into a model, but no way to act on it. In this paper, we propose contextual decomposition explanation penalization (CDEP), a method which enables practitioners to leverage existing explanation methods in order to increase the predictive accuracy of deep learning models. In particular, when shown that a model has incorrectly assigned importance to some features, CDEP enables practitioners to correct these errors by directly regularizing the provided explanations. Using explanations provided by contextual decomposition (CD) (Murdoch et al., 2018), we demonstrate the ability of our method to increase performance on an array of toy and real datasets.
1 INTRODUCTION
In recent years, neural networks have demonstrated strong predictive performance across a wide variety of settings. However, in order to achieve that accuracy, they sometimes latch onto spurious correlations, leading to undesirable behavior as a result of dataset bias (Winkler et al., 2019), racial and ethnic stereotypes (Garg et al., 2018), or simply overfitting. While recent work on explaining neural network predictions (Murdoch et al., 2019; Doshi-Velez & Kim, 2017) has demonstrated an ability to uncover the relationships learned by a model, it is still unclear how to actually alter the model in order to remove incorrect, or undesirable, relationships.
We introduce contextual decomposition explanation penalization (CDEP), a method which leverages existing explanation techniques for neural networks in order to prevent a model from learning unwanted relationships and ultimately improve predictive accuracy. Given particular importance scores, CDEP works by allowing the user to directly penalize the importances of certain features, or interactions. This forces the neural network to produce not only the correct prediction, but also the correct explanation for that prediction. While we focus on contextual decomposition (CD) (Murdoch et al., 2018; Singh et al., 2018), which allows the penalization of both feature importances and interactions, CDEP can be readily adapted to other existing interpretation techniques, as long as they are differentiable.
Moreover, CDEP is a general technique, which can be applied to arbitrary neural network architectures, and it is orders of magnitude faster and more memory efficient than recent gradient-based methods, allowing its use on meaningful datasets.
In order to demonstrate the effectiveness of CDEP, we conducted experiments across a wide variety of tasks. In the prediction of skin cancer from images, CDEP improves the accuracy of a classifier by teaching it to ignore spurious confounding variables present in the training data. In a colored MNIST task, CDEP allows the network to focus on a digit's shape rather than its color (with no extra human annotation needed). Finally, a toy example using text classification shows how the penalization can help a network avoid a bias towards particular words, such as those involving gender.
2 BACKGROUND
Explanation methods. Many methods have been developed to help explain the learned relationships contained in a DNN. For local, or prediction-level, explanation, most prior work has focused on assigning importance to individual features, such as pixels in an image or words in a document. There are several methods that give feature-level importance for different architectures. They can be categorized as gradient-based (Springenberg et al., 2014; Sundararajan et al., 2017; Selvaraju et al., 2016; Baehrens et al., 2010; Rieger & Hansen, 2019), decomposition-based (Murdoch & Szlam, 2017; Shrikumar et al., 2016; Bach et al., 2015) and others (Dabkowski & Gal, 2017; Fong & Vedaldi, 2017; Ribeiro et al., 2016; Zintgraf et al., 2017), with many similarities among the methods (Ancona et al., 2018; Lundberg & Lee, 2017). However, many of these methods have thus far been poorly evaluated (Adebayo et al., 2018; Nie et al., 2018), casting doubt on their usefulness. Another line of work, which we build upon, has focused on uncovering interactions between features in addition to feature importances (Murdoch et al., 2018), and on using those interactions to create a hierarchy of features displaying the model's prediction process (Singh et al., 2018).
Uses of explanation methods. While much work has been put into developing methods for explaining DNNs, relatively little work has explored the potential to use these explanations to help build a better model. Some recent work proposes forcing models to attend to regions of the input which are known to be important (Burns et al., 2018; Mitsuhara et al., 2019), although it is important to note that attention is often not the same as explanation (Jain & Wallace, 2019). An alternative line of work proposes penalizing the gradients of a neural network to match human-provided binary annotations, and shows the possibility of improving performance (Ross et al., 2017) and adversarial robustness (Ross & Doshi-Velez, 2018). Two recent papers extend these ideas by penalizing attributions for natural language models (Liu & Avci, 2019) and by penalizing a modified gradient-based score to produce smooth attributions (Erion et al., 2019). Predating deep learning, Zaidan et al. (2007) considered the use of "annotator rationales" in sentiment analysis to train support vector machines. This work on annotator rationales was recently extended to show improved explanations (though not accuracy) in CNNs (Strout et al., 2019).
Other ways to constrain DNNs. While we focus on the use of explanations to constrain the relationships learned by neural networks, other approaches for constraining neural networks have also been proposed. A computationally intensive alternative is to augment the dataset in order to prevent the model from learning undesirable relationships, through domain knowledge (Bolukbasi et al., 2016), projecting out superficial statistics (Wang et al., 2019), or dramatically altering training images (Geirhos et al., 2018). However, these processes are often not feasible, either due to their computational cost or the difficulty of constructing such an augmented dataset. Adversarial training has also been explored (Zhang & Zhu, 2019). These techniques are generally limited, as they are often tied to particular datasets and do not provide a clear link between learning about a model's learned relationships through explanations and subsequently correcting them.
3 METHODS
We now introduce CDEP, which penalizes the explanations of a neural network in order to align them with prior knowledge about why a model should make a prediction. To do so, for each data point it penalizes the CD scores of features, or groups of features, which a user does not want the model to learn to be important. While we focus on CD scores, which allow the penalization of interactions between features in addition to the features themselves, this approach readily generalizes to other interpretation techniques, so long as they are differentiable.
3.1 AUGMENTING THE LOSS FUNCTION
Given a particular classification task, we want to teach a model to not only produce the correct prediction, but also to arrive at the prediction for the correct reasons. That is, we want the model to be right for the right reasons, where the right reasons are provided by the user and are dataset-dependent. To accomplish this, CDEP modifies the objective function used to train a neural network, as displayed in Eq 1. In addition to the standard prediction loss L, which teaches the model to produce the correct predictions, CDEP adds an explanation error L_expl, which teaches the model to produce the correct explanations for its predictions. In place of the prediction and labels f_θ(X), y used in the prediction error L, the explanation error L_expl uses the explanations produced by an interpretation method expl_θ(X), along with targets expl_X provided by the user. As is common with penalization, the two losses are weighted by a hyperparameter λ ∈ R:
θ̂ = argmin_θ L(f_θ(X), y) + λ L_expl(expl_θ(X), expl_X), (1)
where the first term is the prediction error and the second term is the explanation error. The precise meaning of expl_X depends on the context. For example, in the skin cancer image classification task described in Section 4, many of the benign skin images contain band-aids, but none of the malignant images do. To force the model to ignore the band-aids in making its predictions, for each image expl_θ(X) denotes the importance score of the band-aid and expl_X would be zero. These and more examples are further explored in Section 4.
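As a sketch of how Eq 1 looks in code, the following PyTorch snippet (ours, not the authors' released implementation) performs one training step with a generic differentiable explanation function; the names expl_fn, expl_target, and lam are placeholders for the interpretation method, the user-provided targets, and the hyperparameter λ.

```python
import torch
import torch.nn.functional as F

def cdep_step(model, x, y, expl_fn, expl_target, lam, optimizer):
    # One training step for Eq (1): prediction loss plus an explanation penalty.
    # expl_fn(model, x) must return differentiable importance scores for the
    # user-chosen feature groups; expl_target is what those scores should be.
    logits = model(x)
    loss = F.cross_entropy(logits, y)                                 # prediction error
    loss = loss + lam * (expl_fn(model, x) - expl_target).abs().mean()  # explanation error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```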
A computationally intensive alternative is to augment the dataset in order to prevent the model from learning undesirable relationships, through domain knowledge (Bolukbasi et al., 2016), projecting out superficial statistics (Wang et al., 2019) or dramatically altering training images (Geirhos et al., 2018). However, these processes are often not feasible, either due to their computational cost or the difficulty of constructing such an augmented data set. Adversarial training has also been explored (Zhang & Zhu, 2019). These techniques are generally limited, as they are often tied to particular datasets, and do not provide a clear link between learning about a model's learned relationships through explanations, and subsequently correcting them. 3 METHODS We now introduce CDEP, which penalizes the explanations of a neural network in order to align them with prior knowledge about why a model should make a prediction. To do so, for each data point it penalizes the CD scores of features, or groups of features, which a user does not want the model to learn to be important. While we focus on CD scores, which allow the penalization of interactions between features in addition to features themselves, this approach readily generalizes to other interpretation techniques, so long as they are differentiable. 3.1 AUGMENTING THE LOSS FUNCTION Given a particular classification task, we want to teach a model to not only produce the correct prediction, but also to arrive at the prediction for the correct reasons. That is, we want the model to be right for the right reasons, where the right reasons are provided by the user and are dataset-dependent. To accomplish this, CDEP modifies the objective function used to train a neural network, as displayed in Eq 1. In addition to the standard prediction loss $L$, which teaches the model to produce the correct predictions, CDEP adds an explanation error $L_{\text{expl}}$, which teaches the model to produce the correct explanations for its predictions. In place of the prediction and labels $f_\theta(X)$, $y$ used in the prediction error $L$, the explanation error $L_{\text{expl}}$ uses the explanations produced by an interpretation method $\text{expl}_\theta(X)$, along with targets provided by the user $\text{expl}_X$. As is common with penalization, the two losses are weighted by a hyperparameter $\lambda \in \mathbb{R}$:

$$\hat{\theta} = \operatorname*{argmin}_{\theta} \underbrace{L\left(f_\theta(X), y\right)}_{\text{Prediction error}} + \lambda \underbrace{L_{\text{expl}}\left(\text{expl}_\theta(X), \text{expl}_X\right)}_{\text{Explanation error}} \quad (1)$$

The precise meaning of $\text{expl}_X$ depends on the context. For example, in the skin cancer image classification task described in Section 4, many of the benign skin images contain band-aids, but none of the malignant images do. To force the model to ignore the band-aids in making its prediction, for each image $\text{expl}_\theta(X)$ denotes the importance score of the band-aid and $\text{expl}_X$ would be zero. These and more examples are further explored in Section 4.
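To make Eq 1 concrete, here is a minimal PyTorch sketch of the augmented objective. This is an illustration under our own naming assumptions, not the authors' released code: the attribution function is passed in as an argument (CD in this paper, but any differentiable explanation method fits), and `expl_targets` encodes the user-provided targets (e.g., zeros for features the model should ignore).

```python
import torch.nn.functional as F

def cdep_style_loss(model, x, y, attribution_fn, feature_masks, expl_targets, lam=1.0):
    """Augmented objective of Eq 1: prediction error + lambda * explanation error.

    attribution_fn(model, x, feature_masks) must return differentiable
    importance scores for the masked feature groups (CD's beta here);
    expl_targets are the user-provided targets, e.g. zeros for features
    the model should ignore.
    """
    prediction_error = F.cross_entropy(model(x), y)      # L(f_theta(X), y)
    scores = attribution_fn(model, x, feature_masks)     # expl_theta(X)
    explanation_error = F.l1_loss(scores, expl_targets)  # L_expl with an L1 penalty
    return prediction_error + lam * explanation_error
```

Because the explanation term is just another differentiable scalar, this loss can be minimized with an ordinary optimizer, exactly like the standard training loss.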
3.2 CONTEXTUAL DECOMPOSITION (CD) In this work, we use the CD score as the explanation function. In contrast to other interpretation methods, which focus on feature importances, CD also captures interactions between features. CD was originally designed for LSTMs (Murdoch et al., 2018) and subsequently extended to convolutional neural networks and arbitrary DNNs (Singh et al., 2018). For a given DNN $f(x)$, one can represent its output as a SoftMax operation applied to logits $g(x)$. These logits, in turn, are the composition of $L$ layers $g_i$, such as convolutional operations or ReLU non-linearities:

$$f(x) = \text{SoftMax}(g(x)) = \text{SoftMax}(g_L(g_{L-1}(\dots(g_2(g_1(x)))))) \quad (2)$$

Given a group of features $\{x_j\}_{j \in S}$, the CD algorithm, $g^{CD}(x)$, decomposes the logits $g(x)$ into a sum of two terms, $\beta(x)$ and $\gamma(x)$. $\beta(x)$ is the importance score of the feature group $\{x_j\}_{j \in S}$, and $\gamma(x)$ captures contributions to $g(x)$ not included in $\beta(x)$. The decomposition is computed by iteratively applying decompositions $g_i^{CD}(x)$ for each of the layers $g_i(x)$:

$$g^{CD}(x) = g_L^{CD}(g_{L-1}^{CD}(\dots(g_2^{CD}(g_1^{CD}(x))))) = (\beta(x), \gamma(x)) \quad (3)$$
$$\beta(x) + \gamma(x) = g(x) \quad (4)$$

3.3 CDEP OBJECTIVE FUNCTION We now substitute the above CD scores into the generic equation in Eq 1 to arrive at the method used in this paper. While we use CD for the explanation method $\text{expl}_\theta(X)$, other explanation methods could be readily substituted at this stage. In order to convert CD scores to probabilities, we apply a SoftMax operation to $g^{CD}(x)$, allowing for easier comparison with the user-provided labels $\text{expl}_X$. We collect from the user, for each input $x_i \in \mathbb{R}^d$, a collection of feature groups $x_{i,S}$, $S \subseteq \{1, \dots, d\}$, along with explanation target values $\text{expl}_{x_{i,S}}$, and use the $\|\cdot\|_1$ loss for $L_{\text{expl}}$:

$$\hat{\theta} = \operatorname*{argmin}_{\theta} \underbrace{\sum_i \sum_c -y_{i,c} \log f_\theta(x_i)_c}_{\text{Classification error}} + \lambda \underbrace{\sum_i \sum_S \left\| \beta(x_{i,S}) - \text{expl}_{x_{i,S}} \right\|_1}_{\text{Explanation error}} \quad (5)$$

In the above, $i$ indexes each individual example in the dataset, $S$ indexes a subset of the features for which we penalize their explanations, and $c$ sums over each class. Updating the model parameters in accordance with this formulation ensures that the model not only predicts the right output but also does so for the right (aligned with prior knowledge) reasons. 3.4 COMPUTATIONAL CONSIDERATIONS A similar idea to Eq 1 has been proposed in previous/concurrent work, where the explanation method of choice is a gradient-based attribution method (Ross et al., 2017; Erion et al., 2019). However, using such methods leads to three main complications, which are solved by our approach. The first complication is the optimization process. When optimizing over attributions from a gradient-based attribution method via gradient descent, the optimizer requires the gradient of the gradient, thus requiring that all network components be twice differentiable. This process is computationally expensive, and indeed optimizing it exactly involves optimizing over a differential equation. In contrast, CD attributions are calculated along with the forward pass of the network, and as a result can be optimized plainly with back-propagation using the standard single forward pass and backward pass per batch. A second complication solved by the use of CD in Eq 5 is the ability to quickly finetune a pre-trained network. In many applications, particularly in transfer learning, it is common to finetune only the last few layers of a pre-trained neural network. Using CD, one can freeze early layers of the network and then finetune the last few layers quickly, as the activations and gradients of the frozen layers are not necessary. Third, penalizing gradient-based methods incurs a very large memory usage. Using gradient-based methods, training requires the storage of activations and gradients for all layers of the network as well as the gradient with respect to the input (which can be omitted in normal training). Even for the simplest version, based on saliency, this more than doubles the required memory for a given batch and network size. More advanced methods proved completely infeasible to apply to the real-life dataset used here, since the memory requirements were too high. By contrast, penalizing CD only requires a small constant amount of memory more than standard training.
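To make the layer-wise propagation of Eqs 3-4 concrete, here is a small sketch of CD-style decomposition rules for a linear layer and a ReLU. The exact splitting rules, especially the handling of the bias and of non-linearities, vary across published CD variants (Murdoch et al., 2018; Singh et al., 2018), so this is one plausible instantiation rather than the canonical one.

```python
import torch

def cd_linear(beta, gamma, weight, bias):
    # A linear layer distributes over the sum beta + gamma; the bias is
    # assigned here to the context term gamma (one common convention).
    return beta @ weight.T, gamma @ weight.T + bias

def cd_relu(beta, gamma):
    # Shapley-style split of the ReLU between the two terms. By
    # construction beta_out + gamma_out == relu(beta + gamma), so the
    # invariant of Eq 4 is preserved layer by layer.
    beta_out = 0.5 * (torch.relu(beta)
                      + torch.relu(beta + gamma)
                      - torch.relu(gamma))
    gamma_out = torch.relu(beta + gamma) - beta_out
    return beta_out, gamma_out
```

Composing such per-layer rules across all $L$ layers yields the pair $(\beta(x), \gamma(x))$ of Eq 3, and because $\beta + \gamma$ always reproduces the ordinary activation, the decomposition never changes the forward pass itself.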
4 RESULTS The results here demonstrate the efficacy of CDEP on a variety of datasets using diverse explanation types. Sec 4.1 shows results on ignoring spurious patches in the ISIC skin cancer dataset (Codella et al., 2019), Sec 4.2 details experiments on converting a DNN's preference for color into a preference for shape on a variant of the MNIST dataset (LeCun, 1998), and Sec 4.3 shows experiments on text data from the Stanford Sentiment Treebank (SST) (Socher et al., 2013).1 4.1 IGNORING SPURIOUS SIGNALS IN SKIN CANCER DIAGNOSIS In recent years, deep learning has achieved impressive results in diagnosing skin cancer, with predictive accuracy sometimes comparable to human doctors (Esteva et al., 2017). However, the datasets used to train these models often include spurious features which make it possible to attain high test accuracy without learning the underlying phenomena (Winkler et al., 2019). In particular, a popular dataset from ISIC (International Skin Imaging Collaboration) has colorful patches present in approximately 50% of the non-cancerous images but not in the cancerous images (Codella et al., 2019). An unpenalized DNN learns to look for these patches as an indicator for predicting that an image is benign. We use CDEP to remedy this problem by penalizing the DNN for placing importance on the patches during training. 1All models were trained in PyTorch. The task in this section is to classify whether an image of a skin lesion contains (1) benign melanoma or (2) malignant melanoma. The ISIC dataset consists of 21,654 images (19,372 benign), each diagnosed by histopathology or a consensus of experts. For classification, we use a VGG16 architecture (Simonyan & Zisserman, 2014) pre-trained on the ImageNet classification task2 and freeze the weights of the early layers so that only the fully connected layers are trained. In order to use CDEP, the spurious patches are identified via a simple image segmentation algorithm using a color threshold (see Sec S4). Table 1 shows results comparing the performance of a model trained with and without CDEP. We report results on two variants of the test set. The first, which we refer to as "no patches", only contains images of the test set that do not include patches. The second also includes images with those patches. Training with CDEP improves the AUC and F1-score for both test sets. In the first row of Table 1, the model is trained using only the data without the spurious patches, and the second row shows the model trained on the full dataset. The network trained using CDEP achieves the best AUC, surpassing both unpenalized versions. Applying our method increases the ROC AUC as well as the best F1 score. We also compared our method against RRR (Ross et al., 2017). For this, we restricted the batch size to 16 (and consequently used a learning rate of $10^{-5}$) due to memory constraints. Using RRR did not improve on the base AUC, implying that penalizing gradients is not helpful in penalizing higher-order features.3 Visualizing explanations Fig. 3 visualizes GradCAM heatmaps (Ozbulak, 2019; Selvaraju et al., 2017) for an unpenalized DNN and a DNN trained with CDEP to ignore spurious patches. As expected, after penalizing with CDEP, the DNN attributes less importance to the spurious patches, regardless of their position in the image. More examples, also for cancerous images, are shown in Sec S5. 2Pre-trained model retrieved from torchvision. 3We were not able to compare against the method recently proposed in Erion et al. (2019) due to the prohibitively slow training and large memory requirements.
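For reference, the transfer-learning setup just described can be sketched as follows with torchvision; the optimizer settings follow Sec S2.1, while the two-way classification head is our assumption about how the binary task is attached.

```python
import torch
import torchvision

model = torchvision.models.vgg16(pretrained=True)
for param in model.features.parameters():
    param.requires_grad = False  # freeze the convolutional feature extractor

# Replace the final ImageNet layer with a 2-way head (benign vs. malignant);
# only the fully connected classifier layers are trained.
model.classifier[6] = torch.nn.Linear(4096, 2)

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=0.01, momentum=0.9)
```

Freezing the feature extractor is also what makes CDEP cheap here: the CD decomposition only has to be propagated through the trainable classifier layers.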
4.2 COMBATING INDUCTIVE BIAS ON VARIANTS OF THE MNIST DATASET In this section, we investigate whether we can alter which features a DNN uses to perform digit classification, using variants of the MNIST dataset (LeCun, 1998) and a standard CNN architecture for this dataset retrieved from PyTorch.4 4.2.1 COLORMNIST Similar to a previous study (Li & Vasconcelos, 2019), we transform the MNIST dataset to include three color channels and assign each class a distinct color, as shown in Fig. 4. An unpenalized DNN trained on this biased data will completely misclassify a test set with inverted colors, dropping to 0% accuracy (see Section 4.2.1), suggesting that it learns to classify using the colors of the digits rather than their shape. Here, we want to see if we can alter the DNN to focus on the shape of the digits rather than their color. Interestingly, this can be enforced by minimizing the contribution of pixels in isolation while maximizing the importance of groups of pixels (which can represent shapes). To do this, we penalize the CD contribution of sampled single-pixel values, following Eq 5. By minimizing the contribution of single pixels, we effectively encourage the network to focus more on groups of pixels, which can represent shape. Section 4.2.1 shows that CDEP can partially shift the network's focus from color alone to digit shape as well. We compare CDEP to two previously introduced explanation penalization techniques: penalization of the squared gradients (RRR) (Ross et al., 2017) and Expected Gradients (EG) (Erion et al., 2019). For EG, we penalize the variance between the attributions of the RGB channels (as recommended by the authors of EG in personal correspondence). None of the baselines are able to improve the test accuracy of the model on this task above the random baseline, while CDEP is able to significantly improve this accuracy to 31.0%. We show the increase of predictive accuracy with increasing penalization in Fig. S5. 4Model and training code from https://github.com/pytorch/examples/blob/master/mnist/main.py. 4.2.2 DECOYMNIST For further comparison with previous work, we evaluate CDEP on an existing task: DecoyMNIST (Erion et al., 2019). DecoyMNIST adds a class-indicative gray patch to a random corner of the image. This task is relatively simple, as the spurious features are not entangled with any other feature and are always at the same location (the corners). Table 3 shows that all methods perform roughly equally, recovering the base accuracy. Results are reported using the best penalization parameter $\lambda$, chosen via cross-validation on the test accuracy. We provide details on the computation time and memory usage in Table S1, showing that CDEP is similar to existing approaches. However, when freezing early layers of a network and finetuning, CDEP very quickly becomes more efficient than the other methods. 4.3 FIXING BIAS IN TEXT DATA To demonstrate CDEP's effectiveness on text, we use the Stanford Sentiment Treebank (SST) dataset (Socher et al., 2013), an NLP benchmark dataset consisting of movie reviews with a binary sentiment (positive/negative). We inject spurious signals into the training set and train a standard LSTM5 to classify sentiment from the review. 5Model and training code from https://github.com/clairett/pytorch-sentiment-classification.
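As a sketch of how such a spurious signal can be injected, the snippet below implements the first of the three variants described next, which inserts a class-indicator word at a random position in each sentence; the indicator words follow the text, while the token-level representation and helper names are illustrative.

```python
import random

# Indicator words from the text: "text" marks the positive class,
# "video" the negative class.
INDICATOR = {1: "text", 0: "video"}

def inject_bias(tokens, label, rng=random):
    """Insert the class-indicator word at a random position in a tokenized sentence."""
    pos = rng.randrange(len(tokens) + 1)
    return tokens[:pos] + [INDICATOR[label]] + tokens[pos:]
```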
We create three variants of the SST dataset, each with different spurious signals which we aim to ignore (examples in Sec S1). In the first variant, we add indicator words for each class (positive: 'text', negative: 'video') at a random location in each sentence. An unpenalized DNN will focus only on those words, dropping to nearly random performance on the unbiased test set. In the second variant, we use two semantically similar words ('the', 'a') to indicate the class, by using one word only in the positive class and one only in the negative class. In the third case, we use 'he' and 'she' to indicate the class (example in Fig 5). Since these gendered words are only present in a small proportion of the training dataset (∼2%), for this variant we report accuracy only on the sentences in the test set that do include the pronouns (performance on the test dataset not including the pronouns remains unchanged). Table 4 shows the test accuracy for all datasets with and without CDEP. In all scenarios, CDEP successfully improves the test accuracy by ignoring the injected spurious signals. 5 CONCLUSION In this work we introduce a novel method to penalize neural networks to align with prior knowledge. Compared to previous work, CDEP is the first of its kind that can penalize complex features and feature interactions. Furthermore, CDEP is more computationally efficient than previous work and does not rely on backpropagating through attribution gradients, enabling its use with more complex neural networks. We show that CDEP can be used to remove bias and improve predictive accuracy on a variety of toy and real data. The experiments here demonstrate a variety of ways to use CDEP to improve models on both real and toy datasets. CDEP is quite versatile and can be used in many more areas to incorporate the structure of domain knowledge (e.g. biology or physics). Of course, the effectiveness of CDEP depends upon the quality of the prior knowledge used to determine the explanation targets. Future work includes extending CDEP to more complex penalties, incorporating more fine-grained explanations and interactions. We hope the work here will help push the field towards a more rigorous way to use interpretability methods, a point which will become increasingly important as interpretable machine learning develops as a field (Doshi-Velez & Kim, 2017; Murdoch et al., 2019). S1 ADDITIONAL DETAILS ABOUT SST TASK Section 4.3 shows the results for CDEP on biased variants of the SST dataset. Here we show examples of the biased sentences (for tasks 2 and 3 we only show sentences where the bias was present) in Figs. S1 to S3. For the first task, we insert one of two chosen words into 100% of the sentences, one word for the positive class and one for the negative class. We choose two words ("text" for the positive class and "video" for the negative class) that were not otherwise present in the data set but had a representation in Word2Vec.

Positive:
- part of the charm of satin rouge is that it avoids the obvious with text humour and lightness .
- text a screenplay more ingeniously constructed than 'memento'
- good fun text, good action, good acting, good dialogue, good pace, good cinematography .
- dramas like text this make it human .

Negative:
- ... begins with promise, but runs aground after being video snared in its own tangled plot .
- the video movie is well done, but slow .
- this orange has some juice , but it 's video far from fresh-squeezed .
- as it is, video it 's too long and unfocused .
Figure S1: Example sentences from variant 1 of the biased SST dataset, with decoy variables in each sentence. For the second task, we choose to replace two common words ("the" and "a") in sentences where they appear (27% of the dataset). We replace the words such that one word only appears in the positive class and the other word only in the negative class. By choosing words that are semantically almost interchangeable, we ensured that the normal sentence structure would not be broken, unlike in the first task.

Positive:
- comes off as a touching , transcendent love story .
- is most remarkable not because of its epic scope , but because of a startling intimacy
- couldn't be better as a cruel but weirdly likable wasp matron
- uses humor and a heartfelt conviction to tell that story about discovering your destination in life

Negative:
- to creep the living hell out of you
- holds its goodwill close , but is relatively slow to come to the point
- it 's not the great monster movie .
- consider the dvd rental instead

Figure S2: Example sentences from variant 2 of the SST dataset, with artificially induced bias on the articles ("the", "a"). Bias was only induced on the sentences where those articles were used (27% of the dataset). For the third task, we repeat the same procedure with two words ("he" and "she") that appeared in only 2% of the dataset. This helps evaluate whether CDEP works even if the spurious signal appears only in a small section of the data set.

Positive:
- pacino is the best she's been in years and keener is marvelous
- she showcases davies as a young woman of great charm , generosity and diplomacy
- shows she 's back in form , with an astoundingly rich film .
- proves once again that she's the best brush in the business

Negative:
- green ruins every single scene he's in, and the film, while it 's not completely wreaked, is seriously compromised by that
- i'm sorry to say that this should seal the deal - arnold is not, nor will he be, back .
- this is sandler running on empty , repeating what he 's already done way too often .
- so howard appears to have had free rein to be as pretentious as he wanted

Figure S3: Example sentences from variant 3 of the SST dataset, with artificially induced bias on the pronouns ("he", "she"). Bias was only induced on the sentences where those pronouns were used (2% of the dataset). S2 NETWORK ARCHITECTURES AND TRAINING S2.1 NETWORK ARCHITECTURES For the ISIC skin cancer task, we used a pretrained VGG16 network retrieved from the PyTorch model zoo. We use SGD as the optimizer, with a learning rate of 0.01 and momentum of 0.9. Preliminary experiments with Adam as the optimizer yielded poorer predictive performance. For both MNIST tasks, we use a standard convolutional network with two convolutional layers, each followed by max pooling, and two fully connected layers: Conv(20,5,5) - MaxPool - Conv(50,5,5) - MaxPool - FC(256) - FC(10). The models were trained with Adam, using a weight decay of 0.001. Penalizing explanations adds an additional hyperparameter, $\lambda$, to the training. $\lambda$ can either be set in proportion to the normal training loss or at a fixed rate. In this paper we did the latter. We expect that exploring the former could lead to a more stable training process. For all tasks, $\lambda$ was tested across a wide range, $[10^{-1}, 10^4]$. The LSTM for the SST experiments consisted of two LSTM layers with 128 hidden units, followed by a fully connected layer.
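The MNIST architecture spec above translates to the following PyTorch module; the 4x4 feature-map size follows from 28x28 inputs, and the input channel count (1 for DecoyMNIST, 3 for ColorMNIST) is our inference.

```python
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    # Conv(20,5,5) - MaxPool - Conv(50,5,5) - MaxPool - FC(256) - FC(10)
    def __init__(self, in_channels=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 20, kernel_size=5)
        self.conv2 = nn.Conv2d(20, 50, kernel_size=5)
        self.fc1 = nn.Linear(50 * 4 * 4, 256)  # 28x28 -> 4x4 after two conv+pool
        self.fc2 = nn.Linear(256, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        return self.fc2(F.relu(self.fc1(x.flatten(1))))
```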
S2.2 COLORMNIST For fixing the bias in the ColorMNIST task, we sample pixels from the distribution of non-zero pixels over the whole training set, as shown in Fig. S4. Figure S4: Sampling distribution for ColorMNIST. Figure S5: Results on ColorMNIST (test accuracy), all averaged over thirty runs. CDEP is the only method that captures and removes color bias. For Expected Gradients, we show results when sampling pixels as well as when penalizing the variance between attributions for the RGB channels (as recommended by the authors of EG) in Fig. S5. Neither variant goes above random accuracy, only reaching random accuracy when regularized to a constant prediction. S3 RUNTIME AND MEMORY REQUIREMENTS OF DIFFERENT ALGORITHMS This section provides further details on the runtime and memory requirements reported in Table S1. We compared the runtime and memory requirements of the available regularization schemes when implemented in PyTorch. Memory usage and runtime were tested on the DecoyMNIST task with a batch size of 64. It is expected that the exact ratios will change depending on the complexity of the network used and the batch size (since constant memory usage becomes disproportionately smaller with increasing batch size). The memory usage was read by recording the memory allocated by PyTorch. Since Expected Gradients and RRR require two forward and backward passes, we only record the maximum memory usage. We ran experiments on a single Titan X.

Table S1: Memory usage and run time recorded for the DecoyMNIST task.

                              Unpenalized   CDEP    RRR     Expected Gradients
Run time/epoch (seconds)      4.7           17.1    11.2    17.8
Maximum GPU RAM usage (GB)    0.027         0.068   0.046   0.046

S4 IMAGE SEGMENTATION FOR ISIC SKIN CANCER To obtain the binary maps of the patches for the skin cancer task, we first segment the images using SLIC, a common image-segmentation algorithm (Achanta et al., 2012). Since the patches look quite distinct from the rest of the image, the patches are usually their own segment. Subsequently, we take the mean RGB and HSV values for all segments and filter for segments whose mean is substantially different from the typical Caucasian skin tone. Since different images deviated from the typical skin color in different attributes, we filtered for those images recursively. As an example, in the image shown in Fig. S6, the patch has a much higher saturation than the rest of the image. For each image, we exported a map as seen in Fig. S6. Figure S6: Sample segmentation for the ISIC task. S5 ADDITIONAL HEATMAP EXAMPLES FOR ISIC We show additional examples from the test set of the skin cancer task in Figs. S7 and S8. We see that the importance maps for the unregularized and regularized networks are very similar for cancerous images and for non-cancerous images without a patch. The patches are ignored by the network regularized with CDEP. Figure S7: Heatmaps (image, vanilla, CDEP) for benign samples from ISIC. Figure S8: Heatmaps (image, vanilla, CDEP) for cancerous samples from ISIC. A different spurious correlation that we noticed was that proportionally more images showing skin cancer have a ruler next to the lesion. This is the case because doctors often include a reference for size when they suspect that a lesion is cancerous. Even though this spurious correlation is less pronounced (in a very rough cursory count, 13% of the cancerous and 5% of the benign images contain some sort of measure), the networks learnt to recognize and exploit it.
This further highlights the need for CDEP, especially in medical settings. Figure S9: Both networks (vanilla and CDEP) learnt the non-penalized spurious correlation that proportionally more images with malignant lesions feature a ruler next to the lesion (ruler -> cancer). To make the comparison easier, we visualize the heatmap by multiplying it with the image, so that visible regions are those important for classification.
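Finally, the patch-mask construction of Sec S4 can be sketched with scikit-image's SLIC as below; the skin-tone anchor, distance threshold, and segment count are illustrative assumptions rather than the paper's exact values, and the recursive per-attribute filtering described above is omitted for brevity.

```python
import numpy as np
from skimage.segmentation import slic

SKIN_TONE = np.array([0.80, 0.60, 0.55])  # assumed "typical" mean RGB
THRESHOLD = 0.35                          # assumed color-distance cutoff

def patch_mask(image):
    """Binary map of likely patch pixels for an (H, W, 3) image in [0, 1]."""
    segments = slic(image, n_segments=50)
    mask = np.zeros(image.shape[:2], dtype=bool)
    for seg_id in np.unique(segments):
        seg = segments == seg_id
        if np.linalg.norm(image[seg].mean(axis=0) - SKIN_TONE) > THRESHOLD:
            mask |= seg  # segment color is far from skin -> likely a patch
    return mask
```

Such a mask then plays the role of the feature group $x_{i,S}$ in Eq 5, with an explanation target of zero.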
1. What is the focus of the paper regarding explanation methods in deep learning models? 2. What are the strengths of the proposed method, particularly in its ability to utilize human-provided annotations? 3. Do you have any concerns or disagreements regarding the premise of the paper? 4. How does the paper relate to prior works on annotator rationales and human-provided explanations? 5. What are your thoughts on the experimental results presented in the paper? 6. Are there any areas where further tuning or improvement could be explored in the proposed method?
Review
Review This paper presents a method intended to allow practitioners to *use* explanations provided by various methods. Concretely, the authors propose contextual decomposition explanation penalization (CDEP), which aims to use explanation methods to allow users to dissuade the model from learning unwanted correlations. The proposed method is somewhat similar to prior work by Ross et al., in that the idea is to include an explicit term in the objective that encourages the model to align with prior knowledge. In particular, the authors assume supervision --- effectively labeled features, from what I gather --- provided by users and define an objective that penalizes divergence from this. The object that is penalized is $\beta(x_{i,S})$, which is the importance score for feature group $S$ in instance $i$; for this they use a decontextualized representation of the feature (this is the contextual decomposition aspect). Although the authors highlight that any differentiable scoring function could be used, I think the use of this decontextualized variant as is done here is nice because it avoids issues with feature interactions in the hidden space that might result in misleading 'attribution' w.r.t. the original inputs. The main advantage of this effort compared to work that directly penalizes the gradients (as in Ross et al.) is that the method does not rely on second gradients (gradients of gradients), which is computationally problematic. Overall, this is a nice contribution that offers a new mechanism for exploiting human provided annotations. I do have some specific comments below. - I am not sure I agree with the premise as stated here. Namely, the authors write "For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective" -- I would argue that an explanation may be useful in and of itself by highlighting how a model came to a prediction. I am not convinced that it need necessarily lead to, e.g., improving model performance. I think the authors are perhaps arguing that explanations might be used to interactively improve the underlying model, which is an interesting and sensible direction. - This work, which aims to harness user supervision on explanations to improve model performance, seems closely related to work on "annotator rationales" (Zaidan 2007 being the first work on this), but no mention is made of this. "Do Human Rationales Improve Machine Explanations?" by Strout et al. (2019) also seems relevant as a more recent instance in this line of work. I do not think such approaches are necessarily directly comparable, but some discussion of how this effort is situated with respect to this line of work would be appreciated. - The experiment with MNIST colors was neat. - The authors compare their approach to Ross and colleagues in Table 1 but see quite poor results for the latter approach. Is this a result of the smaller batch size / learning rate adjustment? It seems that some tuning of this approach is warranted. - Figure 3 is nice but not terribly surprising: The image shows that the objective indeed works as expected; but if this were not the case, then it would suggest basically a failure of optimization (i.e., the objective dictates that the image should look like this *by construction*). Still, it's a good sanity check.
ICLR
Title Interpretations are useful: penalizing explanations to align neural networks with prior knowledge Abstract For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective. Too often, the litany of proposed explainable deep learning methods stop at the first step, providing practitioners with insight into a model, but no way to act on it. In this paper, we propose contextual decomposition explanation penalization (CDEP), a method which enables practitioners to leverage existing explanation methods in order to increase the predictive accuracy of deep learning models. In particular, when shown that a model has incorrectly assigned importance to some features, CDEP enables practitioners to correct these errors by directly regularizing the provided explanations. Using explanations provided by contextual decomposition (CD) (Murdoch et al., 2018), we demonstrate the ability of our method to increase performance on an array of toy and real datasets. N/A For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective. Too often, the litany of proposed explainable deep learning methods stop at the first step, providing practitioners with insight into a model, but no way to act on it. In this paper, we propose contextual decomposition explanation penalization (CDEP), a method which enables practitioners to leverage existing explanation methods in order to increase the predictive accuracy of deep learning models. In particular, when shown that a model has incorrectly assigned importance to some features, CDEP enables practitioners to correct these errors by directly regularizing the provided explanations. Using explanations provided by contextual decomposition (CD) (Murdoch et al., 2018), we demonstrate the ability of our method to increase performance on an array of toy and real datasets. 1 INTRODUCTION In recent years, neural networks have demonstrated strong predictive performance across a wide variety of settings. However, in order to achieve that accuracy, they sometimes latch onto spurious correlations, leading to undesirable behavior as a result of dataset bias (Winkler et al., 2019), racial and ethnic stereotypes (Garg et al., 2018), or simply overfitting. While recent work into explaining neural network predictions (Murdoch et al., 2019; Doshi-Velez & Kim, 2017) has demonstrated an ability to uncover the relationships learned by a model, it is still unclear how to actually alter the model in order to remove incorrect, or undesirable, relationships. We introduce contextual decomposition explanation penalization (CDEP), a method which leverages existing explanation techniques for neural networks in order to prevent a model from learning unwanted relationships and ultimately improve predictive accuracy. Given particular importance scores, CDEP works by allowing the user to directly penalize importances of certain features, or interactions. This forces the neural network to not only produce the correct prediction, but also the correct explanation for that prediction. While we focus on contextual decomposition (CD) (Murdoch et al., 2018; Singh et al., 2018), which allows the penalization of both feature importances and interactions, CDEP can be readily adapted for existing interpretation techniques, as long as they are differentiable. 
Moreover, CDEP is a general technique, which can be applied to arbitrary neural network architectures, and is orders of magnitude faster and more memory efficient than recent gradient-based methods, allowing its use on meaningful datasets. In order to demonstrate the effectiveness of CDEP, we conducted experiments across a wide variety of tasks. In the prediction of skin cancer from images, CDEP improves the prediction of a classifier by teaching it to ignore spurious confounding variables present in the training data. In a colored MNIST task, CDEP allows the network to focus on a digit’s shape rather than its color (with no extra human annotation needed). Finally, a toy example using text classification shows how the penalization can help a network avoid a bias towards particular words, such as those involving gender. 2 BACKGROUND Explanation methods Many methods have been developed to help explain the learned relationships contained in a DNN. For local, or prediction-level, explanation, most prior work has focused on assigning importance to individual features, such as pixels in an image or words in a document. There are several methods that give feature-level importance for different architectures. They can be categorized as gradient-based (Springenberg et al., 2014; Sundararajan et al., 2017; Selvaraju et al., 2016; Baehrens et al., 2010; Rieger & Hansen, 2019), decomposition-based (Murdoch & Szlam, 2017; Shrikumar et al., 2016; Bach et al., 2015) and others (Dabkowski & Gal, 2017; Fong & Vedaldi, 2017; Ribeiro et al., 2016; Zintgraf et al., 2017), with many similarities among the methods (Ancona et al., 2018; Lundberg & Lee, 2017). However, many of these methods have thus far been poorly evaluated (Adebayo et al., 2018; Nie et al., 2018), casting doubt on their usefulness. Another line of work, which we build upon, has focused on uncovering interactions between features, in addition to feature importances, (Murdoch et al., 2018), and using those interactions to create a hierarchy of features displaying the model’s prediction process (Singh et al., 2018). Uses of explanation methods While much work has been put into developing methods for explaining DNNs, relatively little work has explored the potential to use these explanations to help build a better model. Some recent work proposes forcing models to attend to regions of the input which are known to be important (Burns et al., 2018; Mitsuhara et al., 2019), although it is important to note that attention is often not the same as explanation (Jain & Wallace, 2019). An alternative line of work proposes penalizing the gradients of a neural network to match human-provided binary annotations and shows the possibility to improve performance (Ross et al., 2017) and adversarial robustness (Ross & Doshi-Velez, 2018). Two recent papers extend these ideas by penalizing attributions for natural language models (Liu & Avci, 2019) and penalizing a modified gradient-based score to produce smooth attributions (Erion et al., 2019). Predating deep learning, (Zaidan et al., 2007) consider the use of “annotator rationales” in sentiment analysis to train support vector machines. This work on annotator rationales was recently extended to show improved explanations (not accuracy) in CNNs (Strout et al., 2019). Other ways to constrain DNNs While we focus on the use of explanations to constrain the relationships learned by neural networks, other approaches for constraining neural networks have also been proposed. 
A computationally intensive alternative is to augment the dataset in order to prevent the model from learning undesirable relationships, through domain knowledge (Bolukbasi et al., 2016), projecting out superficial statistics (Wang et al., 2019) or dramatically altering training images (Geirhos et al., 2018). However, these processes are often not feasible, either due to their computational cost or the difficulty of constructing such an augmented data set. Adversarial training has also been explored (Zhang & Zhu, 2019). These techniques are generally limited, as they are often tied to particular datasets, and do not provide a clear link between learning about a model’s learned relationships through explanations, and subsequently correcting them. 3 METHODS We now introduce CDEP, which penalizes the explanations of a neural network in order to align with prior knowledge about why a model should make a prediction. To do so, for each data point it penalizes the CD scores of features, or groups of features, which a user does not want the model to learn to be important. While we focus on CD scores, which allow the penalization of interactions between features in addition to features themselves, this approach readily generalizes to other interpretation techniques, so long as they are differentiable. 3.1 AUGMENTING THE LOSS FUNCTION Given a particular classification task, we want to teach a model to not only produce the correct prediction, but also to arrive at the prediction for the correct reasons. That is, we want the model to be right for the right reasons, where the right reasons are provided by the user and are datasetdependent. To accomplish this, CDEP modifies the objective function used to train a neural network, as displayed in Eq 1. In addition to the standard prediction loss L, which teaches the model to produce the correct predictions, CDEP adds an explanation error Lexpl, which teaches the model to produce the correct explanations for its predictions. In place of the prediction and labels fθ(X), y, used in the prediction error L, the explanation error Lexpl uses the explanations produced by an interpretation method explθ(X), along with targets provided by the user explX . As is common with penalization, the two losses are weighted by a hyperparameter λ ∈ R: θ̂ = argmin θ L (fθ(X), y)︸ ︷︷ ︸ Prediction error +λLexpl (explθ(X), explX)︸ ︷︷ ︸ Explanation error (1) The precise meanings of explX depend on the context. For example, in the skin cancer image classification task described in Section 4, many of the benign skin images contain band-aids, but none of the malignant images. To force the model to ignore the band-aids in making their prediction, in each image explθ(X) denotes the importance score of the band-aid and explX would be zero. These and more examples are further explored in Section 4. 3.2 CONTEXTUAL DECOMPOSITION (CD) In this work, we use the CD score as the explanation function. In contrast to other interpretation methods, which focus on feature importances, CD also captures interactions between features. CD was originally designed for LSTMs (Murdoch et al., 2018) and subsequently extended to convolutional neural networks and arbitrary DNNs (Singh et al., 2018). For a given DNN f(x), one can represent its output as a SoftMax operation applied to logits g(x). These logits, in turn, are the composition of L layers gi, such as convolutional operations or ReLU non-linearities. 
f(x) = SoftMax(g(x)) = SoftMax(gL(gL−1(...(g2(g1(x)))))) (2) Given a group of features {xj}j∈S , the CD algorithm, gCD(x), decomposes the logits g(x) into a sum of two terms, β(x) and γ(x). β(x) is the importance score of the feature group {xj}j∈S , and γ(x) captures contributions to g(x) not included in β(x). The decomposition is computed by iteratively applying decompositions gCDi (x) for each of the layers gi(x). gCD(x) = gCDL (g CD L−1(...(g CD 2 (g CD 1 (x)))))) = (β(x), γ(x)) (3) β(x) + γ(x) = g(x) (4) 3.3 CDEP OBJECTIVE FUNCTION We now substitute the above CD scores into the generic equation in Eq 1 to arrive at the method used in this paper. While we use CD for the explanation method explθ(X), other explanation methods could be readily substituted at this stage. In order to convert CD scores to probabilities, we apply a SoftMax operation to gCD(x), allowing for easier comparison with the user-provided labels explX . We collect from the user, for each input xi, a collection of feature groups xi,S , xi ∈ Rd, S ⊆ {1, ..., d}, along with explanation target values explxi,S , and use the ‖ · ‖1 loss for Lexpl. θ̂ = argmin θ ∑ i ∑ c − yi,c log fθ(xi)c︸ ︷︷ ︸ Classification error +λ ∑ i ∑ S ||β(xi,S)− explxi,S ||1︸ ︷︷ ︸ Explanation error (5) In the above, i indexes each individual example in the dataset, S indexes a subset of the features for which we penalize their explanations, and c sums over each class. Updating the model parameters in accordance with this formulation ensures that the model not only predicts the right output but also does so for the right (aligned with prior knowledge) reasons. 3.4 COMPUTATIONAL CONSIDERATIONS A similar idea to Eq 1 has been proposed in previous/concurrent work, where the choice of explanation method uses a gradient-based attribution method (Ross et al., 2017; Erion et al., 2019). However, using such methods leads to three main complications which are solved by our approach. The first complication is the optimization process. When optimizing over attributions from a gradientbased attribution method via gradient descent, the optimizer requires the gradient of the gradient, thus requiring that all network components be twice differentiable. This process is computationally expensive and indeed optimizing it exactly involves optimizing over a differential equation. In contrast, CD attributions are calculated along with the forward pass of the network, and as a result can be optimized plainly with back-propagation using the standard single forward-pass and backward-pass per batch. A second complication solved by the use of CD in Eq 5 is the ability to quickly finetune a pre-trained network. In many applications, particularly in transfer learning, it is common to finetune only the last few layers of a pre-trained neural network. Using CD, one can freeze early layers of the network and then finetune the last few layers of the network quickly as the activations and gradients of the frozen layers are not necessary. Third, penalizing gradient-based methods incurs a very large memory usage. Using gradient-based methods, training requires the storage of activations and gradients for all layers of the network as well as the gradient of input (which can be omitted in normal training). Even for the simplest version, based on saliency, this more than doubles the required memory for a given batch and network size. More advanced methods proved to be completely infeasible to apply to a real-life dataset used, since the memory requirements were too high. 
By contrast, penalizing CD only requires a small constant amount of memory more than standard training. 4 RESULTS The results here demonstrate the efficacy of CDEP on a variety of datasets using diverse explanation types. Sec 4.1 shows results on ignoring spurious patches in the ISIC skin cancer dataset (Codella et al., 2019), Sec 4.2 details experiments on converting a DNN’s preference for color to a preference for shape on a variant of the MNIST dataset (LeCun, 1998), and Sec 4.3 shows experiments on text data from the Stanford Sentiment Treebank (SST) (Socher et al., 2013).1 4.1 IGNORING SPURIOUS SIGNALS IN SKIN CANCER DIAGNOSIS In recent years, deep learning has achieved impressive results in diagnosing skin cancer, with predictive accuracy sometimes comparable to human doctors (Esteva et al., 2017). However, the datasets used to train these models often include spurious features which make it possible to attain high test accuracy without learning the underlying phenomena (Winkler et al., 2019). In particular, a popular dataset from ISIC (International Skin Imaging Collaboration) has colorful patches present in approximately 50% of the non-cancerous images but not in the cancerous images (Codella et al., 2019). An unpenalized DNN learns to look for these patches as an indicator for predicting that an image is benign. We use CDEP to remedy this problem by penalizing the DNN placing importance on the patches during training. 1All models were trained in PyTorch. The task in this section is to classify whether an image of a skin lesion contains (1) benign melanoma or (2) malignant melanoma. The ISIC dataset consists of 21,654 images (19,372 benign), each diagnosed by histopathology or a consensus of experts. For classification, we use a VGG16 architecture (Simonyan & Zisserman, 2014) pre-trained on the ImageNet Classification task 2 and freeze the weights of early layers so that only the fully connected layers are trained. In order to use CDEP, the spurious patches are identified via a s imple image segmentation algorithm using a color threshold (see Sec S4). Table 1 shows results comparing the performance of a model trained with and without CDEP. We report results on two variants of the test set. The first, which we refer to as “no patches” only contains images of the test set that do not include patches. The second also includes images with those patches. Training with CDEP improves the AUC and F1-score for both test sets. In the first row of Table 1, the model is trained using only the data without the spurious patches, and the second row shows the model trained on the full dataset. The network trained using CDEP achieves the best AUC, surpassing both unpenalized versions. Applying our method increases the ROC AUC as well as the best F1 score. We also compared our method against the method introduced in 2017 by Ross et al. (RRR). For this, we restricted the batch size to 16 (and consequently use a learning rate of 10−5) due to memory constraints. Using RRR did not improve on the base AUC, implying that penalizing gradients is not helpful in penalizing higher-order features.3 Visualizing explanations Fig. 3 visualize GradCAM heatmaps (Ozbulak, 2019; Selvaraju et al., 2017) for an unpenalized DNN and a DNN trained with CDEP to ignore spurious patches. As expected, after penalizing with CDEP, the DNN attributes less importance to the spurious patches, regardless of their position in the image. More examples, also for cancerous images, are shown in Sec S5. 
2Pre-trained model retrieved from torchvision. 3We were not able to compare against the method recently proposed in Erion et al. (2019) due to the prohibitively slow training and large memory requirements. 4.2 COMBATING INDUCTIVE BIAS ON VARIANTS OF THE MNIST DATASET In this section, we investigate whether we can alter which features a DNN uses to perform digit classification, using variants of the MNIST dataset (LeCun, 1998) and a standard CNN architecture for this dataset retrieved from PyTorch 4. 4.2.1 COLORMNIST Similar to a previous study (Li & Vasconcelos, 2019), we transform the MNIST dataset to include three color channels and assign each class a distinct color, as shown in Fig. 4. An unpenalized DNN trained on this biased data will completely misclassify a test set with inverted colors, dropping to 0% accuracy (see Section 4.2.1), suggesting that it learns to classify using the colors of the digits rather than their shape. Here, we want to see if we can alter the DNN to focus on the shape of the digits rather than their color. Interestingly, this can be enforced by minimizing the contribution of pixels in isolation while maximizing the importance of groups of pixels (which can represent shapes). To do this, we add penalize the CD contribution of sampled single pixel values, following Eq 5. By minimizing the contribution of single pixels we effectively encourage the network to focus more on groups of pixels, which can represent shape. Section 4.2.1 shows that CDEP can partially change the network’s focus on solely color to also focus on digit shape. We compare CDEP to two previously introduced explanation penalization techniques: penalization of the squared gradients (RRR) (Ross et al., 2017) and Expected Gradients (EG) (Erion et al., 2019). For EG we penalize variance between attributions of the RGB channels 4Model and training code from https://github.com/pytorch/examples/blob/master/mnist/main.py. (as recommended by the authors of EG in personal correspondence). None of the baselines are able to improve the test accuracy of the model on this task above the random baseline, while CDEP is able to significantly improve this accuracy to 31.0%. We show the increase of predictive accuracy with increasing penalization in Fig. S5. 4.2.2 DECOYMNIST For further comparison with previous work, we evaluate CDEP on an existing task: DecoyMNIST (Erion et al., 2019). DecoyMNIST adds a class-indicative gray patch to a random corner of the image. This task is relatively simple, as the spurious features are not entangled with any other feature and are always at the same location (the corners). Table 3 shows that all methods perform roughly equally, recovering the base accuracy. Results are reported using the best penalization parameter λ, chosen via cross-validation on the test accuracy. We provide details on the computation time, and memory usage in Table S1, showing that CDEP is similar to existing approaches. However, when freezing early layers of a network and finetuning, CDEP very quickly becomes more efficient than other methods. 4.3 FIXING BIAS IN TEXT DATA To demonstrate CDEP’s effectiveness on text, we use the Stanford Sentiment Treebank (SST) dataset (Socher et al., 2013), an NLP benchmark dataset consisting of movie reviews with a binary sentiment (positive/negative). We inject spurious signals into the training set and train a standard LSTM 5 to classify sentiment from the review. 5Model and training code from https://github.com/clairett/pytorch-sentiment-classification. 
We create three variants of the SST dataset, each with different spurious signals which we aim to ignore (examples in Sec S1). In the first variant, we add indicator words for each class (positive: ‘text’, negative: ‘video’) at a random location in each sentence. An unpenalized DNN will focus only on those words, dropping to nearly random performance on the unbiased test set. In the second variant, we use two semantically similar words (‘the’, ‘a’) to indicate the class by using one word only in the positive and one only in the negative class. In the third case, we use ‘he’ and ‘she’ to indicate class (example in Fig 5). Since these gendered words are only present in a small proportion of the training dataset (∼ 2%), for this variant, we report accuracy only on the sentences in the test set that do include the pronouns (performance on the test dataset not including the pronouns remains unchanged). Table 4 shows the test accuracy for all datasets with and without CDEP. In all scenarios, CDEP is successfully able to improve the test accuracy by ignoring the injected spurious signals. 5 CONCLUSION In this work we introduce a novel method to penalize neural networks to align with prior knowledge. Compared to previous work, CDEP is the first of its kind that can penalize complex features and feature interactions. Furthermore, CDEP is more computationally efficient than previous work and does not rely on backpropagation, enabling its use with more complex neural networks. We show that CDEP can be used to remove bias and improve predictive accuracy on a variety of toy and real data. The experiments here demonstrate a variety of ways to use CDEP to improve models both on real and toy datasets. CDEP is quite versatile and can be used in many more areas to incorporate the structure of domain knowledge (e.g. biology or physics). Of course, the effectiveness of CDEP depends upon the quality of the prior knowledge used to determine the explanation targets. Future work includes extending CDEP to more complex penalties, incorporating more fine-grained explanations and interactions. We hope the work here will help push the field towards a more rigorous way to use interpretability methods, a point which will become increasingly important as interpretable machine learning develops as a field (Doshi-Velez & Kim, 2017; Murdoch et al., 2019). S1 ADDITIONAL DETAILS ABOUT SST TASK Section 4.3 shows the results for CDEP on biased variants of the SST dataset. Here we show examples of the biased sentences (for task 2 and 3 we only show sentences where the bias was present) in Figs. S1 to S3. For the first task, we insert two randomly chosen words in 100% of the sentences in the positive and negative class respectively. We choose two words (“text” for the positive class and “video” for the negative class) that were not otherwise present in the data set but had a representation in Word2Vec. Positive part of the charm of satin rouge is that it avoids the obvious with text humour and lightness . text a screenplay more ingeniously constructed than ‘memento’ good fun text, good action, good acting, good dialogue, good pace, good cinematography . dramas like text this make it human . Negative ... begins with promise, but runs aground after being video snared in its own tangled plot . the video movie is well done, but slow . this orange has some juice , but it 's video far from fresh-squeezed . as it is, video it 's too long and unfocused . 
Figure S1: Example sentences from the variant 1 of the biased SST dataset with decoy variables in each sentence. For the second task, we choose to replace two common words (”the” and ”a”) in sentences where they appear (27% of the dataset). We replace the words such that one word only appears in the positive class and the other world only in the negative class. By choosing words that are semantically almost replaceable, we ensured that the normal sentence structure would not be broken such as with the first task. Positive comes off as a touching , transcendent love story . is most remarkable not because of its epic scope , but because of a startling intimacy couldn't be better as a cruel but weirdly likable wasp matron uses humor and a heartfelt conviction to tell that story about discovering your destination in life Negative to creep the living hell out of you holds its goodwill close , but is relatively slow to come to the point it 's not the great monster movie . consider the dvd rental instead Figure S2: Example sentences from the variant 2 of the SST dataset with artificially induced bias on articles (”the”, ”a”). Bias was only induced on the sentences where those articles were used (27% of the dataset). For the third task we repeat the same procedure with two words (“he” and “she”) that appeared in only 2% of the dataset. This helps evaluate whether CDEP works even if the spurious signal appears only in a small section of the data set. Positive pacino is the best she's been in years and keener is marvelous she showcases davies as a young woman of great charm , generosity and diplomacy shows she 's back in form , with an astoundingly rich film . proves once again that she's the best brush in the business Negative green ruins every single scene he's in, and the film, while it 's not completely wreaked, is seriously compromised by that i'm sorry to say that this should seal the deal - arnold is not, nor will he be, back . this is sandler running on empty , repeating what he 's already done way too often . so howard appears to have had free rein to be as pretentious as he wanted Figure S3: Example sentences from the variant 3 of the SST dataset with artificially induced bias on articles (”he”, ”she”). Bias was only induced on the sentences where those articles were used (2% of the dataset). S2 NETWORK ARCHITECTURES AND TRAINING S2.1 NETWORK ARCHITECTURES For the ISIC skin cancer task we used a pretrained VGG16 network retrieved from the PyTorch model zoo. We use SGD as the optimizer with a learning rate of 0.01 and momentum of 0.9. Preliminary experiments with Adam as the optimizer yielded poorer predictive performance. or both MNIST tasks, we use a standard convolutional network with two convolutional channels followed by max pooling respectively and two fully connected layers: Conv(20,5,5) - MaxPool() - Conv(50,5,5) - MaxPool - FC(256) - FC(10). The models were trained with Adam, using a weight decay of 0.001. Penalizing explanations adds an additional hyperparameter, λ to the training. λ can either be set in proportion to the normal training loss or at a fixed rate. In this paper we did the latter. We expect that exploring the former could lead to a more stable training process. For all tasks λ was tested across a wide range between [10−1, 104]. The LSTM for the SST experiments consisted of two LSTM layers with 128 hidden units followed by a fully connected layer. 
S2.2 COLORMNIST For fixing the bias in the ColorMNIST task, we sample pixels from the distribution of non-zero pixels over the whole training set, as shown in Fig. S4 Figure S4: Sampling distribution for ColorMNIST Figure S5: Results on ColorMNIST (Test Accuracy). All averaged over thirty runs. CDEP is the only method that captures and removes color bias. For Expected Gradients we show results when sampling pixels as well as when penalizing the variance between attributions for the RGB channels (as recommended by the authors of EG) in Fig. S5. Neither of them go above random accuracy, only achieving random accuracy when they are regularized to a constant prediction. S3 RUNTIME AND MEMORY REQUIREMENTS OF DIFFERENT ALGORITHMS This section provides further details on runtime and memory requirements reported in Table S1. We compared the runtime and memory requirements of the available regularization schemes when implemented in Pytorch. Memory usage and runtime were tested on the DecoyMNIST task with a batch size of 64. It is expected that the exact ratios will change depending on the complexity of the used network and batch size (since constant memory usage becomes disproportionally smaller with increasing batch size). The memory usage was read by recording the memory allocated by PyTorch. Since Expected Gradients and RRR require two forward and backward passes, we only record the maximum memory usage. We ran experiments on a single Titan X. Table S1: Memory usage and run time were recorded for the DecoyMNIST task. Unpenalized CDEP RRR Expected Gradients Run time/epoch (seconds) 4.7 17.1 11.2 17.8 Maximum GPU RAM usage (GB) 0.027 0.068 0.046 0.046 S4 IMAGE SEGMENTATION FOR ISIC SKIN CANCER To obtain the binary maps of the patches for the skin cancer task, we first segment the images using SLIC, a common image-segmentation algorithm (Achanta et al., 2012). Since the patches look quite distinct from the rest of the image, the patches are usually their own segment. Subsequently we take the mean RGB and HSV values for all segments and filtered for segments which the mean was substantially different from the typical caucasian skin tone. Since different images were different from the typical skin color in different attributes, we filtered for those images recursively. As an example, in the image shown in Fig. S6, the patch has a much higher saturation than the rest of the image. For each image we exported a map as seen in Fig. S6. Figure S6: Sample segmentation for the ISIC task. S5 ADDITIONAL HEATMAP EXAMPLES FOR ISIC We show additional examples from the test set of the skin cancer task in Figs. S7 and S8. We see that the importance maps for the unregularized and regularized network are very similar for cancerous images and non-cancerous images without patch. The patches are ignored by the network regularized with CDEP. Image Vanilla CDEP Figure S7: Heatmaps for benign samples from ISIC Image Vanilla CDEP Figure S8: Heatmaps for cancerous samples from ISIC A different spurious correlation that we noticed was that proportionally more images showing skin cancer will have a ruler next to the lesion. This is the case because doctors often want to show a reference for size if they diagnosed that the lesion is cancerous. Even though the spurious correlation is less pronounced (in a very rough cursory count, 13% of the cancerous and 5% of the benign images contain some sort of measure), the networks learnt to recognize and exploit this spurious correlation. 
This further highlights the need for CDEP, especially in medical settings.

Figure S9 (columns: Image, Vanilla, CDEP): Both networks learnt the non-penalized spurious correlation (ruler -> cancer), i.e., that proportionally more images with malignant lesions feature a ruler next to the lesion. To make comparison easier, we visualize the heatmap by multiplying it with the image. Visible regions are important for classification.
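The masking visualization used in Figs. S7-S9 can be sketched in a few lines. The min-max normalization step is an assumption, added only so the product stays in a displayable range.

```python
# Sketch: multiply a normalized importance heatmap with the image so that
# only regions the network deems important remain visible.
import numpy as np
import matplotlib.pyplot as plt

def show_masked(image, heatmap):
    """image: H x W x 3 floats in [0,1]; heatmap: H x W importance scores."""
    h = heatmap - heatmap.min()
    h = h / (h.max() + 1e-8)             # normalize to [0,1] (assumption)
    plt.imshow(image * h[..., None])     # dim unimportant regions to black
    plt.axis("off")
    plt.show()
```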
1. What is the main contribution of the paper regarding using generated explanations to improve model performance?
2. What are the concerns regarding the proposed method and experiment design?
3. Why does the reviewer think that the proposed method may not be necessary?
4. What is the reviewer's opinion on the experiment design, particularly in sections 4.2 and 4.3?
5. How does the reviewer assess the overall value of the work despite their concerns?
Review
The paper presents a way of using generated explanations of model predictions to help prevent a model from learning "unwanted" relationships between features and class labels. This idea was implemented with a particular explanation generation method from prior work, called contextual decomposition (CD). For a given feature, the corresponding CD can be used to measure its importance. The proposed learning objective in this work optimizes not only the cross-entropy loss, but also the difference between the CD score of a given feature and its explanation target value. Experiments show that this new learning algorithm can largely improve the classification performance.

I like the high-level idea of this work and agree that there is not much work on using prediction explanations to help improve model performance. However, there are two major concerns about the model and experiment design.

First, it seems like the proposed method requires that whoever uses it already knows what the problem is. For example,
- in section 3.3, the model inputs include a collection of features and the corresponding explanation target values.
- in section 4.1, it is already known that some colorful patches only appear in some non-cancerous images but not in cancerous images.
- it is even more obvious in sections 4.2 and 4.3, because in both experiments, the training and test examples were altered on purpose to create some mismatch.

My question is: if we already know the bias or the mismatch, why not directly use this information in the regularization to penalize some features? Is it necessary to resort to some explanation generation methods?

My second concern is more of a personal opinion. In the experiment of section 4.2, if the colors are good indicators of these digits in the training set, I don't think it is wrong for a model to capture these important features. However, the way of altering examples in the same class with different colors in the training and test sets seems questionable, because now the distributions of training and test images are different. On the other hand, if we already know color is the issue, why not simply convert the images into black-and-white? A similar argument can also be applied to the experiment in section 4.3.

Overall, I like the idea of using explanations to help build a better classifier. However, I am concerned about the value of this work.
ICLR
Title Interpretations are useful: penalizing explanations to align neural networks with prior knowledge

Abstract For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective. Too often, the litany of proposed explainable deep learning methods stops at the first step, providing practitioners with insight into a model, but no way to act on it. In this paper, we propose contextual decomposition explanation penalization (CDEP), a method which enables practitioners to leverage existing explanation methods in order to increase the predictive accuracy of deep learning models. In particular, when shown that a model has incorrectly assigned importance to some features, CDEP enables practitioners to correct these errors by directly regularizing the provided explanations. Using explanations provided by contextual decomposition (CD) (Murdoch et al., 2018), we demonstrate the ability of our method to increase performance on an array of toy and real datasets.

1 INTRODUCTION

In recent years, neural networks have demonstrated strong predictive performance across a wide variety of settings. However, in order to achieve that accuracy, they sometimes latch onto spurious correlations, leading to undesirable behavior as a result of dataset bias (Winkler et al., 2019), racial and ethnic stereotypes (Garg et al., 2018), or simply overfitting. While recent work into explaining neural network predictions (Murdoch et al., 2019; Doshi-Velez & Kim, 2017) has demonstrated an ability to uncover the relationships learned by a model, it is still unclear how to actually alter the model in order to remove incorrect, or undesirable, relationships.

We introduce contextual decomposition explanation penalization (CDEP), a method which leverages existing explanation techniques for neural networks in order to prevent a model from learning unwanted relationships and ultimately improve predictive accuracy. Given particular importance scores, CDEP works by allowing the user to directly penalize importances of certain features, or interactions. This forces the neural network to not only produce the correct prediction, but also the correct explanation for that prediction. While we focus on contextual decomposition (CD) (Murdoch et al., 2018; Singh et al., 2018), which allows the penalization of both feature importances and interactions, CDEP can be readily adapted for existing interpretation techniques, as long as they are differentiable.
Moreover, CDEP is a general technique, which can be applied to arbitrary neural network architectures, and is orders of magnitude faster and more memory efficient than recent gradient-based methods, allowing its use on meaningful datasets. In order to demonstrate the effectiveness of CDEP, we conducted experiments across a wide variety of tasks. In the prediction of skin cancer from images, CDEP improves the prediction of a classifier by teaching it to ignore spurious confounding variables present in the training data. In a colored MNIST task, CDEP allows the network to focus on a digit's shape rather than its color (with no extra human annotation needed). Finally, a toy example using text classification shows how the penalization can help a network avoid a bias towards particular words, such as those involving gender.

2 BACKGROUND

Explanation methods. Many methods have been developed to help explain the learned relationships contained in a DNN. For local, or prediction-level, explanation, most prior work has focused on assigning importance to individual features, such as pixels in an image or words in a document. There are several methods that give feature-level importance for different architectures. They can be categorized as gradient-based (Springenberg et al., 2014; Sundararajan et al., 2017; Selvaraju et al., 2016; Baehrens et al., 2010; Rieger & Hansen, 2019), decomposition-based (Murdoch & Szlam, 2017; Shrikumar et al., 2016; Bach et al., 2015) and others (Dabkowski & Gal, 2017; Fong & Vedaldi, 2017; Ribeiro et al., 2016; Zintgraf et al., 2017), with many similarities among the methods (Ancona et al., 2018; Lundberg & Lee, 2017). However, many of these methods have thus far been poorly evaluated (Adebayo et al., 2018; Nie et al., 2018), casting doubt on their usefulness. Another line of work, which we build upon, has focused on uncovering interactions between features in addition to feature importances (Murdoch et al., 2018), and on using those interactions to create a hierarchy of features displaying the model's prediction process (Singh et al., 2018).

Uses of explanation methods. While much work has been put into developing methods for explaining DNNs, relatively little work has explored the potential to use these explanations to help build a better model. Some recent work proposes forcing models to attend to regions of the input which are known to be important (Burns et al., 2018; Mitsuhara et al., 2019), although it is important to note that attention is often not the same as explanation (Jain & Wallace, 2019). An alternative line of work proposes penalizing the gradients of a neural network to match human-provided binary annotations and shows the possibility of improving performance (Ross et al., 2017) and adversarial robustness (Ross & Doshi-Velez, 2018). Two recent papers extend these ideas by penalizing attributions for natural language models (Liu & Avci, 2019) and penalizing a modified gradient-based score to produce smooth attributions (Erion et al., 2019). Predating deep learning, Zaidan et al. (2007) consider the use of "annotator rationales" in sentiment analysis to train support vector machines. This work on annotator rationales was recently extended to show improved explanations (not accuracy) in CNNs (Strout et al., 2019).

Other ways to constrain DNNs. While we focus on the use of explanations to constrain the relationships learned by neural networks, other approaches for constraining neural networks have also been proposed.
A computationally intensive alternative is to augment the dataset in order to prevent the model from learning undesirable relationships, through domain knowledge (Bolukbasi et al., 2016), projecting out superficial statistics (Wang et al., 2019) or dramatically altering training images (Geirhos et al., 2018). However, these processes are often not feasible, either due to their computational cost or the difficulty of constructing such an augmented data set. Adversarial training has also been explored (Zhang & Zhu, 2019). These techniques are generally limited, as they are often tied to particular datasets, and do not provide a clear link between learning about a model's learned relationships through explanations and subsequently correcting them.

3 METHODS

We now introduce CDEP, which penalizes the explanations of a neural network in order to align them with prior knowledge about why a model should make a prediction. To do so, for each data point it penalizes the CD scores of features, or groups of features, which a user does not want the model to learn to be important. While we focus on CD scores, which allow the penalization of interactions between features in addition to features themselves, this approach readily generalizes to other interpretation techniques, so long as they are differentiable.

3.1 AUGMENTING THE LOSS FUNCTION

Given a particular classification task, we want to teach a model to not only produce the correct prediction, but also to arrive at the prediction for the correct reasons. That is, we want the model to be right for the right reasons, where the right reasons are provided by the user and are dataset-dependent. To accomplish this, CDEP modifies the objective function used to train a neural network, as displayed in Eq 1. In addition to the standard prediction loss $L$, which teaches the model to produce the correct predictions, CDEP adds an explanation error $L_{\text{expl}}$, which teaches the model to produce the correct explanations for its predictions. In place of the prediction and labels $f_\theta(X), y$ used in the prediction error $L$, the explanation error $L_{\text{expl}}$ uses the explanations produced by an interpretation method $\text{expl}_\theta(X)$, along with targets provided by the user $\text{expl}_X$. As is common with penalization, the two losses are weighted by a hyperparameter $\lambda \in \mathbb{R}$:

$$\hat{\theta} = \arg\min_{\theta} \; \underbrace{L\big(f_\theta(X), y\big)}_{\text{Prediction error}} \; + \; \lambda \, \underbrace{L_{\text{expl}}\big(\text{expl}_\theta(X), \text{expl}_X\big)}_{\text{Explanation error}} \qquad (1)$$

The precise meaning of $\text{expl}_X$ depends on the context. For example, in the skin cancer image classification task described in Section 4, many of the benign skin images contain band-aids, but none of the malignant images. To force the model to ignore the band-aids in making its prediction, in each image $\text{expl}_\theta(X)$ denotes the importance score of the band-aid and $\text{expl}_X$ would be zero. These and more examples are further explored in Section 4.

3.2 CONTEXTUAL DECOMPOSITION (CD)

In this work, we use the CD score as the explanation function. In contrast to other interpretation methods, which focus on feature importances, CD also captures interactions between features. CD was originally designed for LSTMs (Murdoch et al., 2018) and subsequently extended to convolutional neural networks and arbitrary DNNs (Singh et al., 2018). For a given DNN $f(x)$, one can represent its output as a SoftMax operation applied to logits $g(x)$. These logits, in turn, are the composition of $L$ layers $g_i$, such as convolutional operations or ReLU non-linearities.
$$f(x) = \text{SoftMax}(g(x)) = \text{SoftMax}\big(g_L(g_{L-1}(\dots(g_2(g_1(x)))))\big) \qquad (2)$$

Given a group of features $\{x_j\}_{j \in S}$, the CD algorithm $g^{CD}(x)$ decomposes the logits $g(x)$ into a sum of two terms, $\beta(x)$ and $\gamma(x)$. $\beta(x)$ is the importance score of the feature group $\{x_j\}_{j \in S}$, and $\gamma(x)$ captures contributions to $g(x)$ not included in $\beta(x)$. The decomposition is computed by iteratively applying decompositions $g_i^{CD}(x)$ for each of the layers $g_i(x)$.

$$g^{CD}(x) = g_L^{CD}\big(g_{L-1}^{CD}(\dots(g_2^{CD}(g_1^{CD}(x))))\big) = (\beta(x), \gamma(x)) \qquad (3)$$

$$\beta(x) + \gamma(x) = g(x) \qquad (4)$$

3.3 CDEP OBJECTIVE FUNCTION

We now substitute the above CD scores into the generic equation in Eq 1 to arrive at the method used in this paper. While we use CD for the explanation method $\text{expl}_\theta(X)$, other explanation methods could be readily substituted at this stage. In order to convert CD scores to probabilities, we apply a SoftMax operation to $g^{CD}(x)$, allowing for easier comparison with the user-provided labels $\text{expl}_X$. We collect from the user, for each input $x_i$, a collection of feature groups $x_{i,S}$, $x_i \in \mathbb{R}^d$, $S \subseteq \{1, \dots, d\}$, along with explanation target values $\text{expl}_{x_{i,S}}$, and use the $\|\cdot\|_1$ loss for $L_{\text{expl}}$.

$$\hat{\theta} = \arg\min_{\theta} \underbrace{\sum_i \sum_c -y_{i,c} \log f_\theta(x_i)_c}_{\text{Classification error}} + \lambda \underbrace{\sum_i \sum_S \big\| \beta(x_{i,S}) - \text{expl}_{x_{i,S}} \big\|_1}_{\text{Explanation error}} \qquad (5)$$

In the above, $i$ indexes each individual example in the dataset, $S$ indexes a subset of the features for which we penalize their explanations, and $c$ sums over each class. Updating the model parameters in accordance with this formulation ensures that the model not only predicts the right output but also does so for the right (aligned with prior knowledge) reasons.

3.4 COMPUTATIONAL CONSIDERATIONS

A similar idea to Eq 1 has been proposed in previous/concurrent work, where the choice of explanation method is a gradient-based attribution method (Ross et al., 2017; Erion et al., 2019). However, using such methods leads to three main complications which are solved by our approach. The first complication is the optimization process. When optimizing over attributions from a gradient-based attribution method via gradient descent, the optimizer requires the gradient of the gradient, thus requiring that all network components be twice differentiable. This process is computationally expensive, and optimizing it exactly involves optimizing over a differential equation. In contrast, CD attributions are calculated along with the forward pass of the network, and as a result can be optimized plainly with back-propagation using the standard single forward pass and backward pass per batch.

A second complication solved by the use of CD in Eq 5 is the ability to quickly finetune a pre-trained network. In many applications, particularly in transfer learning, it is common to finetune only the last few layers of a pre-trained neural network. Using CD, one can freeze early layers of the network and then finetune the last few layers quickly, as the activations and gradients of the frozen layers are not necessary.

Third, penalizing gradient-based methods incurs a very large memory usage. Using gradient-based methods, training requires the storage of activations and gradients for all layers of the network as well as the gradient of the input (which can be omitted in normal training). Even for the simplest version, based on saliency, this more than doubles the required memory for a given batch and network size. More advanced methods proved to be completely infeasible to apply to the real-life dataset used, since the memory requirements were too high.
By contrast, penalizing CD only requires a small constant amount of memory more than standard training.

4 RESULTS

The results here demonstrate the efficacy of CDEP on a variety of datasets using diverse explanation types. Sec 4.1 shows results on ignoring spurious patches in the ISIC skin cancer dataset (Codella et al., 2019), Sec 4.2 details experiments on converting a DNN's preference for color to a preference for shape on a variant of the MNIST dataset (LeCun, 1998), and Sec 4.3 shows experiments on text data from the Stanford Sentiment Treebank (SST) (Socher et al., 2013). (Footnote 1: All models were trained in PyTorch.)

4.1 IGNORING SPURIOUS SIGNALS IN SKIN CANCER DIAGNOSIS

In recent years, deep learning has achieved impressive results in diagnosing skin cancer, with predictive accuracy sometimes comparable to human doctors (Esteva et al., 2017). However, the datasets used to train these models often include spurious features which make it possible to attain high test accuracy without learning the underlying phenomena (Winkler et al., 2019). In particular, a popular dataset from ISIC (International Skin Imaging Collaboration) has colorful patches present in approximately 50% of the non-cancerous images but not in the cancerous images (Codella et al., 2019). An unpenalized DNN learns to look for these patches as an indicator for predicting that an image is benign. We use CDEP to remedy this problem by penalizing the DNN for placing importance on the patches during training.

The task in this section is to classify whether an image of a skin lesion contains (1) benign melanoma or (2) malignant melanoma. The ISIC dataset consists of 21,654 images (19,372 benign), each diagnosed by histopathology or a consensus of experts. For classification, we use a VGG16 architecture (Simonyan & Zisserman, 2014) pre-trained on the ImageNet classification task (Footnote 2) and freeze the weights of early layers so that only the fully connected layers are trained. In order to use CDEP, the spurious patches are identified via a simple image segmentation algorithm using a color threshold (see Sec S4).

Table 1 shows results comparing the performance of a model trained with and without CDEP. We report results on two variants of the test set. The first, which we refer to as "no patches", only contains images of the test set that do not include patches. The second also includes images with those patches. Training with CDEP improves the AUC and F1-score for both test sets. In the first row of Table 1, the model is trained using only the data without the spurious patches, and the second row shows the model trained on the full dataset. The network trained using CDEP achieves the best AUC, surpassing both unpenalized versions. Applying our method increases the ROC AUC as well as the best F1 score.

We also compared our method against the method introduced by Ross et al. (2017) (RRR). For this, we restricted the batch size to 16 (and consequently used a learning rate of 10^-5) due to memory constraints. Using RRR did not improve on the base AUC, implying that penalizing gradients is not helpful in penalizing higher-order features. (Footnote 3)

Visualizing explanations. Fig. 3 visualizes GradCAM heatmaps (Ozbulak, 2019; Selvaraju et al., 2017) for an unpenalized DNN and a DNN trained with CDEP to ignore spurious patches. As expected, after penalizing with CDEP, the DNN attributes less importance to the spurious patches, regardless of their position in the image. More examples, also for cancerous images, are shown in Sec S5.
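To make the training objective of Sec 3.3 concrete, the following is a schematic sketch of one CDEP update implementing Eq 5. The function `cd_score` is a hypothetical stand-in for the contextual decomposition of Singh et al. (2018), assumed to return the (relevant, irrelevant) logit decomposition for a feature mask; it is not implemented here and this is not the authors' code.

```python
# Schematic CDEP update (Eq 5): cross-entropy prediction loss plus an L1
# penalty between CD importance scores and user-provided explanation targets.
import torch
import torch.nn.functional as F

def cdep_step(model, x, y, masks, expl_targets, lam, optimizer):
    """masks / expl_targets: feature groups x_{i,S} and targets expl_{x_{i,S}}."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)            # prediction error
    expl_loss = x.new_zeros(())
    for mask, target in zip(masks, expl_targets):
        # cd_score is a hypothetical CD decomposition call: beta is the
        # importance of the masked feature group, gamma the remainder.
        beta, gamma = cd_score(model, x, mask)
        # L1 distance between the (softmaxed) CD importance and the target
        expl_loss = expl_loss + (beta.softmax(dim=1) - target).abs().sum()
    (loss + lam * expl_loss).backward()
    optimizer.step()
    return loss.item()
```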
Footnote 2: Pre-trained model retrieved from torchvision.
Footnote 3: We were not able to compare against the method recently proposed in Erion et al. (2019) due to the prohibitively slow training and large memory requirements.

4.2 COMBATING INDUCTIVE BIAS ON VARIANTS OF THE MNIST DATASET

In this section, we investigate whether we can alter which features a DNN uses to perform digit classification, using variants of the MNIST dataset (LeCun, 1998) and a standard CNN architecture for this dataset retrieved from PyTorch (Footnote 4).

4.2.1 COLORMNIST

Similar to a previous study (Li & Vasconcelos, 2019), we transform the MNIST dataset to include three color channels and assign each class a distinct color, as shown in Fig. 4. An unpenalized DNN trained on this biased data will completely misclassify a test set with inverted colors, dropping to 0% accuracy, suggesting that it learns to classify using the colors of the digits rather than their shape. Here, we want to see if we can alter the DNN to focus on the shape of the digits rather than their color. Interestingly, this can be enforced by minimizing the contribution of pixels in isolation while maximizing the importance of groups of pixels (which can represent shapes). To do this, we penalize the CD contribution of sampled single-pixel values, following Eq 5. By minimizing the contribution of single pixels, we effectively encourage the network to focus more on groups of pixels, which can represent shape.

The results show that CDEP can partially shift the network's focus from color alone toward digit shape. We compare CDEP to two previously introduced explanation penalization techniques: penalization of the squared gradients (RRR) (Ross et al., 2017) and Expected Gradients (EG) (Erion et al., 2019). For EG, we penalize the variance between attributions of the RGB channels (as recommended by the authors of EG in personal correspondence). None of the baselines is able to improve the test accuracy of the model on this task above the random baseline, while CDEP is able to significantly improve this accuracy to 31.0%. We show the increase of predictive accuracy with increasing penalization in Fig. S5.

4.2.2 DECOYMNIST

For further comparison with previous work, we evaluate CDEP on an existing task: DecoyMNIST (Erion et al., 2019). DecoyMNIST adds a class-indicative gray patch to a random corner of the image. This task is relatively simple, as the spurious features are not entangled with any other feature and are always at the same location (the corners). Table 3 shows that all methods perform roughly equally, recovering the base accuracy. Results are reported using the best penalization parameter λ, chosen via cross-validation on the test accuracy. We provide details on the computation time and memory usage in Table S1, showing that CDEP is similar to existing approaches. However, when freezing early layers of a network and finetuning, CDEP very quickly becomes more efficient than other methods.

4.3 FIXING BIAS IN TEXT DATA

To demonstrate CDEP's effectiveness on text, we use the Stanford Sentiment Treebank (SST) dataset (Socher et al., 2013), an NLP benchmark dataset consisting of movie reviews with a binary sentiment (positive/negative). We inject spurious signals into the training set and train a standard LSTM (Footnote 5) to classify sentiment from the review.

Footnote 4: Model and training code from https://github.com/pytorch/examples/blob/master/mnist/main.py.
Footnote 5: Model and training code from https://github.com/clairett/pytorch-sentiment-classification.
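For reference, a color-biased MNIST variant like the one in Sec 4.2.1 can be constructed in a few lines. The palette below is an arbitrary placeholder, not the colors used in the paper; the inverted-color test set is produced by flipping the tint.

```python
# Sketch: tint each digit class with a fixed color at train time and an
# inverted color at test time, creating a spurious color-label correlation.
import torch

PALETTE = torch.rand(10, 3)  # one RGB tint per digit class (placeholder)

def colorize(images, labels, invert=False):
    """images: N x 1 x 28 x 28 grayscale in [0,1] -> N x 3 x 28 x 28."""
    colors = PALETTE[labels]                  # N x 3
    if invert:
        colors = 1.0 - colors                 # break the color-label correlation
    return images * colors[:, :, None, None]  # broadcast tint over all pixels
```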
We create three variants of the SST dataset, each with different spurious signals which we aim to ignore (examples in Sec S1). In the first variant, we add indicator words for each class (positive: 'text', negative: 'video') at a random location in each sentence. An unpenalized DNN will focus only on those words, dropping to nearly random performance on the unbiased test set. In the second variant, we use two semantically similar words ('the', 'a') to indicate the class by using one word only in the positive and one only in the negative class. In the third case, we use 'he' and 'she' to indicate class (example in Fig 5). Since these gendered words are only present in a small proportion of the training dataset (~2%), for this variant we report accuracy only on the sentences in the test set that do include the pronouns (performance on the test dataset not including the pronouns remains unchanged). Table 4 shows the test accuracy for all datasets with and without CDEP. In all scenarios, CDEP successfully improves the test accuracy by ignoring the injected spurious signals.

5 CONCLUSION

In this work we introduce a novel method to penalize neural networks to align with prior knowledge. Compared to previous work, CDEP is the first of its kind that can penalize complex features and feature interactions. Furthermore, CDEP is more computationally efficient than previous work and does not rely on backpropagation, enabling its use with more complex neural networks. We show that CDEP can be used to remove bias and improve predictive accuracy on a variety of toy and real data. The experiments here demonstrate a variety of ways to use CDEP to improve models on both real and toy datasets. CDEP is quite versatile and can be used in many more areas to incorporate the structure of domain knowledge (e.g. biology or physics). Of course, the effectiveness of CDEP depends upon the quality of the prior knowledge used to determine the explanation targets. Future work includes extending CDEP to more complex penalties, incorporating more fine-grained explanations and interactions. We hope the work here will help push the field towards a more rigorous way to use interpretability methods, a point which will become increasingly important as interpretable machine learning develops as a field (Doshi-Velez & Kim, 2017; Murdoch et al., 2019).

S1 ADDITIONAL DETAILS ABOUT SST TASK

Section 4.3 shows the results for CDEP on biased variants of the SST dataset. Here we show examples of the biased sentences in Figs. S1 to S3 (for tasks 2 and 3 we only show sentences where the bias was present). For the first task, we insert two randomly chosen words in 100% of the sentences in the positive and negative class, respectively. We choose two words ("text" for the positive class and "video" for the negative class) that were not otherwise present in the data set but had a representation in Word2Vec.

Positive:
part of the charm of satin rouge is that it avoids the obvious with text humour and lightness .
text a screenplay more ingeniously constructed than 'memento'
good fun text, good action, good acting, good dialogue, good pace, good cinematography .
dramas like text this make it human .

Negative:
... begins with promise, but runs aground after being video snared in its own tangled plot .
the video movie is well done, but slow .
this orange has some juice , but it 's video far from fresh-squeezed .
as it is, video it 's too long and unfocused .
Figure S1: Example sentences from the variant 1 of the biased SST dataset with decoy variables in each sentence.
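The decoy construction for variant 1 is simple enough to sketch directly; the function below is an illustrative reconstruction of the procedure described above, not the authors' code.

```python
# Sketch: insert a class-indicative decoy word ("text" for positive,
# "video" for negative) at a random position in every training sentence.
import random

DECOY = {1: "text", 0: "video"}  # sentiment label -> indicator word

def add_decoy(tokens, label, rng=random):
    """tokens: list of words; returns a copy with the decoy word inserted."""
    pos = rng.randrange(len(tokens) + 1)
    return tokens[:pos] + [DECOY[label]] + tokens[pos:]

# e.g. add_decoy("a screenplay more ingeniously constructed".split(), 1)
```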
1. What is the focus of the paper regarding prediction models?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its regularizer and reliance on prior knowledge?
3. How does the reviewer assess the novelty and effectiveness of the proposed method compared to other works?
4. Are there any concerns regarding the applicability and usefulness of the method when considering different types of data and prior knowledge?
Review
The authors propose to add a regularizer to the loss function when training a prediction model. In particular, the regularizer considers explanations during model training; if the explanations are not consistent with some prior knowledge, then explanation errors will be introduced. The motivation for the proposed research is interesting and has some merit. However, I am a bit worried that the proposed approach is somewhat ad hoc. I can imagine there are various explanations that can be generated for the same model. There can also be different prior knowledge available for a particular problem. Which prior knowledge and explanations to use seems to affect the learned model a lot, but there is no principled approach for making the selection. In some sense, standard regularizers such as L1 or L2 are intrinsic regularizers, while the proposed regularizer is an extrinsic regularizer. I think the extrinsic regularizer certainly has some merit, but it is also hard to regulate. For instance, consider the example in Figure 2 about the presence of patches. Isn't that too specific a piece of knowledge about the dataset, which in turn makes the proposed approach not general? I have doubts about how useful a method is if it relies on such specific prior knowledge about the data.
ICLR
Title Learning Debiased Representations via Conditional Attribute Interpolation

Abstract An image is usually described by more than one attribute, like "shape" and "color". When a dataset is biased, i.e., most samples have attributes spuriously correlated with the target label, a Deep Neural Network (DNN) is prone to make predictions by the "unintended" attribute, especially if it is easier to learn. To improve the generalization ability when training on such a biased dataset, we propose a χ2-model to learn debiased representations. First, we design a χ-shape pattern to match the training dynamics of a DNN and find Intermediate Attribute Samples (IASs) — samples near the attribute decision boundaries, which indicate how the value of an attribute changes from one extreme to another. Then we rectify the representation with a χ-structured metric learning objective. Conditional interpolation among IASs eliminates the negative effect of peripheral attributes and facilitates retaining the intra-class compactness. Experiments show that the χ2-model learns debiased representations effectively and achieves remarkable improvements on various datasets.

1 INTRODUCTION

Deep neural networks (DNNs) have emerged as an epoch-making technology in various machine learning tasks with impressive performance (LeCun et al., 2015; Bengio et al., 2021). In some real applications, an object may possess multiple attributes, and some of them are only spuriously correlated to the target label. For example, in Figure 1, the intrinsic attribute of an image annotated by "lifeboats" is its shape. Although many lifeboats are colored orange, a learner cannot make predictions through the color; the correlation that anything containing "orange" color is a "lifeboat" is misleading. When the majority of training samples can be well discerned by such a peripheral attribute, especially when learning on it is easier than on the intrinsic one, a DNN is prone to bias towards that "unintended" bias attribute (Torralba & Efros, 2011; Khosla et al., 2012; Tommasi et al., 2015; Geirhos et al., 2019; Brendel & Bethge, 2019; Xiao et al., 2021; Singla & Feizi, 2022), like recognizing a "cyclist" wearing orange as a "lifeboat". Similar spurious attributes also exist in various applications such as recommendation systems (Cañamares & Castells, 2018; Morik et al., 2020; Zhang et al., 2021b) and natural language processing (Zhao et al., 2017; He et al., 2019; Selvaraju et al., 2019; Mendelson & Belinkov, 2021; Guo et al., 2022).

Given such a biased training dataset, how can we get rid of the negative effect of the misleading correlations? One intuitive solution is to perform special operations on those samples highly correlated to the bias attributes, which requires additional supervision, such as a pre-defined bias type (Kim et al., 2019; Wang et al., 2020; Agarwal et al., 2020; Goel et al., 2021; Tartaglione et al., 2021; Geirhos et al., 2019; Bahng et al., 2020; Minderer et al., 2020; Li et al., 2021). Since prior knowledge of the dataset bias requires expensive manual annotations and is naturally missing in some applications, learning a debiased model without additional supervision about bias is in demand. Nam et al. (2020) identify samples with intrinsic attributes based on the observation that malignant bias attributes are often easier to learn than others.
Then the valuable samples for a debiasing scheme could be dynamically reweighted or augmented (Geirhos et al., 2019; Minderer et al., 2020; Lee et al., 2021). However, the restricted number of such samples implies uncertain representations and limits their ability to assist in debiasing. To leverage more valuable-for-debiasing knowledge, we take a further step in analyzing the representation space of naïve training dynamics, especially focusing on the discrepancies between attributes of different learning difficulty. As we will later illustrate in Figure 2, an attribute-based DNN pushes and fits on the easier bias attribute initially. The intrinsic attribute is then forced to shift in a "lazy" manner. The bias attribute that is pushed away first leaves a large margin boundary. Since the space of the other, intrinsic attribute is filled with samples that differ on the bias attribute, it has a large intra-class variance, like a "hollow". The representation is biased toward one side of the "hollow", i.e., those samples aligned with the bias attribute. Without the true intra-class structure, the model becomes biased.

[Figure 1 residue: prediction confidences for a green lifeboat image — 47.1% amphibian, 28.1% lifeboat, 18.1% speedboat; panel (b): Green lifeboat.]

From the above observation, it is crucial to fill the intra-class "hollow" and remodel the representation compactness. Notice that the samples shifting to the two sides of the "hollow" have different characteristics, being aligned with the bias attribute and conflicting with it, respectively. We can find samples with an intermediate attribute state between these two kinds of samples. We call this type of sample an Intermediate Attribute Sample (IAS); IASs lie near the decision boundary. When we condition (fix) on the intrinsic attribute, IASs vary on the other, bias attribute and are exactly located in the "hollow" with low-density structural knowledge. Further, we can mine different samples, including IASs, based on their distinct training dynamics.

To this end, we propose our two-stage χ2-model. In the first stage, we train a vanilla model on the biased dataset and record the sample-wise training dynamics w.r.t. both the target and the most obvious non-target classes (as the bias ones) along the epochs. An IAS is often predicted as a non-target class in the beginning and then switches to its target class gradually, making its dynamics plot a χ-shape. Following this observation, we design a χ-shape pattern to match the training samples. The matching score ranks the mined samples according to their bias level, i.e., how much they are biased towards the side of the bias attribute. Benefiting from the IASs, we conduct conditional attribute interpolation, i.e., fixing the value of the target attribute. We interpolate the class-specific prototypes around IASs with various bias ratios. These conditional interpolated prototypes precisely "average out" the bias attribute. From that, we design a χ-structured metric learning objective. It pulls samples close to those same-class interpolated prototypes, so that intra-class samples become compact and the influence of the bias attribute is removed.
Our χ2-model learns debiased representations effectively and achieves remarkable improvements on various datasets. Our contributions are summarized as follows:
• We claim and verify that Intermediate Attribute Samples (IASs) distributed around attribute decision boundaries facilitate learning a debiased representation.
• Based on the diverse learning behavior of different attribute types, we mine samples with varying bias levels, especially IASs. From that, we interpolate the bias attribute conditioned on the intrinsic one and compact intra-class samples to remove the negative effect of bias.
• Experiments on benchmarks and a newly constructed real-world dataset from NICO (He et al., 2021) validate the effectiveness of our χ2-model in learning debiased representations.

2 A CLOSER LOOK AT LEARNING WITH THE BIAS ATTRIBUTE

After introducing the background of learning on a biased dataset, we analyze the training dynamics of the model.

2.1 PROBLEM DEFINITION

Given a training set $D_{\text{train}} = \{(x_i, y_i)\}_{i=1}^{N}$, each sample $x_i$ is associated with a class label $y \in \{1, 2, \dots, C\}$. We aim to find a decision rule $h_\theta$ that maps a sample to its label. $h_\theta$ is optimized by fitting all the training samples, e.g., minimizing the cross-entropy loss as follows:

$$\mathcal{L}_{CE} = \mathbb{E}_{(x_i, y_i) \sim D_{\text{train}}} \left[ -\log \Pr\left(h_\theta(x_i) = y_i \mid x_i\right) \right]. \qquad (1)$$

We denote $h_\theta = \arg\max_{c \in [C]} \mathbf{w}_c^\top f_\phi(x)$, where $f_\phi(\cdot) \in \mathbb{R}^d$ is the feature extraction network and $\{\mathbf{w}_c\}_{c \in [C]}$ is the top-layer $C$-class classifier. $\theta$ denotes the union of the learnable parameters $\phi$ and $\mathbf{w}$. We expect the learned $h_\theta$ to have high discerning ability over the test set $D_{\text{test}}$, which has the same form as the training set $D_{\text{train}}$.

In addition to its class label, a sample can be described by various attributes. If an attribute is spuriously correlated with the target label, we name it the non-target bias attribute $a_b$. The attribute that intrinsically determines the class label is the target attribute $a_y$. For example, when we draw different handwritten digits in the MNIST dataset with specific colors (Kim et al., 2019), the color attribute will not help the model generalize, since we need to discern digits by their shape, e.g., "1" is like a stick. However, if almost all training images labeled "1" are in the same "yellow" color, the decision rule "an image in yellow is digit 1" will perform well on such a biased training set.

In the task of learning with a biased training set (Li & Vasconcelos, 2019; Kim et al., 2019; Nam et al., 2020), the bias attribute $a_b$ of most same-class samples is consistent and spuriously correlated with the target label (as with digit "1" in "yellow" above), so a model $h_\theta$ that relies on either $a_b$ or the target attribute $a_y$ will perform well on $D_{\text{train}}$. In real-world applications, it is often easier to learn to rely on $a_b$ than on $a_y$; for instance, "background" or "texture" is easier to learn than the object (Shah et al., 2020; Xiao et al., 2021). Therefore a model is prone to recognize based on $a_b$. Such a simplicity bias (Arpit et al., 2017; Palma et al., 2019; Pérez et al., 2019; Shah et al., 2020) dramatically hurts generalization on an unbiased test set. Nam et al. (2020) also observe that the loss dynamics indicate the easier $a_b$ is learned first, where the model is distracted and fails to learn $a_y$. Based on the behaviors of the "ultimate" biased model, samples in $D_{\text{train}}$ are split into two sets.
Those training samples that could be correctly predicted based on the bias attribute $a_b$ are named Bias-Aligned (BA) samples (like the "yellow digit 1" example above), while the remaining ones are Bias-Conflicting (BC) samples (like digit "1" in other colors). The number of BC samples is extremely small, and previous methods emphasize their role with various strategies (Geirhos et al., 2019; Nam et al., 2020; Minderer et al., 2020; Lee et al., 2021). For additional related methods of learning a debiased model (Li & Vasconcelos, 2019; Clark et al., 2019; Sagawa et al., 2020; Cheng et al., 2021; Hendricks et al., 2018; Wang et al., 2019; Cadène et al., 2019; Arjovsky et al., 2019; Zhu et al., 2021; Liu et al., 2021; Kim et al., 2022; Kirichenko et al., 2022), please see Appendix B.

2.2 THE TRAINING DYNAMICS WHEN LEARNING ON A BIASED TRAINING SET

We analyze the training dynamics of a naïvely trained model (Eq. 1) on the Colored MNIST dataset. The non-target bias attribute is the color and the target attribute is the shape. For the visualization shown in Figure 2, we set the output dimension of the penultimate layer to two. In addition to the learned classifier on the shape attribute $a_y$, we simultaneously add another linear classifier on top of the embedding to show how the decision boundary of the color attribute $a_b$ changes. More details are described in the supplementary material. Focusing on the precedence relationship for learning $a_y$ and $a_b$, we have the following observations:

• The easier-to-learn bias attribute color is fitted soon. The early training stage is shown in the first column. Both the color and shape attribute classifiers discern by different colors and do correctly on almost all BA samples (red "0" and blue "2", about 95% of the training set).
• The target attribute shape is learned later in a "lazy" manner. To further fit all shape labels, the model focuses on the limited BC samples (blue "0" and red "2", correspondingly about 5%) that cannot be perfectly classified by color. It pushes the minor BC representations to the other (correct) side instead of adjusting the decision boundary.
• The ahead-color and lagged-shape learning process leaves a large margin around the color attribute boundary, which further triggers the shape attribute's intra-class "hollow". Because the representations of different colors are continuously pushed away (classified) before those of the shape, the gaps between different color attribute clusters are significantly larger than those of the shape attribute.
• Since there is an intra-class "hollow" between BA and BC samples conditioned on a particular shape, the true class representation deviates toward color. The fourth column shows that the training class centers (yellow stars) and the test ones (gray stars) are mismatched. The true class center is located in the low-density "hollow" between shape-conditioned BA and BC samples.

The previous observations indicate that this earlier-and-later learning process on attributes of different learning difficulty causes the model to lose intra-class compactness, primarily when learning to rely on the bias attribute is easier. To alleviate the class center deviation towards the BA samples, merely emphasizing the BC samples is insufficient due to their scarcity. In addition, we propose to utilize Intermediate Attribute Samples (IASs), i.e., the samples near the attribute decision boundary, to remodel the shifted representation.
Especially when conditioned on the target attribute, the IASs vary on the bias attribute and fill in the low-density intra-class "hollow" between BA and BC samples.

3 χ2-MODEL

To mitigate the representation deviation and compact the intra-class "hollow", we leverage IASs to encode how the bias attribute changes from one extreme (major BA samples) to another (minor BC samples). Then, the variety of the bias attribute can be interpolated when conditioning on a particular target attribute. We propose our two-stage χ2-model, whose notion is illustrated in Figure 3. First, the χ2-model discovers IASs based on the training dynamics of the vanilla model in subsection 3.1. Next, we analyze where the top-ranked samples with a χ-shape pattern are, as well as their effectiveness in debiasing, in subsection 3.2. A conditional attribute interpolation step with IASs then fills in the low-density "hollow" to get a better estimate of the class-specific prototypes. By pulling the samples to the corresponding prototype, the χ-structured metric learning makes intra-class samples compact in subsection 3.3. Following subsection 2.2, we investigate the Colored MNIST dataset. Results on other datasets are consistent.

3.1 SCORING SAMPLES WITH A χ-SHAPE PATTERN

From the observations in the previous section, we aim to collect IASs to reveal how BC samples shift and leave the intra-class "hollow" between them and BA ones. As discussed in subsection 2.2, the vanilla model fits BC samples later than BA ones, which motivates us to score the samples from their training dynamics. Once we have the score pattern to match and distinguish BA and BC samples, IASs, with intermediate scores, can be extracted and made available for the next debiasing stage. In the following, we denote the posterior of the Ground-Truth class (GT-class) $y_i$ for a sample $x_i$ as

$$\Pr\left(h_\theta(x_i) = y_i \mid x_i\right) = \text{softmax}\left(\mathbf{w}_c^\top f_\phi(x)\right)_{y_i}; \qquad (2)$$

the larger the posterior, the more confidently the model predicts $x_i$ as $y_i$. For notational simplicity, we abbreviate the posterior as $\Pr(y_i \mid x_i)$. The target posterior of a BA sample reaches one, or becomes much higher than that of the other categories, soon after training for several epochs, while the posterior of a BC sample has a delayed increase. To sufficiently capture the clues on the change of the bias attribute, we also analyze the posterior of the most obvious non-GT class, which reveals how the dataset bias influences a sample. We denote the model at the $t$-th epoch with a superscript $t$, e.g., $h_\theta^t$. We take the bias class for the sample $x_i$ at epoch $t$ as $b_i^t = \arg\max_{c \in [C], c \neq y_i} \left(\mathbf{w}_c^\top f_\phi(x)\right)^t$. Then, we define the non-GT bias class as the most frequent $b_i^t$ along all epochs, i.e., $b_i = \text{max\_freq}\{b_i^t\}_{t=1}^T$. A sample has a larger bias class posterior when it has low confidence in its target class, and vice versa. Taking the posteriors of both $y_i$ and $b_i$ into account, a BA sample has a large $\Pr(y_i \mid x_i)$ and a small $\Pr(b_i \mid x_i)$ along all its training epochs. For a BC sample, $\Pr(y_i \mid x_i)$ increases gradually while $\Pr(b_i \mid x_i)$ decreases. We verify this phenomenon on the Colored MNIST dataset in Figure 4 (left). For BA samples (yellow "1"), the two curves form a "rectangle", while for BC samples (blue "1"), the two curves have an obvious intersection and reveal a "χ" shape. The statistics for the change of posteriors are shown in Figure 4 (right).
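A minimal sketch of how these per-epoch dynamics can be recorded and then matched against the χ-pattern score of Eq 3 below. The data loader is assumed to yield dataset indices alongside each batch, and the values of A1 and A2 are placeholders; this is an illustrative reconstruction, not the authors' implementation.

```python
# Sketch: record per-sample posteriors over epochs, then compute s(x_i).
import torch
import numpy as np

@torch.no_grad()
def record_epoch(model, loader, prob_hist, nongt_hist):
    """Append this epoch's full posterior and the argmax non-GT class b_i^t."""
    model.eval()
    for x, y, idx in loader:                      # loader yields sample indices
        probs = model(x).softmax(dim=1)
        masked = probs.clone()
        masked[torch.arange(len(y)), y] = -1.0    # exclude the GT class
        b_t = masked.argmax(dim=1)
        for i, p, b in zip(idx.tolist(), probs, b_t.tolist()):
            prob_hist[i].append(p.cpu().numpy())
            nongt_hist[i].append(b)

def chi_score(prob_hist_i, nongt_hist_i, y_i, A1=0.1, A2=0.1):
    """s(x_i): inner product of the loss dynamics with the chi pattern (Eq 3)."""
    b_i = max(set(nongt_hist_i), key=nongt_hist_i.count)  # max_freq over epochs
    probs = np.stack(prob_hist_i)                         # T x C posteriors
    t = np.arange(1, len(probs) + 1)
    loss_gt = -np.log(probs[:, y_i] + 1e-12)
    loss_b = -np.log(probs[:, b_i] + 1e-12)
    return float(np.exp(-A1 * t) @ loss_gt + np.exp(A2 * t) @ loss_b)
```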
Therefore, how much the training dynamics match the "χ" shape reveals the probability that a sample has shifted from the major BA clusters to the minor BC ones. We design a χ-shape pattern for the loss dynamics to capture such BC-specific properties. The change of the sample-specific loss for the ground-truth label and the bias label over $T$ epochs is summarized by $\mathbf{L}_{CE}$. Then, we use two exponential χ-shape functions $\chi_{\text{pattern}}$ to capture the ideal loss shape of a BC sample, i.e., the severely shifted case:

$$\mathbf{L}_{CE}(x_i) = \begin{pmatrix} \mathbf{L}_{CE}^{gt}(x_i) = \{-\log \Pr^t(y_i \mid x_i)\}_{t=1}^{T} \\ \mathbf{L}_{CE}^{b}(x_i) = \{-\log \Pr^t(b_i \mid x_i)\}_{t=1}^{T} \end{pmatrix}, \quad \chi_{\text{pattern}} = \begin{pmatrix} \mathbf{p}^{gt} = \{e^{-A_1 t}\}_{t=1}^{T} \\ \mathbf{p}^{b} = \{e^{A_2 t}\}_{t=1}^{T} \end{pmatrix},$$

where $A_1$ and $A_2$ are the matching factors. They can be determined based on the dynamics of prediction fluctuations; for more details please see the supplementary material. The $\chi_{\text{pattern}}$ encodes the observations for the most deviated BC samples. To match the loss dynamics with the pattern, we use the inner product over the two curves:

$$s(x_i) = \langle \mathbf{L}_{CE}(x_i), \chi_{\text{pattern}} \rangle = \langle \mathbf{L}_{CE}^{gt}(x_i), \mathbf{p}^{gt} \rangle + \langle \mathbf{L}_{CE}^{b}(x_i), \mathbf{p}^{b} \rangle \qquad (3)$$
$$= \sum_{t=1}^{T} -e^{-A_1 t} \cdot \log \Pr\left(h_\theta(x_i) = y_i \mid x_i\right) - e^{A_2 t} \cdot \log \Pr\left(h_\theta(x_i) = b_i \mid x_i\right).$$

The inner product $s(x_i)$ takes the area under the curves (AUC) into account, which is more robust w.r.t. volatile loss changes. As the score $s(x_i)$ goes from low to high, the sample varies in its bias level, i.e., from BA samples to IASs, and then to BC samples.

3.2 WHERE IASS ARE, AND WHY IASS CAN HELP TO LEARN A DEBIASED MODEL

Combining the analysis in subsection 2.2 with the samples ranked by $s(x_i)$, we find there are two types of IASs, located near the decision boundary of the target attribute (e.g., "0"s with complex shapes in Figure 3) or near that of the bias attribute (e.g., a helicopter against an intermediate, transitional "sunset" background). (1) If an IAS has an intermediate target attribute value, it may be a difficult sample and contain rich information about the target class boundaries. (2) If an IAS is in an intermediate state on the bias attribute, it may help to fill in the vacant intra-class "hollow" when conditioning (fixing) on the target attribute. Both types of IASs are similar to BC samples but from two directions, i.e., compared to BA samples, they contain richer semantics on the target or bias attributes. In the representation space, they are scattered between BA and BC samples, compensating for the sparsity of BC samples, and are valuable for debiasing. We will show how the χ-structured objective with IASs helps to remodel the true class centers in the following subsection.

We illustrate the importance of IASs with simple experiments on the biased Colored MNIST and Corrupted CIFAR-10 datasets. Details of the datasets are described in subsection 4.1. We investigate whether various reweighting strategies on the vanilla model improve the generalization ability on an unbiased test set. We use "0-1" to denote the strategy that utilizes only the BC samples. "Step-wise" means we apply uniformly higher weights (the ratio of BA samples) to BC samples and lower weights (one minus that ratio) to BA samples. Our "χ-pattern" smoothly reweights all samples with the matched scores, where BC samples as well as IASs have relatively larger weights than the remaining BA ones. The results are listed in Table 1, where we find that simple reweighting strategies easily improve the performance of a vanilla classifier, which verifies the importance of emphasizing BC-like samples.
3.3 LEARNING DEBIASED REPRESENTATION FROM A χ-STRUCTURED OBJECTIVE

Although the BA samples are severely biased towards the bias attribute, the BC samples, integrating rich bias attribute semantics, naturally make the representation independent of the biased influence (Hong & Yang, 2021). An intuitive approach for debiasing is to average over the BC samples and classify by the resulting BC class centers. However, the sparsity of BC samples induces an erratic estimate that is far from the true class center, as shown in Figure 2. Benefiting from the analysis that BC-like IASs better estimate the intra-class structure, we target conditional interpolation around them, i.e., mixing same-class samples with different BC-like scores to remodel the intermediate samples between BA and BC samples. From that, we can construct many prototypes closer to the real class center and pull samples to these prototypes to compact the intra-class space.

Combined with the soft ranking score from the χ-pattern in the previous stage, we build two pools (subsets) of samples, denoted $D_\parallel$ and $D_\perp$. The $D_\perp$ pool collects the top-ranked samples, most of which are BC samples and IASs. The $D_\parallel$ pool is sampled from the remaining (BA) part according to the score. With the help of $D_\parallel$ and $D_\perp$, we construct multiple bias bags (subsets) $B_\gamma$ with bootstrapping, where the ratio of BC samples is $\gamma$:

$$B_\gamma = \big\{ (x_i, y_i) \;\big|\; \mathrm{num}(D_\perp) : \mathrm{num}(D_\parallel) = \gamma \big\}, \quad (4)$$

where $\mathrm{num}(D)$ equals the number of samples in $D$. As $\gamma$ goes from low to high, $B_\gamma$ contains samples ranging from the extreme of BA samples to the IASs, and then to the BC ones. Based on $B_\gamma$, we compute the prototype, i.e., the average over $B_\gamma$, to interpolate the bias attribute conditioned on a particular target attribute. For example, the prototype conditioned on class $c$ is formalized as

$$\mathbf{p}_{\gamma, c} = \frac{1}{K} \sum_{(x_i, y_i) \in B_\gamma} f_\phi(x_i) \cdot \mathbb{I}[y_i = c], \quad (5)$$

where $K$ is the number of class-$c$ samples in $B_\gamma$.

To further demonstrate the significance of intra-class compactness, we design experiments to study the difference between a biased vanilla model and an unbiased oracle model (well-trained on an unbiased training set). We measure the mean distance between samples and their multiple conditional interpolated prototypes while changing the ratio $\gamma$. If the prototypes shift as $\gamma$ changes, a large intra-class deviation exists. As shown in Figure 5, for a biased model, when $\gamma$ decreases, $\mathbf{p}_\gamma$ is interpolated closer to the BA samples; the opposite phenomenon is observed on the BC samples. For the unbiased oracle model, no matter how the BC ratio $\gamma$ changes, the mean distance is almost unchanged and shows a lower variance. This coincides with the observation in Figure 2.

Motivated by mimicking the oracle, we adopt the conditional interpolated prototypes and construct a customized χ-structured metric learning task. Assuming $\gamma$ is large, we use $\mathbf{p}_\gamma$ and $\mathbf{p}_{1-\gamma}$ to denote the prototypes of bias bags with high and low BC ratios, respectively. The model is required to pull the majority, the low-BC-ratio bias bag $B_{1-\gamma}$, closer to $\mathbf{p}_\gamma$, which is interpolated into the high-BC space. Similarly, the high-BC bias bag $B_\gamma$ should be pulled to the low-BC interpolated $\mathbf{p}_{1-\gamma}$. We optimize the cross-entropy loss $L_{CE}$ to enable this pulling operation.
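Before formalizing the pulling loss, a minimal sketch of the bias-bag and prototype construction in Eqs. 4 and 5; the helper names and the uniform bootstrap are our own simplifications, not the authors' implementation.

```python
import torch

def sample_bias_bag(D_perp, D_para, gamma, bag_size, generator=None):
    """Eq. 4: bootstrap one bias bag B_gamma whose BC-like fraction is gamma.

    D_perp: list of (x, y) with top-ranked (BC-like / IAS) samples.
    D_para: list of (x, y) with the remaining (BA-like) samples.
    """
    n_bc = int(round(gamma * bag_size))
    idx_bc = torch.randint(len(D_perp), (n_bc,), generator=generator)
    idx_ba = torch.randint(len(D_para), (bag_size - n_bc,), generator=generator)
    return [D_perp[int(i)] for i in idx_bc] + [D_para[int(i)] for i in idx_ba]

def class_prototypes(features, labels, num_classes):
    """Eq. 5: per-class mean embedding over one bias bag.

    features: [K, d] tensor of f_phi(x_i); labels: [K] long tensor.
    """
    protos = torch.zeros(num_classes, features.size(1), device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return protos
```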
Concretely, the posterior via the distance $d(\cdot, \cdot)$ in the representation space is formalized as

$$\Pr(y_i \mid x_i) = \frac{\exp\big(-d(f_\phi(x_i), \mathbf{p}_{\gamma, y_i})/\tau\big)}{\sum_{c \in [C]} \exp\big(-d(f_\phi(x_i), \mathbf{p}_{\gamma, c})/\tau\big)}, \quad (6)$$

where $\tau$ is a scaling temperature. One branch of the χ-structured classification task optimizes $L_{CE}$ between the samples in $B_{1-\gamma}$ and $\mathbf{p}_\gamma$; the other branch simultaneously optimizes between $B_\gamma$ and $\mathbf{p}_{1-\gamma}$. As shown in Figure 3, such a high-and-low correspondence captures and compacts the intra-class “hollow”. In summary, the bias bags of high BC ratio $B_\gamma$ with the corresponding low-BC interpolated prototypes $\mathbf{p}_{1-\gamma}$ conditioned on the target attribute, together with $B_{1-\gamma}$ and $\mathbf{p}_\gamma$, form the χ-structured crossover objective.

4 EXPERIMENTS

We conduct experiments to verify whether the χ2-model has effective debiasing capability. We begin by introducing the bias details of each dataset (subsection 4.1), together with the comparison approaches and training details. In subsection 4.2, the experiments show that the χ2-model achieves superior performance in each stage. Furthermore, we experimentally exemplify the inherent quality of prototype-based classification for the debiasing task and offer ablation studies in subsection 4.3.

4.1 EXPERIMENTAL SETUPS

Table 3: Classification accuracy on the unbiased Biased CelebA and Biased NICO test sets. The data source “BA” denotes measurement on BA samples and “BC” on BC samples.

    Method       Biased CelebA            Biased NICO
                 BA      BC      All      All
    LfF          73.69   70.41   72.05    34.44
    DFA          94.01   58.98   76.50    33.10
    χ2-model     97.66   60.79   79.23    36.99

Datasets. To cover more general and challenging cases of bias impact, we validate the χ2-model on a variety of datasets, including two synthetic-bias datasets (Colored MNIST (Bahng et al., 2020) and Corrupted CIFAR-10 (Nam et al., 2020)) and two real-world datasets (Biased CelebA (Liu et al., 2015) and Biased NICO). The BA sample ratio ρ in the training set is usually high (over 95%), so the bias attribute is highly correlated with the target label. For example, in the Colored MNIST dataset, each digit is associated with one of the pre-defined bias colors. Similarly, there is an object target with corruption bias in Corrupted CIFAR-10 and a gender target with hair-color bias in Biased CelebA. Following previous works (Hong & Yang, 2021), we use the BA ratios ρ ∈ {95.0%, 99.0%, 99.5%, 99.9%} for Colored MNIST and Corrupted CIFAR-10, and approximately 96% for Biased CelebA. The Biased NICO dataset is dedicatedly sampled from NICO (He et al., 2021), which was initially designed for OOD (Out-of-Distribution) image classification. NICO is enriched with variations in the object and context dimensions. We select the bias attribute with the highest co-occurrence frequency with the target one, e.g., helicopter correlates strongly with sunset in the training set (see the BA samples in Figure 3). The correlation ratio is roughly controlled to 86%. For more details please see the supplementary material.

Baselines. We carefully select classic and recent trending approaches as baselines: (1) the vanilla model trained with cross entropy as described in subsection 2.1; (2) bias-tailored approaches with a pre-provided bias type: RUBi and Rebias; (3) explicit approaches under the guidance of full bias supervision: EnD and DI; (4) implicit methods relying on general bias properties: LfF and DFA.

Implementation details.
Following the existing popular benchmarks (Hong & Yang, 2021; Kim et al., 2021), we use a four-layer CNN with kernel size 7 × 7 for the Colored MNIST dataset and ResNet-18 (He et al., 2016) for the Corrupted CIFAR-10, Biased CelebA, and Biased NICO datasets. For a fair comparison, we re-implemented the baselines with the same configuration. We mainly focus on the unbiased test accuracy over all categories. All models are trained on an NVIDIA RTX 3090 GPU. More details are in the supplementary material.

Baselines for the first stage. To better demonstrate the effectiveness of the χ-pattern, we consider related sample-specific scoring methods (Pleiss et al., 2020; Zhao et al., 2021) and report the average precision, the top-threshold accuracy, and the minimum number of samples (threshold) required to cover 98% of the BC samples. For more results, such as PR curves, please see the supplementary material.

4.2 QUANTITATIVE EVALUATION

Table 4: Performance of BC sample mining on Colored MNIST with a 99.5% BA ratio. Acc. denotes the mean accuracy of the top-300 ranking. 98%-σ denotes the number of samples required to contain 98% of the BC samples. AP is the average precision. ↑ means higher is better; ↓ the opposite.

    Measure                          Acc. ↑   98%-σ ↓   AP ↑
    Entropy (Joshi et al., 2009)     78.33    632       83.52
    Confidence (Li & Sethi, 2006)    80.33    590       85.61
    Loss (Nam et al., 2020)          94.39    418       98.22
    Pleiss et al. (2020)             82.67    686       89.24
    Zhao et al. (2021)               90.33    451       96.04
    χ-pattern                        95.84    372       98.44

Performance of the χ-shape pattern. As shown in Table 4, our χ-pattern matching achieves state-of-the-art performance on various evaluation metrics. Thus, the χ-structured metric learning objective can leverage more IAS cues to interpolate the bias attribute and further learn a debiased representation.

χ2-model under different types of bias construction. (1) Synthetic bias on Colored MNIST and Corrupted CIFAR-10: from Table 2 we find that under extreme bias influence, e.g., when ρ is 99.9%, the performance of the vanilla model and the other baselines decreases catastrophically. In contrast, our χ2-model maintains robust and efficient debiasing capability on the unbiased test set. Further results in Figure 2 present the remarkable performance of our χ2-model compared to other methods. (2) Real-world bias on Biased CelebA and Biased NICO: Table 3 shows that, compared to recent methods which, like ours, are not provided with any bias information in advance, our method also achieves remarkable performance. The above experiments indicate that conditional interpolation among IASs feeds back the shift of the intrinsic knowledge and facilitates learning debiased representations even under extremely biased conditions.

4.3 FURTHER ANALYSIS

The inherent debiasing capability of prototype-based classification. We directly construct the prototype by averaging the trained representations of the vanilla model (line two of Table 2, named “+ p”). The results show that on some datasets, such as Colored MNIST, the prototype-based classifier achieves a performance improvement without any further training.

Visualizing the test set representation in a 2D embedding space via t-SNE. Figure 6 shows the 2D projection of the features extracted by the χ2-model on Colored MNIST. We color the points by the target and bias attributes separately. The representations cluster into classes following the target attribute, which indicates that our model learns debiased representations.
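A minimal sketch of how such a projection can be produced with scikit-learn and matplotlib; the function name and plotting parameters are our own choices, not taken from the paper.

```python
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_tsne(features, target_labels, bias_labels):
    """2D t-SNE of learned features, colored by the target attribute and
    by the bias attribute side by side (cf. Figure 6).

    features: array [N, d]; target_labels, bias_labels: arrays [N]."""
    emb = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(features)
    fig, axes = plt.subplots(1, 2, figsize=(10, 4))
    for ax, labels, name in [(axes[0], target_labels, "target attribute"),
                             (axes[1], bias_labels, "bias attribute")]:
        ax.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="tab10", s=3)
        ax.set_title(f"colored by {name}")
    fig.tight_layout()
    plt.show()
```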
Ablation studies. We further perform an ablation analysis of the matching factors $A_1$ and $A_2$ in the χ-pattern (subsection 3.1), which directly determine the χ-shape curves. The results show that the first stage of the χ2-model is robust to changes in these hyperparameters. For more related experiments, e.g., on different BC identification thresholds, please see the supplementary material.

5 CONCLUSION

Although intra-class biased samples with a “hollow” structure impede learning debiased representations, we propose the χ2-model, which leverages Intermediate Attribute Samples (IASs) to capture how samples with the intrinsic attribute shift. The χ2-model works in a two-stage manner: it matches and ranks possible IASs based on their χ-shape training dynamics, followed by a χ-branch metric-based debiasing objective with conditional attribute interpolation.

Appendix
• Appendix A: An example of the color-biased model.
• Appendix B: Additional related work (cf. subsection 2.1).
• Appendix C: Implementation details and hyper-parameter settings (cf. subsection 4.1).
• Appendix D: Additional experiments, ablation studies, and robustness analysis.
• Appendix E: Overall algorithm.
• Appendix F: Discussion about the limitations.

A AN EXAMPLE OF THE COLOR-BIASED MODEL ON AN ORANGE LIFEBOAT

[Figure: top-5 predictions (class: confidence) of the color-biased model on three example images. (i) lifeboat: 0.570, tandem bicycle: 0.138, toy store: 0.090, mountain bike: 0.047, football helmet: 0.039; (ii) canoe: 0.298, speedboat: 0.196, amphibious vehicle: 0.171, paddle: 0.140, lakeside: 0.085; (iii) lifeboat: 0.978, lighthouse: 0.005, container ship: 0.004, dock: 0.004, fireboat: 0.003.]

B RELATED WORK: WHAT BIAS INFORMATION IS PROVIDED IN ADVANCE

There are various methods for learning a debiased model from a biased training set.

Debiasing under the guidance of bias supervision. This thread of methods introduces full, explicit bias attribute supervision and an additional branch of the model to predict the bias label. Kim et al. (2019) leverage bias clues to minimize the mutual information between the representation and the bias attributes with gradient reversal layers (Ganin et al., 2016). Similarly, Li & Vasconcelos (2019) use the RGB vector as color side information to conduct minimax bias mitigation. Clark et al. (2019) and Wang et al. (2020) utilize auxiliary bias instructions to train independent models and ensemble their predictions. Sagawa et al. (2020) and Goel et al. (2021) balance the performance of bias subgroups under distribution shift. Tartaglione et al. (2021) and Cheng et al. (2021) directly regularize the bias attribute to disentangle the entangled bias representations.

Debiasing with bias prior knowledge. Many real-world applications limit access to sufficient bias supervision. However, a relaxed condition can often be met by providing prior knowledge of the bias (e.g., the bias type). Many works highlight that the bias type of the content plays an important role in CNN object recognition (Hendricks et al., 2018; Geirhos et al., 2019; Li et al., 2021). Based on such observations, several approaches adopt the bias type to build a bias-capturing module. Wang et al. (2019) remove texture bias through a latent space projection with the gray-level co-occurrence matrix (Lam, 1996). Bahng et al. (2020) encourage the debiased model to learn representations independent from a deliberately biased one. Other approaches mitigate the dataset bias existing in natural language processing with logits reweighting (Cadène et al., 2019).

Debiasing through general intrinsic bias properties.
Towards more practical applications, this line of methods takes full advantage of general bias properties and requires neither explicit bias supervision nor pre-defined bias prior knowledge. Nam et al. (2020) make a comprehensive analysis of the properties of bias. Their observations motivate a two-branch training strategy: a biased model trained with the Generalized Cross-Entropy loss (Zhang & Sabuncu, 2018) amplifies its “prejudice” on BA samples, while a debiased model focuses more on samples that go against the prejudice of the biased one. Similarly, Lee et al. (2021) fit one of the encoders to the bias attribute and randomly swap the latent features to act as augmented BC samples. Other approaches consider the shortcuts a model learns, as revealed by the high gradients of latent vectors (Darlow et al., 2020; Huang et al., 2020).

C IMPLEMENTATION DETAILS

C.1 TRAINING DYNAMICS VISUALIZATION OF FIGURE 2

To visualize the 2D attribute boundaries, we first add an extra linear projection layer $w_{proj} \in \mathbb{R}^{d \times 2}$ behind the feature extraction network and correspondingly modify the top-layer classifier $w_c$ to classify on the 2D features. After training is completed, we directly plot the 2D features of the data and the top-layer classifier in Figure 2. Second, to compare different attributes and the feedback of their gradients fairly, we jointly train the attribute classifiers with a shared feature extraction network. This ensures that the features are consistent and comparable across the classifiers of different attributes. Figure 2 shows the results of the above model trained on Colored MNIST with a BA ratio of 0.95 and a learning rate of 0.00001. The two digit (shape) classes in the figure are 2 and 8; correspondingly, the two color classes are purple and green. The purple “2”s and the green “8”s are BA samples; in contrast, the green “2”s and the purple “8”s are BC samples. The ratio of BA to BC samples is roughly 0.95.

C.2 DATASETS

Colored MNIST. Following most previous works (Nam et al., 2020; Hong & Yang, 2021; Lee et al., 2021), we construct Colored MNIST by coloring each digit and keeping the background black; in other words, every target-attribute digit in Colored MNIST is highly correlated with a specific bias-attribute color. As in previous works, we chose severity 1 to calibrate the dataset bias difficulty. The different bias-aligned (BA) ratios contain different numbers of BA samples, e.g., at the ratio of 99.9% we have 59,940 BA samples and 60 bias-conflicting (BC) samples in the training set. Similarly, the ratio of 99.5% has {59,700; 300} BA and BC samples, the ratio of 99.0% has {59,402; 598}, and the ratio of 95.0% has {57,000; 3,000}.

Corrupted CIFAR-10. For the Corrupted CIFAR-10 dataset, we follow the earlier work (Lee et al., 2021) and choose 10 corruption types, i.e., {Snow, Frost, Fog, Brightness, Contrast, Spatter, Elastic, JPEG, Pixelate, Saturate}. Each corruption type is highly correlated with one of the target classes PLANE, CAR, BIRD, CAT, DEER, DOG, FROG, HORSE, SHIP, and TRUCK. Similarly, we choose severity 1 as in the original paper (Nam et al., 2020). The numbers of BA and BC samples for each BA ratio are: 99.9%-{49,950; 50}, 99.5%-{49,750; 250}, 99.0%-{49,500; 500}, and 95.0%-{47,500; 2,500}.
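The construction of such a colored split can be sketched as follows; this is our own illustration, and the specific RGB values of the bias colors are hypothetical placeholders rather than the ones used in the paper.

```python
import numpy as np

# Ten pre-defined bias colors, one per digit class (RGB; illustrative values).
BIAS_COLORS = np.array([
    [255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 0], [255, 0, 255],
    [0, 255, 255], [255, 128, 0], [128, 0, 255], [0, 128, 128], [128, 128, 0],
], dtype=np.float32)

def colorize(gray_digits, labels, ba_ratio=0.995, seed=0):
    """Build a Colored MNIST-style split: each grayscale digit [N, 28, 28]
    is tinted with its class color (BA) with probability ba_ratio, and
    otherwise with a random other color (BC). The background stays black."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    is_ba = rng.random(n) < ba_ratio
    colored = np.zeros((n, 28, 28, 3), dtype=np.float32)
    for i, (img, y) in enumerate(zip(gray_digits, labels)):
        if is_ba[i]:
            c = y                                              # aligned color
        else:
            c = rng.choice([k for k in range(10) if k != y])   # conflicting
        # Intensity-scaled tint in [0, 1].
        colored[i] = img[..., None] / 255.0 * (BIAS_COLORS[c] / 255.0)
    return colored, is_ba
```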
Biased CelebA. Following the experimental configuration of previous works, we intentionally truncated a portion of the CelebA dataset so that each value of the target attribute BlondHair is skewed towards the bias attribute Male. The sample counts per (BlondHair, Male) combination are as follows: the BC samples are {(0, 0): 1,558} and {(1, 1): 1,098}, while the BA samples are {(1, 0): 18,279} and {(0, 1): 53,577}.

Biased NICO. The Biased NICO dataset is dedicatedly sampled from NICO (He et al., 2021), which was originally designed for Non-I.I.D. or OOD (Out-of-Distribution) image classification. NICO is enriched with variations in the object and context dimensions. Concretely, there are two superclasses, Animal and Vehicle, with 10 classes (BEAR, BIRD, CAT, COW, DOG, ELEPHANT, HORSE, MONKEY, RAT, and SHEEP) for Animal and 9 classes (AIRPLANE, BICYCLE, BOAT, BUS, CAR, HELICOPTER, MOTORCYCLE, TRAIN, and TRUCK) for Vehicle. Each object class has 9 or 10 contexts. We select the bias attribute with the highest co-occurrence frequency with the target one, i.e., DOG on snow, BIRD on grass, CAT eating, BOAT on beach, BEAR in forest, HELICOPTER in sunset, BUS in city, COW lying, ELEPHANT in river, MOTORCYCLE in street, MONKEY in water, TRUCK on road, RAT at home, BICYCLE with people, AIRPLANE aside mountain, SHEEP walking, HORSE running, CAR on track, and TRAIN at station. The quantitative details of each class are shown in Table 5, and the details divided by bias attribute are shown in Table 6. The remaining bias attributes, which do not appear in the BA samples, are: {at wharf, at airport, aside traffic light, eating grass, white, in cage, in hole, in garage, cross bridge, at park, yacht, flying, aside tree, black, standing, sitting, at night, double decker, on sea, around cloud, with pilot, in sunrise, in hand, on booth, aside people, at sunset, brown, on shoulder, spotted, subway, in race, climbing, cross tunnel, velodrome, on bridge, shared, at yard, in circus, on ground, on tree, at heliport, taking off, on branch, wooden, sailboat, in zoo}; these are few in number, about 4 samples each. In the test set they are balanced with the remaining bias attributes. The training set’s total correlation ratio is roughly 86.27%.

C.3 DATA PRE-PROCESSING

The image sizes of Colored MNIST and Corrupted CIFAR-10 are 28 × 28 and 32 × 32, respectively. We feed the original images into the model and do not use data augmentation during training or testing. We directly normalize the data from Colored MNIST and Corrupted CIFAR-10 with a mean of (0.5, 0.5, 0.5) and a standard deviation of (0.5, 0.5, 0.5). For the real-world datasets, in the training phase of Biased CelebA we first resize the images to 224 × 224 and then apply the RandomHorizontalFlip transformation. For the Biased NICO dataset, following most previous works (Zhang et al., 2021a), we append the RandomHorizontalFlip, ColorJitter, and RandomGrayscale transformations after a RandomResizedCrop to 224 × 224. For both datasets, during testing we only resize the images. We normalize these real-world datasets with a mean of (0.485, 0.456, 0.406) and a standard deviation of (0.229, 0.224, 0.225).
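The pre-processing above maps directly onto torchvision transforms; a minimal sketch follows. The jitter strengths and the grayscale probability are not specified in the paper, so the values below are assumptions.

```python
from torchvision import transforms

# Normalization for the real-world datasets (ImageNet statistics).
NORM = transforms.Normalize(mean=(0.485, 0.456, 0.406),
                            std=(0.229, 0.224, 0.225))

# Biased CelebA training: resize, horizontal flip, normalize.
celeba_train = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    NORM,
])

# Biased NICO training: random resized crop, then flip / jitter / grayscale.
nico_train = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4),   # strengths assumed
    transforms.RandomGrayscale(p=0.1),       # probability assumed
    transforms.ToTensor(),
    NORM,
])

# Test time for both real-world datasets: resize only, then normalize.
test_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    NORM,
])
```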
C.4 TRAINING DETAILS

Our code is based on the PyTorch library. Following previous work (Hong & Yang, 2021), we use a four-layer convolutional neural network with kernel size 7 × 7 for the Colored MNIST dataset and ResNet-18 (He et al., 2016) for the Corrupted CIFAR-10, Biased CelebA, and Biased NICO datasets. For all methods and datasets, we do not load any additional pretrained weights, so that the models reflect pure debiasing capability. In the training phase, we use the Adam optimizer and a cosine annealing learning rate scheduler. For all datasets, the batch size is selected from {64, 128, 256}. Correspondingly, the learning rate is selected from {0.0001, 0.0005, 0.001, 0.005}, with the smaller values used for training the vanilla model. For all methods, including the reproduced comparison ones, we train the model for 200 epochs on Colored MNIST and Corrupted CIFAR-10, and for 50 and 100 epochs on Biased CelebA and Biased NICO, respectively.

χ2-model.
• For the first stage: we train the vanilla model for 1000 epochs with a learning rate of 1e-5 on Colored MNIST, 5e-3 on Corrupted CIFAR-10, 5e-5 on Biased CelebA, and 1e-3 on Biased NICO to extract the training dynamics. In practice, we design the Area Under Score (AUS) strategy to capture the training dynamics. All comparison methods leverage epoch-specific scores, and AUS applies to them as well, e.g., “Loss” is computed as the sum of all epoch-level losses. We generally use the ratio of identified BC samples as a hyperparameter and find that a slightly larger BC ratio brings better results in our experiments, as detailed in Table 8. In addition, for the IAS importance verification experiments in Table 1, the “step-wise” setting applies uniformly higher and lower sampling weights to BC and BA samples; the unified weights are related to the BA ratio ρ of the whole dataset, i.e., the weight on BC samples is ρ and on BA samples is 1 − ρ.
• For the second stage, the χ-branch metric learning objective: we first construct the data pools D∥ and D⊥ from the ranking. The BC identification threshold that splits these two pools can be adjusted to a suitable value without knowing the ground-truth BC ratio of the dataset. To observe the validity of IASs and unify the presentation, in the main text we report results with the threshold one level higher in {0.999, 0.995, 0.99, 0.95} than the dataset BA ratio, e.g., the threshold is 0.99 if the dataset ratio is 0.995; see more details in subsection D.4 and Table 14. We construct bias bags {Bγ, B1−γ} of different ratios and the mixed prototypes {pγ, p1−γ} by bootstrapped sampling of a batch containing almost the same numbers of BA and BC samples, using the first-stage χ-pattern scores (described in subsection 3.3). The more numerous part contains all the samples of its kind in that batch, e.g., for a large γ with a majority of BC samples, Bγ contains all the BC samples in the batch, and the remaining 1 − γ fraction of BA samples is sampled uniformly from the batch. The mixed prototypes pγ and p1−γ are extracted and constructed similarly. The mixing ratios γ are chosen from {0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}. As described in the paper, computing LCE(pγ, B1−γ) and LCE(p1−γ, Bγ) makes different ratios of mixed prototypes interact with BA or BC samples. In this process, the temperature τ of the prototype-based metric prediction (as in Eq. 6) is set from {0.01, 0.05, 0.1}.

On an NVIDIA RTX 3090 GPU, our model trains on average about 1.8× faster than LfF (Nam et al., 2020).
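The optimizer setup described above can be sketched as follows; the default learning rate and epoch count here are illustrative, since both are picked per dataset from the grids listed in the text.

```python
import torch

def make_optimizer(model, lr=5e-4, epochs=200):
    """Adam + cosine annealing, per Appendix C.4."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    return optimizer, scheduler
```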
D ADDITIONAL EXPERIMENTS

D.1 MORE OBSERVATIONS AND RESULTS IN THE FIRST STAGE

In the main text, we showed the change of the posteriors of the GT-class and the bias class in Figure 4 with four typical samples covering BC samples, intermediate attribute samples, and BA samples. Here we show more observations on the whole training set at a statistical level.
• As shown in Figure 8, in the two left columns of figures the vertical axis is the sample count and the horizontal axis is the training epoch. Each point on a curve represents how many samples are predicted as the GT-class, the bias class, or others by the model at the current epoch. The first column shows the predictions on BA samples, while the second column shows those on BC samples. We find that, even at the dataset level, the vanilla model always predicts BC samples as the bias class first. This is consistent with our observation in the main text and is, in fact, another interpretation of the right half of Figure 4.
• The two right columns of Figure 8 present further statistics at the dataset level, e.g., the third column shows the χ-shaped prediction dynamics of the BC samples over the whole dataset as the training epoch increases, which corresponds to the left half of Figure 4 in the paper. The figures in the last column show the change of the loss; the loss on the BC samples corresponds to the lower branch of the χ-shaped curve in the paper.

Further, we show more BC sample identification results of the first stage over various ratios. In Table 9, we display the top-ratio accuracy, i.e., we take the top ranking whose size equals the number of BC samples in the full training set and calculate how many ground-truth BC samples it contains. In addition, we present the average precision in Table 10. Moreover, we plot the PR curves of various first-stage methods on the Colored MNIST and Biased NICO datasets in Figure 9 and Figure 10. The results show that our method maintains excellent performance.

D.2 RESULTS WITH ERROR BARS

We run our method and the comparison methods, such as the vanilla model and Learning from Failure (LfF), multiple times and report error bars. We present the full results with 95% confidence intervals in Table 13 and with standard deviations in Figure 11.

D.3 ABLATION STUDY OF THE χ-BRANCH METRIC LEARNING OBJECTIVE

To verify that the effectiveness of our method is indeed derived from the χ-branch metric learning objective, we first remove one of the mixed-prototype and bias-bag losses, denoted “−LCE(pγ, B1−γ)” in Table 8. This removes the metric-based pulling between the BA samples and the high-BC-ratio prototypes pγ. Next, we also drop the other branch of the prototype training, i.e., we attenuate the effect of most BC samples on the low-ratio mixed prototypes p1−γ. This reduces the debiasing capability derived from the general properties shown in Figure 5 of the main text. The results show that our method with the full χ-branch objective is significantly better than a single branch at the 99.9%, 99.5%, and 99.0% ratios, and reaches the same superior level at 95.0%. Especially in the extreme setting, i.e., when BC samples are rare, the χ-branch further improves the model performance and addresses the debiasing problem comprehensively.

D.4 ROBUSTNESS OF THE χ2-MODEL WITH VARYING BC IDENTIFICATION THRESHOLDS

For the χ2-model, we use the BC identification threshold to split D∥ and D⊥. We show the influence of different thresholds in Table 14, where the vertical axis represents the ground-truth ratio of BA samples in the dataset and the horizontal axis represents the BA ratio used as a hyperparameter in the χ2-model. From this result, we find that the model is only mildly affected by the threshold.
Furthermore, since the bias bags {Bγ, B1−γ} are constructed with bootstrapped sampling that takes the presence of IASs into account, the BC identification threshold is effectively embedded in the first-stage χ-pattern scores.

E OVERALL ALGORITHM

In Algorithm 1, we show the entire pseudo-code of this work.

F DISCUSSION ABOUT THE LIMITATIONS

In this paper, we adopt a new two-stage χ2-model. However, the first stage still requires training a long-epoch vanilla model as a weak bias-capture mechanism. When two attributes have an equal learning difficulty and jointly determine the target label, our method may encounter difficulties.

Algorithm 1 Training for the χ2-model
Require: Biased training data $D_{train} = \{(x_i, y_i)\}_{i=1}^{N}$.
1: First stage: χ-shape pattern.
2: Train a vanilla model $\theta$ on $D_{train}$ with the cross-entropy loss, as in Equation 1:
3:   $L_{CE} = \mathbb{E}_{(x_i, y_i) \sim D_{train}}\big[-\log \Pr(h_\theta(x_i) = y_i \mid x_i)\big]$.
4: Record the $T$-epoch dynamics on the ground-truth label $y_i$ and the bias label $b_i(x_i, h_\theta)$:
5:   $\mathbf{L}_{CE}(x_i) = \big( L^{gt}_{CE}(x_i) = \{-\log \Pr^t(y_i \mid x_i)\}_{t=1}^{T},\; L^{b}_{CE}(x_i) = \{-\log \Pr^t(b_i(x_i, h_\theta) \mid x_i)\}_{t=1}^{T} \big)$.
6: Capture the BC samples with the two exponential χ-shape functions:
7:   $\chi_{pattern} = \big( \mathbf{p}^{gt} = \{e^{-A_1 t}\}_{t=1}^{T},\; \mathbf{p}^{b} = \{e^{A_2 t}\}_{t=1}^{T} \big)$.
8: Compute the ranking score $s(x_i)$ as the inner product over the two pairs of curves, as in Equation 3:
9:   $s(x_i) = \langle \mathbf{L}_{CE}(x_i), \chi_{pattern} \rangle = \langle L^{gt}_{CE}(x_i), \mathbf{p}^{gt} \rangle + \langle L^{b}_{CE}(x_i), \mathbf{p}^{b} \rangle = \sum_{t=1}^{T} -e^{-A_1 t} \log \Pr(h_\theta(x_i) = y_i \mid x_i) - e^{A_2 t} \log \Pr(h_\theta(x_i) = b_i(x_i, h_{\theta^t}) \mid x_i)$.
10: Second stage: χ-branch metric learning objective.
11: for each step do
12:   Construct multiple bias bags $B_\gamma$ with bootstrapping, as in Equation 4:
13:     $B_\gamma = \{(x_i, y_i) \mid \mathrm{num}(D_\perp) : \mathrm{num}(D_\parallel) = \gamma\}$,
14:   where the ratio of BC samples is $\gamma$.
15:   Build the prototype $\mathbf{p}_{\gamma,c}$ for class $c$ based on $B_\gamma$, as in Equation 5:
16:     $\mathbf{p}_{\gamma,c} = \frac{1}{K} \sum_{(x_i, y_i) \in B_\gamma} f_\phi(x_i) \cdot \mathbb{I}[y_i = c]$.
17:   Consider a high $\gamma$:
18:   for all samples $x_i \in B_{1-\gamma}$ do
19:     Classify with $\mathbf{p}_\gamma$, as in Equation 6:
20:       $\Pr(y_i \mid x_i) = \frac{\exp(-d(f_\phi(x_i), \mathbf{p}_{\gamma, y_i})/\tau)}{\sum_{c \in [C]} \exp(-d(f_\phi(x_i), \mathbf{p}_{\gamma, c})/\tau)}$.
21:     Compute $L_{CE}(\mathbf{p}_\gamma, B_{1-\gamma})$.
22:   end for
23:   for all samples $x_i \in B_\gamma$ do
24:     Classify with $\mathbf{p}_{1-\gamma}$ as above.
25:     Compute $L_{CE}(\mathbf{p}_{1-\gamma}, B_\gamma)$.
26:   end for
27:   Compute $\nabla_\phi \big[ L_{CE}(\mathbf{p}_\gamma, B_{1-\gamma}) + L_{CE}(\mathbf{p}_{1-\gamma}, B_\gamma) \big]$.
28:   Update $\phi$ with $\nabla_\phi$.
29: end for
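As a companion to Algorithm 1, here is a minimal PyTorch sketch of one second-stage step (Eqs. 4–6), reusing the `class_prototypes` helper sketched in subsection 3.3. All function names are our own, the loop is simplified to a single (Bγ, B1−γ) pair, and detaching the prototypes is our simplification rather than a detail stated in the paper.

```python
import torch
import torch.nn.functional as F

def proto_log_posterior(feats, protos, tau=0.1):
    """Eq. 6: log-softmax over negative distances to the class prototypes.

    feats: [B, d]; protos: [C, d]; returns log Pr(y|x) of shape [B, C]."""
    dist = torch.cdist(feats, protos)        # Euclidean d(f_phi(x), p_c)
    return F.log_softmax(-dist / tau, dim=1)

def chi_branch_step(f_phi, bag_hi, bag_lo, num_classes, tau=0.1):
    """One chi-structured crossover update: pull B_{1-gamma} to p_gamma
    and B_gamma to p_{1-gamma}."""
    x_hi, y_hi = bag_hi                      # high-BC-ratio bias bag
    x_lo, y_lo = bag_lo                      # low-BC-ratio bias bag
    feat_hi, feat_lo = f_phi(x_hi), f_phi(x_lo)
    # Prototypes are treated as fixed targets for the crossed branch.
    p_hi = class_prototypes(feat_hi.detach(), y_hi, num_classes)
    p_lo = class_prototypes(feat_lo.detach(), y_lo, num_classes)
    loss = (F.nll_loss(proto_log_posterior(feat_lo, p_hi, tau), y_lo)
            + F.nll_loss(proto_log_posterior(feat_hi, p_lo, tau), y_hi))
    return loss
```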
1. What is the main contribution of the paper regarding bias reduction in datasets and models? 2. What are the strengths and weaknesses of the proposed approach, particularly in its explanation and analysis? 3. Do you have any concerns or confusion regarding the presentation and assumptions made in the paper? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
I can summarize the paper only by its structure, not content-wise, since I could not catch the idea at a level such that I could formulate it in my own words or even implement it. The closest idea that I see to the suggested method of the authors is that of stable features (Cui et al.: Stable learning establishes some common ground between causal inference and machine learning, Nature Machine Intelligence, 2022). The current paper's authors want to identify bias in the dataset by some method in order to reduce it by somehow debiasing the dataset and model. The idea is to track the dynamics during training. The authors call it: "Based on the diverse learning behaviour of different attribute types, we interpolate bias attributes conditioned on the intrinsic one and compact intra-class samples to remove the effect of bias." How this tracking/interpolation is done remains unclear to me.

Strengths And Weaknesses
The pros of the paper are the abstract formulation of bias and its explanation/analysis by ColorMNIST. For the reader not familiar with bias in datasets, this is perfect. However, already Fig 2 is no longer clear to me. Is this also just a schematic view of what can happen? Will it happen like this, and how/why? Or is this a plot of a specific learning procedure, and if so, how has it been generated? From this page on, I am completely confused by the presentation of the paper. Specifically, I am now worried about the statement: "the model tends to capture the easier-to-learn attribute in the initial training progress before learning others" and "easier attribute is accidentally the bias one". If the whole paper relies on this assumption, I am very sceptical about the impact. At least, the paper of Penzel et al.: Investigating Neural Network Training on a Feature Level using Conditional Independence, ECCV Workshop on Causality in Vision, 2022, is not in accordance with it.

Clarity, Quality, Novelty And Reproducibility
The paper is very difficult to follow. Most of the text is narrative and without clear mathematical or algorithmic concretization. In addition, many terms are introduced for which I got only a vague understanding. What is a Chi-Shape? What is the relation to a Chi^2 distribution? How do we find IASs? The sentence "Then, by interpolating bias attributes conditioned on the intrinsic one, the shift of intrinsic samples is encoded, and we can rectify the biased representations." gives no information to me. How are attributes interpolated? What are intrinsic ones? How is the shift encoded? What does rectification mean? How do we record confidence w.r.t. both the target and the most obvious non-target classes along the epochs? What are target classes? What can be obvious here, and how?
ICLR
Title Learning Debiased Representations via Conditional Attribute Interpolation

Abstract An image is usually described by more than one attribute, like “shape” and “color”. When a dataset is biased, i.e., most samples have attributes spuriously correlated with the target label, a Deep Neural Network (DNN) is prone to make predictions by the “unintended” attribute, especially if it is easier to learn. To improve the generalization ability when training on such a biased dataset, we propose a χ2-model to learn debiased representations. First, we design a χ-shape pattern to match the training dynamics of a DNN and find Intermediate Attribute Samples (IASs) — samples near the attribute decision boundaries, which indicate how the value of an attribute changes from one extreme to another. Then we rectify the representation with a χ-structured metric learning objective. Conditional interpolation among IASs eliminates the negative effect of peripheral attributes and facilitates retaining the intra-class compactness. Experiments show that the χ2-model learns debiased representations effectively and achieves remarkable improvements on various datasets. (*This is the modified version with a dark blue mark.)

1 INTRODUCTION

Deep neural networks (DNNs) have emerged as an epoch-making technology in various machine learning tasks with impressive performance (LeCun et al., 2015; Bengio et al., 2021). In some real applications, an object may possess multiple attributes, and some of them are only spuriously correlated with the target label. For example, in Figure 1, the intrinsic attribute of an image annotated “lifeboat” is its shape. Although many lifeboats are colored orange, a learner cannot make predictions through the color, i.e., there is a misleading correlation from the color attribute, as if an image containing “orange” were the target “lifeboat”. When the majority of training samples can be well discerned by such a peripheral attribute, especially when learning it is easier than learning the intrinsic one, a DNN is prone to bias towards that “unintended” bias attribute (Torralba & Efros, 2011; Khosla et al., 2012; Tommasi et al., 2015; Geirhos et al., 2019; Brendel & Bethge, 2019; Xiao et al., 2021; Singla & Feizi, 2022), like recognizing a “cyclist” wearing orange as a “lifeboat”. Similar spurious attributes also exist in various applications such as recommendation systems (Cañamares & Castells, 2018; Morik et al., 2020; Zhang et al., 2021b) and natural language processing (Zhao et al., 2017; He et al., 2019; Selvaraju et al., 2019; Mendelson & Belinkov, 2021; Guo et al., 2022).

Given such a biased training dataset, how can we get rid of the negative effect of the misleading correlations? One intuitive solution is to perform special operations on those samples highly correlated with the bias attributes, which requires additional supervision, such as a pre-defined bias type (Kim et al., 2019; Wang et al., 2020; Agarwal et al., 2020; Goel et al., 2021; Tartaglione et al., 2021; Geirhos et al., 2019; Bahng et al., 2020; Minderer et al., 2020; Li et al., 2021). Since prior knowledge of the dataset bias requires expensive manual annotations and is naturally missing in some applications, learning a debiased model without additional supervision about the bias is in demand. Nam et al. (2020) identify samples with intrinsic attributes based on the observation that malignant bias attributes are often easier to learn than others.
Then the valuable samples for a debiasing scheme can be dynamically reweighted or augmented (Geirhos et al., 2019; Minderer et al., 2020; Lee et al., 2021). However, the restricted number of such samples implies uncertain representations and limits their ability to assist in debiasing. To leverage more valuable-for-debiasing knowledge, we take a further step in analyzing the representation space along the training dynamics of a naïvely trained model, especially focusing on the discrepancies between attributes with different learning difficulties. As we will later illustrate in Figure 2, an attribute-based DNN pushes on and fits the easier bias attribute initially. The intrinsic attribute is then forced to shift in a “lazy” manner. The bias attribute that is pushed away first leaves a large-margin boundary. Since the space of the other, intrinsic attribute is filled with many samples differing on the bias attribute, it has a large intra-class variance, like a “hollow”. The representation is biased toward one side of the “hollow”, i.e., those samples aligned with the bias attribute. Without the true intra-class structure, the model becomes biased.

[Figure 1: predictions of a color-biased model on example images, e.g., 47.1% amphibian, 28.1% lifeboat, 18.1% speedboat; (b) green lifeboat.]

From the above observation, it is crucial to fill the intra-class “hollow” and remodel the representation compactness. Notice that the samples shifting to the two sides of the “hollow” have different characteristics, being aligned with the bias attribute and conflicting with it, respectively. We can find samples with an intermediate attribute state between the above two kinds of samples. We call this type of sample an Intermediate Attribute Sample (IAS); such samples lie near the decision boundaries. When we condition (fix) on the intrinsic attribute, IASs vary on the other, bias attribute and are located exactly in the “hollow” with its low-density structural knowledge. Further, we can mine different samples, including IASs, based on their distinct training dynamics.

To this end, we propose our two-stage χ2-model. In the first stage, we train a vanilla model on the biased dataset and record the sample-wise training dynamics w.r.t. both the target class and the most obvious non-target class (as the bias one) along the epochs. An IAS is often predicted as a non-target class in the beginning and then switched to its target class gradually, making its dynamics plot a χ shape. Following this observation, we design a χ-shape pattern to match the training samples. The matching score ranks the mined samples according to the bias level, i.e., how much they are biased towards the side of the bias attribute. Benefiting from the IASs, we conduct conditional attribute interpolation, i.e., fixing the value of the target attribute, and interpolate the class-specific prototypes around IASs with various bias ratios. These conditionally interpolated prototypes precisely “average out” the bias attribute. From that, we design a χ-structured metric learning objective. It pulls samples close to those same-class interpolated prototypes; intra-class samples then become compact, and the influence of the bias attribute is removed.
Our χ2-model learns debiased representations effectively and achieves remarkable improvements on various datasets. Our contributions are summarized as follows:
• We claim and verify that Intermediate Attribute Samples (IASs) distributed around attribute decision boundaries facilitate learning a debiased representation.
• Based on the diverse learning behavior of different attribute types, we mine samples with varying bias levels, especially IASs. From that, we interpolate the bias attribute conditioned on the intrinsic one and compact intra-class samples to remove the negative effect of bias.
• Experiments on benchmarks and a newly constructed real-world dataset from NICO (He et al., 2021) validate the effectiveness of our χ2-model in learning debiased representations.

2 A CLOSER LOOK AT LEARNING WITH THE BIAS ATTRIBUTE

After presenting the background of learning on a biased dataset, we analyze the training dynamics of the model.

2.1 PROBLEM DEFINITION

Given a training set $D_{train} = \{(x_i, y_i)\}_{i=1}^{N}$, each sample $x_i$ is associated with a class label $y \in \{1, 2, \cdots, C\}$. We aim to find a decision rule $h_\theta$ that maps a sample to its label. $h_\theta$ is optimized by fitting all the training samples, e.g., minimizing the cross-entropy loss as follows:

$$L_{CE} = \mathbb{E}_{(x_i, y_i) \sim D_{train}} \big[ -\log \Pr(h_\theta(x_i) = y_i \mid x_i) \big]. \quad (1)$$

We denote $h_\theta(x) = \arg\max_{c \in [C]} w_c^\top f_\phi(x)$, where $f_\phi: \mathcal{X} \to \mathbb{R}^d$ is the feature extraction network and $\{w_c\}_{c \in [C]}$ is the top-layer $C$-class classifier. $\theta$ represents the union of the learnable parameters $\phi$ and $w$. We expect the learned $h_\theta$ to have high discerning ability over the test set $D_{test}$, which has the same form as the training set $D_{train}$.

In addition to its class label, a sample can be described by various attributes. If an attribute is spuriously correlated with the target label, we name it the non-target bias attribute $a_b$. The attribute that intrinsically determines the class label is the target attribute $a_y$. For example, when we draw different handwritten digits of the MNIST dataset with specific colors (Kim et al., 2019), the color attribute will not help the model generalize, since we need to discern digits by the shape, e.g., “1” looks like a stick. However, if almost all training images labeled “1” are in the same “yellow” color, the decision rule “an image in yellow is digit 1” will perform well on such a biased training set.

In the task of learning with a biased training set (Li & Vasconcelos, 2019; Kim et al., 2019; Nam et al., 2020), the bias attribute $a_b$ is consistent over most same-class samples and spuriously correlated with the target label (as for the digit “1” in “yellow” above), so a model $h_\theta$ that relies on either $a_b$ or the target attribute $a_y$ will perform well on $D_{train}$. In real-world applications, it is often easier to learn to rely on $a_b$ than on $a_y$; for example, the “background” or “texture” is easier to learn than the object (Shah et al., 2020; Xiao et al., 2021). Therefore a model is prone to recognize based on $a_b$. Such a simplicity bias (Arpit et al., 2017; Palma et al., 2019; Pérez et al., 2019; Shah et al., 2020) dramatically hurts the generalization on an unbiased test set. Nam et al. (2020) also observe that the loss dynamics indicate the easier $a_b$ is learned first, whereby the model is distracted and fails to learn $a_y$. Based on the behaviors of the “ultimate” biased model, samples in $D_{train}$ are split into two sets.
Those training samples that can be correctly predicted based on the bias attribute $a_b$ are named Bias-Aligned (BA) samples (e.g., the “yellow digit 1” above), while the remaining ones are Bias-Conflicting (BC) samples (e.g., digit “1” in other colors). The number of BC samples is extremely small, and previous methods emphasize their role with various strategies (Geirhos et al., 2019; Nam et al., 2020; Minderer et al., 2020; Lee et al., 2021). For additional related methods of learning a debiased model (Li & Vasconcelos, 2019; Clark et al., 2019; Sagawa et al., 2020; Cheng et al., 2021; Hendricks et al., 2018; Wang et al., 2019; Cadène et al., 2019; Arjovsky et al., 2019; Zhu et al., 2021; Liu et al., 2021; Kim et al., 2022; Kirichenko et al., 2022), please see Appendix B.

2.2 THE TRAINING DYNAMICS WHEN LEARNING ON A BIASED TRAINING SET

We analyze the training dynamics of a naïvely trained model as in Eq. 1 on the Colored MNIST dataset. The non-target bias attribute is the color and the target attribute is the shape. For the visualization shown in Figure 2, we set the output dimension of the penultimate layer to two. In addition to the learned classifier on the shape attribute $a_y$, we simultaneously add another linear classifier on top of the embedding to show how the decision boundary of the color attribute $a_b$ changes. More details are described in the supplementary material. Focusing on the precedence relationship between learning $a_y$ and $a_b$, we have the following observations:
• The easier-to-learn bias attribute color is fitted soon. The early training stage is shown in the first column. Both the color and the shape attribute classifiers discern by different colors and are correct on almost all BA samples (red “0” and blue “2”, about 95% of the training set).
• The target attribute shape is learned later in a “lazy” manner. To further fit all shape labels, the model focuses on the limited BC samples (blue “0” and red “2”, correspondingly about 5%) that cannot be perfectly classified by color. It pushes the minor BC representations to the other (correct) side instead of adjusting the decision boundary.
• The ahead-of-time color learning and the lagged shape learning leave a large margin around the color attribute boundary, which further triggers the intra-class “hollow” of the shape attribute. Because the representations of different colors are continuously pushed away (classified) before those of the shapes, the gaps between different color attribute clusters are significantly larger than those of the shape attribute.
• Since there is an intra-class “hollow” between BA and BC samples conditioned on a particular shape, the estimated class representation deviates toward the color. The fourth column shows that the training class centers (yellow stars) and the test ones (gray stars) are mismatched. The true class center is located in the low-density “hollow” between the shape-conditioned BA and BC samples.

The previous observations indicate that the earlier-and-later learning process on attributes of different learning difficulties causes the model to lose intra-class compactness, primarily when learning to rely on the bias attribute is easier. To alleviate the class center deviation towards the BA samples, only emphasizing the BC samples is insufficient due to their scarcity. In addition, we propose to utilize Intermediate Attribute Samples (IASs), i.e., the samples near the attribute decision boundaries, to remodel the shifted representation.
Especially when conditioned on the target attribute, the IASs vary on the bias attribute and fill in the low-density intra-class “hollow” between BA and BC samples.

3 χ2-MODEL

To mitigate the representation deviation and compact the intra-class “hollow”, we leverage IASs to encode how the bias attribute changes from one extreme (major BA samples) to another (minor BC samples). Then, the variety of the bias attribute can be interpolated when conditioned on a particular target attribute. We propose our two-stage χ2-model, whose notion is illustrated in Figure 3. First, the χ2-model discovers IASs based on the training dynamics of the vanilla model in subsection 3.1. Next, we analyze where the top-ranked samples with a χ-shape pattern lie, as well as their effectiveness in debiasing, in subsection 3.2. A conditional attribute interpolation step with IASs then fills in the low-density “hollow” to get a better estimate of the class-specific prototypes. By pulling samples to the corresponding prototype, the χ-structured metric learning makes intra-class samples compact in subsection 3.3. Following subsection 2.2, we investigate the Colored MNIST dataset; results on other datasets are consistent.

3.1 SCORING SAMPLES WITH A χ-SHAPE PATTERN

From the observations in the previous section, we aim to collect IASs to reveal how BC samples shift and leave the intra-class “hollow” between them and BA ones. As discussed in subsection 2.2, the vanilla model fits BC samples later than BA ones, which motivates us to score the samples by their training dynamics. Once we have the score pattern to match and distinguish BA and BC samples, IASs, with intermediate scores, can be extracted and made available for the next debiasing stage.

In the following, we denote the posterior of the Ground-Truth class (GT-class) $y_i$ for a sample $x_i$ as

$$\Pr(h_\theta(x_i) = y_i \mid x_i) = \mathrm{softmax}\big(w_c^\top f_\phi(x_i)\big)_{y_i}, \quad (2)$$

the larger the posterior, the more confidently the model predicts $y_i$ for $x_i$. For notational simplicity, we abbreviate this posterior as $\Pr(y_i \mid x_i)$. The target posterior of a BA sample reaches one, or becomes much higher than those of the other categories, soon after a few training epochs, while the posterior of a BC sample shows a delayed increase. To sufficiently capture the clues on the change of the bias attribute, we also analyze the posterior of the most obvious non-GT attribute, which reveals how the dataset bias influences a sample. Denote the model at the $t$-th epoch with a superscript $t$, such as $h_\theta^t$. We take the bias class for a sample $x_i$ at epoch $t$ as $b_i^t = \arg\max_{c \in [C], c \neq y_i} \big(w_c^\top f_\phi(x_i)\big)^t$. Then, we define the non-GT bias class as the most frequent $b_i^t$ over all epochs, i.e., $b_i = \mathrm{max\_freq}\{b_i^t\}_{t=1}^{T}$. A sample has a larger bias-class posterior when it has low confidence in its target class, and vice versa.

Taking the posteriors of both $y_i$ and $b_i$ into account, a BA sample has a large $\Pr(y_i \mid x_i)$ and a small $\Pr(b_i \mid x_i)$ throughout training. For a BC sample, $\Pr(y_i \mid x_i)$ increases gradually while $\Pr(b_i \mid x_i)$ decreases. We verify this phenomenon on the Colored MNIST dataset in Figure 4 (left). For BA samples (yellow “1”), the two curves form a “rectangle”, while for BC samples (blue “1”), the two curves have an obvious intersection and reveal a “χ” shape. The statistics for the change of posteriors are shown in Figure 4 (right).
Therefore, how closely the training dynamics match the “χ” shape reveals the probability that a sample has shifted from the major BA clusters to the minor BC ones. We design a χ-shape pattern for the loss dynamics to capture such BC-specific properties. The change of the sample-specific losses for the ground-truth label and the bias label over $T$ epochs is summarized by $\mathbf{L}_{CE}$. Then, we use two exponential χ-shape functions $\chi_{pattern}$ to capture the ideal loss shape of a BC sample, i.e., the severely shifted case:

$$\mathbf{L}_{CE}(x_i) = \Big( L^{gt}_{CE}(x_i) = \big\{-\log \Pr{}^t(y_i \mid x_i)\big\}_{t=1}^{T},\;\; L^{b}_{CE}(x_i) = \big\{-\log \Pr{}^t(b_i \mid x_i)\big\}_{t=1}^{T} \Big),$$
$$\chi_{pattern} = \Big( \mathbf{p}^{gt} = \big\{e^{-A_1 t}\big\}_{t=1}^{T},\;\; \mathbf{p}^{b} = \big\{e^{A_2 t}\big\}_{t=1}^{T} \Big),$$

where $A_1$ and $A_2$ are the matching factors. They can be determined from the dynamics of the prediction fluctuations; for more details please see the supplementary material. The $\chi_{pattern}$ encodes the observations for the most deviated BC samples. To match the loss dynamics with the pattern, we use the inner product over the two pairs of curves:

$$s(x_i) = \langle \mathbf{L}_{CE}(x_i), \chi_{pattern} \rangle = \langle L^{gt}_{CE}(x_i), \mathbf{p}^{gt} \rangle + \langle L^{b}_{CE}(x_i), \mathbf{p}^{b} \rangle = \sum_{t=1}^{T} -e^{-A_1 t} \log \Pr\big(h_\theta(x_i) = y_i \mid x_i\big) - e^{A_2 t} \log \Pr\big(h_\theta(x_i) = b_i \mid x_i\big). \quad (3)$$

The inner product $s(x_i)$ takes the area under the curves (AUC) into account, which is more robust w.r.t. volatile loss changes. As the score $s(x_i)$ goes from low to high, the sample varies in its bias level, i.e., from BA samples to IASs, and then to BC samples.

3.2 WHERE ARE IASS AND WHY CAN IASS HELP TO LEARN A DEBIASED MODEL?

Combining the analysis in subsection 2.2 with the samples ranked by $s(x_i)$, we find there are two types of IASs, depending on whether the representation is near the decision boundary of the target attribute (such as the “0”s with complex shapes in Figure 3) or that of the bias attribute (such as a helicopter against an intermediate, transitional “sunset” background). (1) If an IAS has an intermediate target attribute value, it may be a difficult sample and contain rich information about the target class boundaries. (2) If an IAS is in an intermediate state on the bias attribute, it may help to fill in the vacant intra-class “hollow” when conditioning (fixing) on the target attribute. Both types of IASs are similar to BC samples but from two directions, i.e., compared to BA samples, they contain richer semantics on the target or the bias attribute. In the representation space, they are scattered between BA and BC samples, compensating for the sparsity of BC samples and thus valuable for debiasing. We will show how the χ-structured objective with IASs helps to remodel the true class centers in the following subsection.

We illustrate the importance of IASs with simple experiments on the biased Colored MNIST and Corrupted CIFAR-10 datasets; details of the datasets are described in subsection 4.1. We investigate whether various reweighting strategies on the vanilla model improve the generalization ability over an unbiased test set. We use “0-1” to denote the strategy that utilizes only the BC samples. “Step-wise” means we apply a uniformly higher weight (the ratio of BA samples) to BC samples and a lower weight (one minus that ratio) to BA samples. Our “χ-pattern” smoothly reweights all samples with the matched scores, so BC samples as well as IASs have relatively larger weights than the remaining BA ones. The results are listed in Table 1, where we find that simple reweighting strategies easily improve the performance of a vanilla classifier, which verifies the importance of emphasizing BC-like samples.
Our “χ-pattern” achieves the best results in most scenarios, indicating that higher resampling weights on the IASs and BC samples assist the vanilla model to better frame the representation space.

3.3 LEARNING DEBIASED REPRESENTATION FROM A χ-STRUCTURED OBJECTIVE

Although the BA samples are severely biased towards the bias attribute, the BC samples, integrating rich bias attribute semantics, naturally make the representation independent of the biased influence (Hong & Yang, 2021). An intuitive approach for debiasing is to average over the BC samples and classify by the resulting BC class centers. However, the sparsity of BC samples induces an erratic estimate that is far from the true class center, as shown in Figure 2. Benefiting from the analysis that BC-like IASs better estimate the intra-class structure, we target conditional interpolation around them, i.e., mixing same-class samples with different BC-like scores to remodel the intermediate samples between BA and BC samples. From that, we can construct many prototypes closer to the real class center and pull samples to these prototypes to compact the intra-class space.

Combined with the soft ranking score from the χ-pattern in the previous stage, we build two pools (subsets) of samples, denoted $D_\parallel$ and $D_\perp$. The $D_\perp$ pool collects the top-ranked samples, most of which are BC samples and IASs. The $D_\parallel$ pool is sampled from the remaining (BA) part according to the score. With the help of $D_\parallel$ and $D_\perp$, we construct multiple bias bags (subsets) $B_\gamma$ with bootstrapping, where the ratio of BC samples is $\gamma$:

$$B_\gamma = \big\{ (x_i, y_i) \;\big|\; \mathrm{num}(D_\perp) : \mathrm{num}(D_\parallel) = \gamma \big\}, \quad (4)$$

where $\mathrm{num}(D)$ equals the number of samples in $D$. As $\gamma$ goes from low to high, $B_\gamma$ contains samples ranging from the extreme of BA samples to the IASs, and then to the BC ones. Based on $B_\gamma$, we compute the prototype, i.e., the average over $B_\gamma$, to interpolate the bias attribute conditioned on a particular target attribute. For example, the prototype conditioned on class $c$ is formalized as

$$\mathbf{p}_{\gamma, c} = \frac{1}{K} \sum_{(x_i, y_i) \in B_\gamma} f_\phi(x_i) \cdot \mathbb{I}[y_i = c], \quad (5)$$

where $K$ is the number of class-$c$ samples in $B_\gamma$.

To further demonstrate the significance of intra-class compactness, we design experiments to study the difference between a biased vanilla model and an unbiased oracle model (well-trained on an unbiased training set). We measure the mean distance between samples and their multiple conditional interpolated prototypes while changing the ratio $\gamma$. If the prototypes shift as $\gamma$ changes, a large intra-class deviation exists. As shown in Figure 5, for a biased model, when $\gamma$ decreases, $\mathbf{p}_\gamma$ is interpolated closer to the BA samples; the opposite phenomenon is observed on the BC samples. For the unbiased oracle model, no matter how the BC ratio $\gamma$ changes, the mean distance is almost unchanged and shows a lower variance. This coincides with the observation in Figure 2.

Motivated by mimicking the oracle, we adopt the conditional interpolated prototypes and construct a customized χ-structured metric learning task. Assuming $\gamma$ is large, we use $\mathbf{p}_\gamma$ and $\mathbf{p}_{1-\gamma}$ to denote the prototypes of bias bags with high and low BC ratios, respectively. The model is required to pull the majority, the low-BC-ratio bias bag $B_{1-\gamma}$, closer to $\mathbf{p}_\gamma$, which is interpolated into the high-BC space. Similarly, the high-BC bias bag $B_\gamma$ should be pulled to the low-BC interpolated $\mathbf{p}_{1-\gamma}$. We optimize the cross-entropy loss $L_{CE}$ to enable this pulling operation.
Concretely, the posterior via the distance $d(\cdot,\cdot)$ in the representation space is formalized as:

$$\Pr(y_i \mid x_i) = \frac{\exp(-d(f_\phi(x_i), \mathbf{p}_{\gamma, y_i})/\tau)}{\sum_{c \in [C]} \exp(-d(f_\phi(x_i), \mathbf{p}_{\gamma, c})/\tau)}, \quad (6)$$

where $\tau$ is a temperature scale. One branch of the χ-structured classification task optimizes $\mathcal{L}_{CE}$ between the samples in $\mathcal{B}_{1-\gamma}$ and $\mathbf{p}_\gamma$; the other branch simultaneously optimizes between $\mathcal{B}_\gamma$ and $\mathbf{p}_{1-\gamma}$. As shown in Figure 3, such a high-and-low correspondence captures and compacts the intra-class “hollow”. In summary, the high-BC-ratio bias bags $\mathcal{B}_\gamma$ with the corresponding low-BC interpolated prototypes $\mathbf{p}_{1-\gamma}$ (conditioned on the target attribute), together with $\mathcal{B}_{1-\gamma}$ and $\mathbf{p}_\gamma$, form the χ-structured crossover objective.

4 EXPERIMENTS

We conduct experiments to verify whether the χ2-model has effective debiasing capability. We begin by introducing the bias details of each dataset (subsection 4.1) and present the comparison approaches and training details. In subsection 4.2, the experiments show that the χ2-model achieves superior performance in each stage. Furthermore, we experimentally exemplify the inherent quality of prototype-based classification for the debiasing task and offer ablation studies in subsection 4.3.

4.1 EXPERIMENTAL SETUPS

Table 3: Classification performance on the unbiased CelebA and NICO test sets. The data source BA denotes measurement on BA samples; BC denotes measurement on the corresponding BC samples.

              Biased CelebA            NICO
Method        BA      BC      All      All
LfF           73.69   70.41   72.05    34.44
DFA           94.01   58.98   76.50    33.10
χ2-model      97.66   60.79   79.23    36.99

Datasets. To cover more general and challenging cases of bias impact, we validate the χ2-model on a variety of datasets, including two synthetic bias datasets (Colored MNIST (Bahng et al., 2020), Corrupted CIFAR-10 (Nam et al., 2020)) and two real-world datasets (Biased CelebA (Liu et al., 2015) and Biased NICO). The BA sample ratio ρ in the training set is usually high (over 95%), so the bias attribute is highly correlated with the target label. For example, in the Colored MNIST dataset, each digit is associated with one of the pre-defined bias colors. Similarly, there is an object target with corruption bias in Corrupted CIFAR-10 and a gender target with hair color bias in Biased CelebA. Following previous works (Hong & Yang, 2021), we use the BA ratio ρ ∈ {95.0%, 99.0%, 99.5%, 99.9%} for Colored MNIST and Corrupted CIFAR-10, and approximately 96% for Biased CelebA. The Biased NICO dataset is dedicatedly sampled from NICO (He et al., 2021), initially designed for OOD (Out-of-Distribution) image classification. NICO is enriched with variations in the object and context dimensions. We select the bias attribute with the highest co-occurrence frequency with the target one; e.g., helicopter and sunset correlate strongly in the training set (see the BA samples in Figure 3). The correlation ratio is roughly controlled to 86%. For more details please see the supplementary material.

Baselines. We carefully select classic and recent trending approaches as baselines: (1) the vanilla model trained with cross entropy as described in subsection 2.1; (2) bias-tailored approaches with a pre-provided bias type: RUBi and Rebias; (3) explicit approaches under the guidance of full bias supervision: EnD and DI; (4) implicit methods exploiting general bias properties: LfF and DFA.

Implementation details.
Following the existing popular benchmarks (Hong & Yang, 2021; Kim et al., 2021), we use a four-layer CNN with kernel size 7 × 7 for the Colored MNIST dataset and ResNet-18 (He et al., 2016) for the Corrupted CIFAR-10, Biased CelebA, and Biased NICO datasets. For a fair comparison, we re-implemented the baselines with the same configuration. We mainly focus on unbiased test accuracy over all categories. All models are trained on an NVIDIA RTX 3090 GPU. More details are in the supplementary material.

Baselines for the first stage. To better demonstrate the effectiveness of the χ-pattern, we consider related sample-specific scoring methods (Pleiss et al., 2020; Zhao et al., 2021) and report average precision, top-threshold accuracy, and the minimum number of samples (threshold) required to reach 98% accuracy. For more results, such as PR curves, please see the supplementary material.

4.2 QUANTITATIVE EVALUATION

Table 4: Performance of BC sample mining on Colored MNIST with a 99.5% BA ratio. Acc. denotes the mean accuracy of the top-300 ranking. 98%-σ denotes the number of samples required to contain 98% of the BC samples. AP is average precision. ↑ means higher is better; ↓ is the opposite.

Measure                          Acc. ↑   98%-σ ↓   AP ↑
Entropy (Joshi et al., 2009)     78.33    632       83.52
Confidence (Li & Sethi, 2006)    80.33    590       85.61
Loss (Nam et al., 2020)          94.39    418       98.22
Pleiss et al. (2020)             82.67    686       89.24
Zhao et al. (2021)               90.33    451       96.04
χ-pattern                        95.84    372       98.44

Performance of the χ-shape pattern. As shown in Table 4, our χ-pattern matching achieves state-of-the-art performance on all evaluation metrics. Thus, the χ-structured metric learning objective can leverage more IAS cues to interpolate the bias attribute and further learn the debiased representation.

χ2-model under different types of bias construction. (1) Synthetic bias on Colored MNIST and Corrupted CIFAR-10: from Table 2 we find that under extreme bias influence, e.g., when ρ is 99.9%, the performance of the vanilla model and the other baselines decreases catastrophically. In contrast, our χ2-model maintains robust and efficient debiasing capability on the unbiased dataset. Further results in Figure 2 present the remarkable performance of our χ2-model compared to the other methods. (2) Real-world bias on Biased CelebA and Biased NICO: Table 3 shows that, compared to recent methods which, like ours, are not provided with any bias information in advance, our method also achieves remarkable performance. The above experiments indicate that conditional interpolation among IASs feeds back the shift of the intrinsic knowledge and facilitates learning debiased representations even under extremely biased conditions.

4.3 FURTHER ANALYSIS

The inherent debiasing capability of prototype-based classification. We directly construct the prototype by averaging the trained representations of the vanilla model (line two of Table 2, named “+ p”). The results show that on some datasets, like Colored MNIST, the prototype-based classifier achieves a performance improvement without additional training.

Visualizing the test set representation in a 2D embedding space via t-SNE. Figure 6 shows the 2D projection of the features extracted by the χ2-model on Colored MNIST. We color the target and bias attributes separately. The representations cluster into classes by the target attribute, which indicates that our model learns debiased representations.
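As a reference for how the ranking metrics of Table 4 can be computed from first-stage scores, a minimal sketch (the metric definitions follow the table caption; the function and variable names are ours):

```python
import numpy as np
from sklearn.metrics import average_precision_score

def mining_metrics(scores, is_bc, top_k=300, coverage=0.98):
    """Evaluate BC-sample mining from ranking scores (cf. Table 4).

    scores: chi-pattern score per training sample (higher = more BC-like).
    is_bc:  boolean array with the ground-truth BC membership.
    """
    order = np.argsort(-scores)                      # rank high scores first
    acc_top = is_bc[order[:top_k]].mean() * 100      # Acc.: top-k precision
    need = int(np.ceil(coverage * is_bc.sum()))      # 98%-sigma: ranking depth
    hits = np.cumsum(is_bc[order])                   # needed to cover 98% of
    sigma = int(np.searchsorted(hits, need) + 1)     # all BC samples
    ap = average_precision_score(is_bc, scores) * 100
    return acc_top, sigma, ap
```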
Ablation studies. We further perform an ablation analysis of the matching factors A1 and A2 in Eq. 3, which directly determine the χ-shape curves. The results show that the first stage of the χ2-model is robust to changes in these hyperparameters. For more related experiments, e.g., on different BC identification thresholds, please see the supplementary material.

5 CONCLUSION

Although intra-class biased samples with a “hollow” structure impede learning debiased representations, we propose the χ2-model to leverage Intermediate Attribute Samples (IASs) to capture how samples with the intrinsic attribute shift. The χ2-model works in a two-stage manner: matching and ranking possible IASs based on their χ-shape training dynamics, followed by a χ-branch metric-based debiasing objective with conditional attribute interpolation.

Appendix
• Appendix A: An example of the color-biased model.
• Appendix B: Additional related work (cf. subsection 2.1).
• Appendix C: Implementation details and hyper-parameter settings (cf. subsection 4.1).
• Appendix D: Additional experiments, ablation studies, and robustness analysis.
• Appendix E: Overall algorithm.
• Appendix F: Discussion about the limitations.

A AN EXAMPLE OF THE COLOR-BIASED MODEL ON AN ORANGE LIFEBOAT

[Figure: three top-5 prediction lists of the color-biased model on lifeboat-related images (labels translated from Chinese): (i) lifeboat 57.0%, tandem bicycle 13.8%, toy shop 9.0%, mountain bike 4.7%, football helmet 3.9%; (ii) canoe 29.8%, speedboat 19.6%, amphibian 17.1%, paddle 14.0%, lakeside 8.5%; (iii) lifeboat 97.8%, beacon 0.5%, container ship 0.4%, dock 0.4%, fireboat 0.3%.]

B RELATED WORK BY WHAT BIAS INFORMATION IS PROVIDED IN ADVANCE

There are various methods of learning a debiased model from a biased training set.

Debiasing under the guidance of bias supervision. This thread of methods introduces fully explicit bias attribute supervision and an additional model branch to predict the bias label. Kim et al. (2019) leverage bias clues to minimize the mutual information between the representation and the bias attributes with gradient reversal layers (Ganin et al., 2016). Similarly, Li & Vasconcelos (2019) use an RGB vector as color side information to conduct minimax bias mitigation. Clark et al. (2019) and Wang et al. (2020) utilize the auxiliary bias instruction to train relevant independent models and ensemble their predictions. Sagawa et al. (2020) and Goel et al. (2021) balance the performance of bias subgroups under distribution shift. Tartaglione et al. (2021) and Cheng et al. (2021) directly regularize the bias attribute to disentangle the confused bias representations.

Debiasing with bias prior knowledge. Many real-world applications limit access to sufficient bias supervision. However, a relaxed condition can be met that provides prior knowledge of the bias (e.g., the bias type). Many methods highlight that the content bias type plays an important role in CNN object recognition (Hendricks et al., 2018; Geirhos et al., 2019; Li et al., 2021). Based on such observations, several approaches adopt the bias type to build a bias-capturing module. Wang et al. (2019) remove texture bias through latent space projection with the gray-level co-occurrence matrix (Lam, 1996). Bahng et al. (2020) encourage the debiased model to learn representations independent from a designed biased one. Other approaches mitigate dataset bias in natural language processing with logit re-weighting (Cadène et al., 2019).

Debiasing through general intrinsic bias properties.
Towards more practical applications, this line of methods takes full advantage of the bias properties themselves, requiring neither explicit bias supervision nor pre-defined bias prior knowledge. Nam et al. (2020) make a comprehensive analysis of the properties of bias. Their observations motivate a two-branch training strategy: a biased model trained with the Generalized Cross-Entropy loss (Zhang & Sabuncu, 2018) amplifies its “prejudice” on BA samples, while a debiased model focuses more on samples that go against the prejudice of the biased one. Similarly, Lee et al. (2021) fit one of the encoders to the bias attribute and randomly swap the latent features to act as augmented BC samples. Other approaches consider the learning shortcuts of the model revealed by high gradients of the latent vectors (Darlow et al., 2020; Huang et al., 2020).

C IMPLEMENTATION DETAILS

C.1 TRAINING DYNAMIC VISUALIZATION OF FIGURE 2

To visualize the 2D attribute boundary, we first add an extra linear projection layer wproj ∈ R^{d×2} behind the feature extraction network and correspondingly modify the top-layer classifier wc to classify on the 2D features. After training is completed, we directly present the 2D features of the data and the top-layer classifier in Figure 2. Secondly, to fairly compare different attributes and the feedback of their gradients, we jointly train the attribute classifiers with a shared feature extraction network. This ensures their features are consistent and comparable across the classifiers of different attributes. Figure 2 shows the results of the above model trained on Colored MNIST with a BA ratio of 0.95, where the learning rate is 0.00001. The two digit (shape) classes in the figure are 2 and 8; correspondingly, the two color classes are purple and green. The samples of 2 in purple and 8 in green are BA ones; in contrast, the samples of 2 in green and 8 in purple are BC ones. The BA-sample ratio is roughly 0.95.

C.2 DATASETS

Colored MNIST. Following most previous work (Nam et al., 2020; Hong & Yang, 2021; Lee et al., 2021), we construct Colored MNIST by coloring each digit and keeping the background black; in other words, every target-attribute digit in Colored MNIST is highly correlated with a specific bias-attribute color. The severity degree chosen to calibrate the dataset bias difficulty was 1, as in previous works. The different bias-aligned (BA) ratios contain different numbers of BA samples; e.g., at the ratio of 99.9% we have 59,940 BA samples and 60 bias-conflicting (BC) samples in the training set. Similarly, the ratio of 99.5% has {59,940; 60} BA and BC samples, correspondingly. In the same way, for the other ratios of BA and BC samples, the ratio of 99.0% is {58,402; 598} and the ratio of 95.0% is {57,000; 3,000}.

Corrupted CIFAR-10. For the Corrupted CIFAR-10 dataset, we follow earlier work (Lee et al., 2021) and choose 10 corruption types, i.e., {Snow, Frost, Fog, Brightness, Contrast, Spatter, Elastic, JPEG, Pixelate, Saturate}. The corruption type is highly correlated with the target classes PLANE, CAR, BIRD, CAT, DEER, DOG, FROG, HORSE, SHIP, and TRUCK. Similarly, we choose severity 1 from the original paper (Nam et al., 2020). The numbers of BA and BC samples for each BA ratio are: 99.9%: {49,950; 50}, 99.5%: {49,750; 250}, 99.0%: {49,500; 500}, 95.0%: {47,500; 2,500}.
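To make the synthetic-bias construction concrete, a minimal sketch of how a Colored MNIST-style biased split could be assembled; the color palette and the foreground-tinting scheme are illustrative assumptions of ours, not the exact protocol of Bahng et al. (2020):

```python
import numpy as np

# Ten pre-defined bias colors, one per digit class (RGB values are our choice).
BIAS_COLORS = np.array([
    [255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 0], [255, 0, 255],
    [0, 255, 255], [255, 128, 0], [128, 0, 255], [0, 128, 128], [128, 128, 0],
], dtype=np.float32)

def colorize(images, labels, ba_ratio, rng):
    """Build a biased split: with probability ba_ratio a digit gets its
    class-aligned color (BA sample); otherwise the color of a different
    class is drawn (BC sample). images: (N, 28, 28) grayscale in [0, 1].
    """
    n = len(images)
    aligned = rng.random(n) < ba_ratio
    color_ids = labels.copy()
    for i in np.where(~aligned)[0]:
        color_ids[i] = rng.choice([c for c in range(10) if c != labels[i]])
    # Keep the background black and tint only the foreground strokes.
    colored = images[:, :, :, None] * BIAS_COLORS[color_ids][:, None, None, :]
    return colored, aligned
```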
Biased CelebA. Following the experimental configuration of previous works, we intentionally truncated a portion of the CelebA dataset so that the target attribute (BlondHair or not) is skewed towards the bias attribute Male. The sample counts per (BlondHair, Male) pair are as follows: the BC pairs are (0, 0) with 1,558 samples and (1, 1) with 1,098 samples; the BA pairs are (1, 0) with 18,279 samples and (0, 1) with 53,577 samples.

Biased NICO. The Biased NICO dataset is dedicatedly sampled from NICO (He et al., 2021), which was originally designed for Non-I.I.D. or OOD (Out-of-Distribution) image classification. NICO is enriched with variations in the object and context dimensions. Concretely, there are two superclasses, Animal and Vehicle, with 10 classes (BEAR, BIRD, CAT, COW, DOG, ELEPHANT, HORSE, MONKEY, RAT, and SHEEP) for Animal and 9 classes (AIRPLANE, BICYCLE, BOAT, BUS, CAR, HELICOPTER, MOTORCYCLE, TRAIN, and TRUCK) for Vehicle. Each object class has 9 or 10 contexts. We select the bias attribute with the highest co-occurrence frequency with the target one, i.e., DOG on snow, BIRD on grass, CAT eating, BOAT on beach, BEAR in forest, HELICOPTER in sunset, BUS in city, COW lying, ELEPHANT in river, MOTORCYCLE in street, MONKEY in water, TRUCK on road, RAT at home, BICYCLE with people, AIRPLANE aside mountain, SHEEP walking, HORSE running, CAR on track, and TRAIN at station. The quantitative details of each class are shown in Table 5, and the details divided by bias attribute are shown in Table 6. The remaining bias attributes that do not appear in the BA samples are: {at wharf, at airport, aside traffic light, eating grass, white, in cage, in hole, in garage, cross bridge, at park, yacht, flying, aside tree, black, standing, sitting, at night, double decker, on sea, around cloud, with pilot, in sunrise, in hand, on booth, aside people, at sunset, brown, on shoulder, spotted, subway, in race, climbing, cross tunnel, velodrome, on bridge, shared, at yard, in circus, on ground, on tree, at heliport, taking off, on branch, wooden, sailboat, in zoo}; these are few in number, about four samples each. In the test set they are balanced with the remaining bias attributes. The training set's total correlation ratio is roughly 86.27%.

C.3 DATA PRE-PROCESSING

The image sizes of Colored MNIST and Corrupted CIFAR-10 are 28 × 28 and 32 × 32, respectively. We feed the original images into the model and do not use data augmentation transformations during training or testing. We directly normalize the data from Colored MNIST and Corrupted CIFAR-10 with a mean of (0.5, 0.5, 0.5) and a standard deviation of (0.5, 0.5, 0.5). For the real-world datasets, in the training phase of Biased CelebA we first resize the images to 224 × 224 and then apply the RandomHorizontalFlip transformation. For the Biased NICO dataset, following most previous works (Zhang et al., 2021a), we append the RandomHorizontalFlip, ColorJitter, and RandomGrayscale transformations after a RandomResizedCrop to 224 × 224. For both datasets, during testing we only resize the images. We normalize these real-world datasets with a mean of (0.485, 0.456, 0.406) and a standard deviation of (0.229, 0.224, 0.225).

C.4 TRAINING DETAILS

Our code is based on the PyTorch library. Following previous work (Hong & Yang, 2021), we use a four-layer convolutional neural network with kernel size 7 × 7 for the Colored MNIST dataset and ResNet-18 (He et al., 2016) for the Corrupted CIFAR-10, Biased CelebA, and Biased NICO datasets.
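For reference, the pre-processing of C.3 maps to torchvision pipelines roughly as follows; this is a sketch, and the ColorJitter and RandomGrayscale strengths are our assumptions, since the text leaves them unspecified:

```python
from torchvision import transforms

IMAGENET_NORM = transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))

# Biased CelebA training: resize, then horizontal flip (C.3).
celeba_train = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    IMAGENET_NORM,
])

# Biased NICO training: flip/jitter/grayscale appended after the crop (C.3).
nico_train = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4),   # strength assumed
    transforms.RandomGrayscale(p=0.1),       # probability assumed
    transforms.ToTensor(),
    IMAGENET_NORM,
])

# Synthetic datasets are fed in at native size with plain normalization.
synthetic = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
```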
For all methods and datasets, we do not load any additional pretrained weights, so that the models reflect pure debiasing capability. In the training phase, we use the Adam optimizer and a cosine annealing learning rate scheduler. For all datasets, the batch size is selected from {64, 128, 256}; correspondingly, the learning rate is selected from {0.0001, 0.0005, 0.001, 0.005}, with the smaller values used for training the vanilla model. For all methods, including the reproduced comparison ones, we train the model for 200 epochs on Colored MNIST and Corrupted CIFAR-10, and for 50 and 100 epochs on Biased CelebA and Biased NICO, respectively.

χ2-model.
• For the first stage: we train the vanilla model for 1000 epochs with a learning rate of 1e-5 on Colored MNIST, 5e-3 on Corrupted CIFAR-10, 5e-5 on Biased CelebA, and 1e-3 on Biased NICO to extract the training dynamics. In practice, we design the Area Under Score (AUS) strategy to capture the training dynamics; all comparison methods leverage epoch-specific scores, and AUS applies to them as well, e.g., the Loss score is computed as the sum of all epoch-level losses. We generally use the ratio of divided BC samples as a hyperparameter and find that a slightly larger BC ratio brings better results in our experiments, as detailed in Table 8. In addition, for the IAS importance verification experiments in Table 1, the “step-wise” setting applies uniformly higher and lower sampling weights to BC and BA samples; the unified weights are related to the BA ratio ρ of the whole dataset, i.e., the weight on BC samples is ρ and on the BA ones is 1 − ρ.
• For the second stage, the χ-branch metric learning objective: we first construct the data pools D∥ and D⊥ with the ranking. The BC identification threshold that splits these two data pools can be adjusted to a suitable value without knowing the ground-truth dataset BC ratio. To observe the validity of IASs and unify the style, in the main text we report the results one level higher in {0.999, 0.995, 0.99, 0.95} than the dataset BC ratio, i.e., the threshold is 0.99 if the dataset BC ratio is 0.995. See more details in subsection D.4 and Table 14. We construct different ratios of bias bags {Bγ, B1−γ} and mixed prototypes {pγ, p1−γ} by bootstrap-sampling a batch containing almost the same numbers of BA and BC samples using the first-stage χ-pattern score (described in subsection 3.3). The more numerous part keeps all of its samples from that batch; e.g., for a large γ, where the BC part is the majority, Bγ contains all the BC samples of the batch, and the remaining 1 − γ ratio of BA samples is sampled uniformly from the batch. The mixed prototypes pγ and p1−γ are extracted and constructed similarly. The mixing ratios γ are from {0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}. As described in the paper, the computation of LCE(pγ, B1−γ) and LCE(p1−γ, Bγ) makes different ratios of mixed prototypes interact with BA or BC samples. In this process, we set the temperature τ of the metric-based prediction with mixed prototypes (as in Eq. 6) from {0.01, 0.05, 0.1}.

Our model's average training on an NVIDIA RTX 3090 GPU is about 1.8× faster than that of LfF (Nam et al., 2020).
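A minimal sketch of the AUS aggregation mentioned in the first-stage details above (the array shapes and the function name are ours):

```python
import numpy as np

def area_under_score(epoch_scores):
    """Area Under Score (AUS): collapse epoch-wise sample scores into a single
    per-sample value by summing over the T recorded epochs. Applied to the
    per-epoch losses, this yields the summed-loss variant described above.
    epoch_scores: array of shape (T, N), one score per epoch per sample."""
    return np.asarray(epoch_scores, dtype=np.float64).sum(axis=0)
```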
D ADDITIONAL EXPERIMENTS

D.1 MORE OBSERVATIONS AND RESULTS IN THE FIRST STAGE

In the main text, we have shown the change of the posterior over the GT class and the bias one in Figure 4, with four typical samples among BC samples, intermediate attribute samples, and BA samples. Here we show more observations over the whole training set at the statistical level.

• As shown in Figure 8, the vertical axis of the figures in the left two columns is the sample count and the horizontal axis is the training epoch. Each point on a curve represents how many samples are predicted as the GT class, the bias class, or others by the model at the current epoch. The first column shows the predictions on BA samples, while the second column shows the predictions on BC ones. It can be found that for BC samples, even at the dataset level, the vanilla model always predicts the bias class first. This is consistent with our observation in the original paper; in fact, it is another interpretation of the right half of Figure 4 in the paper.

• The right two columns of Figure 8 also present more statistical information at the dataset level; e.g., the third column shows the χ-shaped predictions of BC samples over the whole dataset as the training epoch increases, which corresponds to the left half of Figure 4 in the paper. The figures in the last column show the change of the loss. It can be found that the loss on the BC samples corresponds to the lower branch of the χ-shaped curve in the paper.

Further, we show more BC sample identification results of the first stage over various ratios. In Table 9, we display the top-ratio accuracy, i.e., we take the top-ranked samples, as many as there are BC samples in the full training set, and count how many ground-truth BC samples they contain. In addition, we present the average precision in Table 10. Moreover, we plot the PR curves of the various first-stage methods on the Colored MNIST and Biased NICO datasets in Figure 9 and Figure 10. The results show that our method maintains excellent performance.

D.2 RESULTS WITH ERROR BARS

We run our methods and the comparison methods, such as vanilla training and Learning from Failure (LfF), multiple times and report error bars. We present the full results with both the 95% confidence interval (Table 13) and the standard deviation (Figure 11).

D.3 ABLATION STUDY OF THE χ-BRANCH METRIC LEARNING OBJECTIVE

To verify whether the effectiveness of our method is indeed derived from the χ-branch metric learning objective, we first remove one of the mixed-prototype and bias-bag losses, denoted “−LCE(pγ, B1−γ)” in Table 8. This substantially removes the metric-based pulling relationship between the BA samples and the high-BC-ratio prototypes pγ. Next, we also drop the other branch of the prototype training, i.e., attenuate the effect of most BC samples on the low-ratio mixed prototypes p1−γ. This reduces the debiasing capability obtained from the general properties shown in Figure 5 of the original paper. The results show that our method with the χ-branch objective is significantly better than a single branch at the 99.9%, 99.5%, and 99.0% ratios, and achieves the same superior level at 95.0%. Especially in the extreme environment, i.e., when BC samples are rare, the χ-branch further improves the model performance and comprehensively addresses the debiasing problem.

D.4 ROBUSTNESS OF THE χ2-MODEL WITH VARYING BC IDENTIFICATION THRESHOLDS

For the χ2-model, we use the BC identification thresholds to split D∥ and D⊥. We show the influence of different thresholds in Table 14, where the vertical axis represents the ground-truth ratio of BA samples in the dataset and the horizontal axis represents the ratio of BA samples used as a hyperparameter in the χ2-model. From this result, we can find that the model is only mildly affected by the thresholds. Furthermore, since the bias bags {Bγ, B1−γ} are constructed with the presence of IASs taken into account and are built by bootstrapped sampling, the BC identification threshold is effectively already embedded in the first-stage χ-pattern scores.
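As a side note to D.2, a minimal sketch of how a 95% confidence interval over repeated runs can be computed (the t-interval choice and the example numbers are our illustration):

```python
import numpy as np
from scipy import stats

def mean_ci95(accs):
    """Mean and 95% confidence half-interval over repeated runs (cf. D.2),
    using the t-distribution, which suits a small number of runs."""
    accs = np.asarray(accs, dtype=np.float64)
    half = stats.t.ppf(0.975, df=len(accs) - 1) * stats.sem(accs)
    return accs.mean(), half  # report as mean +/- half

# e.g., five hypothetical runs of unbiased test accuracy:
print(mean_ci95([72.1, 71.4, 73.0, 72.6, 71.9]))
```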
E OVERALL ALGORITHM

In Algorithm 1, we show the entire pseudo-code of this work.

F DISCUSSION ABOUT THE LIMITATIONS

In this paper, we adopt a new two-stage χ2-model. However, the first stage still requires training a long-epoch vanilla model as a weaker bias-capture mechanism. When two attributes have equal learning difficulty and jointly determine the target label, our method may encounter difficulties.

Algorithm 1 Training for the χ2-model
Require: Biased training data $\mathcal{D}_{train} = \{(x_i, y_i)\}_{i=1}^{N}$.
1: First stage: χ-shape pattern.
2: Train a vanilla model θ on $\mathcal{D}_{train}$ with the cross-entropy loss of Eq. 1:
3:   $\mathcal{L}_{CE} = \mathbb{E}_{(x_i, y_i)\sim\mathcal{D}_{train}}[-\log \Pr(h_\theta(x_i) = y_i \mid x_i)]$.
4: Record the T-epoch dynamics on the ground-truth label $y_i$ and the bias label $b_i(x_i, h_\theta)$:
5:   $\mathcal{L}_{CE}(x_i) = \big(\mathcal{L}^{gt}_{CE}(x_i) = \{-\log \Pr_t(y_i \mid x_i)\}_{t=1}^{T};\ \mathcal{L}^{b}_{CE}(x_i) = \{-\log \Pr_t(b_i(x_i, h_\theta) \mid x_i)\}_{t=1}^{T}\big)$.
6: Capture the BC pattern with two exponential χ-shape functions:
7:   $\chi_{\mathrm{pattern}} = \big(\mathbf{p}^{gt} = \{e^{-A_1 t}\}_{t=1}^{T};\ \mathbf{p}^{b} = \{e^{A_2 t}\}_{t=1}^{T}\big)$.
8: Compute the ranking score $s(x_i)$ as the inner product over the two curves (Eq. 3):
9:   $s(x_i) = \langle \mathcal{L}_{CE}(x_i), \chi_{\mathrm{pattern}} \rangle = \langle \mathcal{L}^{gt}_{CE}(x_i), \mathbf{p}^{gt}\rangle + \langle \mathcal{L}^{b}_{CE}(x_i), \mathbf{p}^{b}\rangle = \sum_{t=1}^{T} -e^{-A_1 t}\log\Pr(h_\theta(x_i) = y_i \mid x_i) - e^{A_2 t}\log\Pr(h_\theta(x_i) = b_i(x_i, h_{\theta^t}) \mid x_i)$.
10: Second stage: χ-branch metric learning objective.
11: for each step do
12:   Construct multiple bias bags $\mathcal{B}_\gamma$ with bootstrapping (Eq. 4):
13:     $\mathcal{B}_\gamma = \{(x_i, y_i) \mid \mathrm{num}(\mathcal{D}_\perp) : \mathrm{num}(\mathcal{D}_\parallel) = \gamma\}$,
14:     where the ratio of BC samples is γ.
15:   Build the prototype $\mathbf{p}$ for class c based on $\mathcal{B}_\gamma$ (Eq. 5):
16:     $\mathbf{p}_{\gamma,c} = \frac{1}{K}\sum_{(x_i, y_i)\in\mathcal{B}_\gamma} f_\phi(x_i)\cdot\mathbb{I}[y_i = c]$.
17:   Consider a high γ:
18:   for all samples $x_i \in \mathcal{B}_{1-\gamma}$ do
19:     Classify with $\mathbf{p}_\gamma$ (Eq. 6):
20:       $\Pr(y_i \mid x_i) = \frac{\exp(-d(f_\phi(x_i), \mathbf{p}_{\gamma,y_i})/\tau)}{\sum_{c\in[C]}\exp(-d(f_\phi(x_i), \mathbf{p}_{\gamma,c})/\tau)}$.
21:     Compute $\mathcal{L}_{CE}(\mathbf{p}_\gamma, \mathcal{B}_{1-\gamma})$.
22:   end for
23:   for all samples $x_i \in \mathcal{B}_\gamma$ do
24:     Classify with $\mathbf{p}_{1-\gamma}$ as before.
25:     Compute $\mathcal{L}_{CE}(\mathbf{p}_{1-\gamma}, \mathcal{B}_\gamma)$.
26:   end for
27:   Compute $\nabla_\phi\,[\mathcal{L}_{CE}(\mathbf{p}_\gamma, \mathcal{B}_{1-\gamma}) + \mathcal{L}_{CE}(\mathbf{p}_{1-\gamma}, \mathcal{B}_\gamma)]$.
28:   Update ϕ with $\nabla_\phi$.
29: end for
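To connect the second stage of Algorithm 1 (lines 17–27) to code, a minimal PyTorch sketch of the χ-branch objective; the distance d(·,·) is assumed Euclidean and the temperature value is illustrative, as neither is fixed in the text:

```python
import torch
import torch.nn.functional as F

def proto_log_posterior(feats, protos, tau=0.1):
    """Metric-based posterior of Eq. 6: a softmax over negative distances
    between sample features (B, d) and class prototypes (C, d), scaled by tau."""
    dist = torch.cdist(feats, protos)        # (B, C) pairwise Euclidean distances
    return F.log_softmax(-dist / tau, dim=1)

def chi_branch_loss(f_low, y_low, f_high, y_high, p_high, p_low, tau=0.1):
    """The two crossed branches: pull the low-BC bag toward the high-BC
    prototypes and the high-BC bag toward the low-BC prototypes."""
    loss_a = F.nll_loss(proto_log_posterior(f_low, p_high, tau), y_low)
    loss_b = F.nll_loss(proto_log_posterior(f_high, p_low, tau), y_high)
    return loss_a + loss_b
```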
1. What is the primary objective of the paper regarding spurious correlation? 2. What is the proposed method for debiasing, and how does it capture training dynamics? 3. What are the strengths and weaknesses of the paper, particularly in its assumptions and experiments? 4. How would you assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper aims at discerning and alleviating the effect of spurious correlation without specific supervision during the model learning process. The authors propose to learn debiased representations by capturing the training dynamics of a deep model with a χ2-model. Specifically, it is employed to find the Intermediate Attribute Samples that are near the decision boundaries, which can be easily rectified with χ-branch metric learning. The proposed method has been evaluated on several benchmarks and demonstrated its superiority in learning debiased representations.

Strengths And Weaknesses
Strengths:
• The paper is well motivated and clearly written.
• The proposed method is interesting and proven to be effective.
• The involved experiments are comprehensive.
Weaknesses:
• The major weakness is the underlying assumption that the learning difficulties of the bias and non-bias attributes are distinct. While this may hold true in many scenarios, it is not always the case in real-world applications.

Clarity, Quality, Novelty And Reproducibility A paper with proper motivation and originality, of good quality.
ICLR
Title Learning Debiased Representations via Conditional Attribute Interpolation

Abstract An image is usually described by more than one attribute, like “shape” and “color”. When a dataset is biased, i.e., most samples have attributes spuriously correlated with the target label, a Deep Neural Network (DNN) is prone to make predictions by the “unintended” attribute, especially if it is easier to learn. To improve the generalization ability when training on such a biased dataset, we propose a χ2-model to learn debiased representations. First, we design a χ-shape pattern to match the training dynamics of a DNN and find Intermediate Attribute Samples (IASs), samples near the attribute decision boundaries, which indicate how the value of an attribute changes from one extreme to another. Then we rectify the representation with a χ-structured metric learning objective. Conditional interpolation among IASs eliminates the negative effect of peripheral attributes and facilitates retaining the intra-class compactness. Experiments show that the χ2-model learns debiased representations effectively and achieves remarkable improvements on various datasets. (*This is the modified version with a dark blue mark.)

1 INTRODUCTION

Deep neural networks (DNNs) have emerged as an epoch-making technology in various machine learning tasks with impressive performance (LeCun et al., 2015; Bengio et al., 2021). In some real applications, an object may possess multiple attributes, and some of them are only spuriously correlated to the target label. For example, in Figure 1, the intrinsic attribute of an image annotated “lifeboat” is its shape. Although many lifeboats are colored orange, a learner cannot make predictions through the color; i.e., the misleading attribute correlation would be that an image containing the “orange” color is the target “lifeboat”. When the majority of training samples can be well discerned by such a peripheral attribute, especially when learning it is easier than learning the intrinsic one, a DNN is prone to bias towards that “unintended” bias attribute (Torralba & Efros, 2011; Khosla et al., 2012; Tommasi et al., 2015; Geirhos et al., 2019; Brendel & Bethge, 2019; Xiao et al., 2021; Singla & Feizi, 2022), like recognizing a “cyclist” wearing orange as a “lifeboat”. Similar spurious attributes also exist in various applications such as recommendation systems (Cañamares & Castells, 2018; Morik et al., 2020; Zhang et al., 2021b) and natural language processing (Zhao et al., 2017; He et al., 2019; Selvaraju et al., 2019; Mendelson & Belinkov, 2021; Guo et al., 2022).

Given such a biased training dataset, how can we get rid of the negative effect of the misleading correlations? One intuitive solution is to perform special operations on those samples highly correlated to the bias attributes, which requires additional supervision, such as a pre-defined bias type (Kim et al., 2019; Wang et al., 2020; Agarwal et al., 2020; Goel et al., 2021; Tartaglione et al., 2021; Geirhos et al., 2019; Bahng et al., 2020; Minderer et al., 2020; Li et al., 2021). Since prior knowledge of the dataset bias requires expensive manual annotations and is naturally missing in some applications, learning a debiased model without additional supervision about the bias is in demand. Nam et al. (2020) identify samples with intrinsic attributes based on the observation that malignant bias attributes are often easier to learn than others.
Then the valuable samples for a debiasing scheme can be dynamically reweighted or augmented (Geirhos et al., 2019; Minderer et al., 2020; Lee et al., 2021). However, the restricted number of such samples implies uncertain representations and limits their ability to assist in debiasing. To leverage more valuable-for-debiasing knowledge, we take a further step in analyzing the representation space under naïve training dynamics, especially focusing on the discrepancies between attributes of different learning difficulties. As we will later illustrate in Figure 2, an attribute-based DNN pushes and fits the easier bias attribute initially; the intrinsic attribute is then forced to shift in a “lazy” manner. The bias attribute that is pushed away first leaves a large-margin boundary. Since the space of the other, intrinsic attribute is filled with samples of many different bias-attribute values, it has a large intra-class variance, like a “hollow”. The representation is biased toward one side of the “hollow”, i.e., those samples aligned with the bias attribute. Without the true intra-class structure, the model becomes biased.

[Figure 1: top-5 predictions of a color-biased model on lifeboat-related images; e.g., panel (b), a green lifeboat, is predicted as amphibian 47.1%, lifeboat 28.1%, speedboat 18.1%. The raw prediction lists (labels translated from Chinese): lifeboat 57.0%, tandem bicycle 13.8%, toy shop 9.0%, mountain bike 4.7%, football helmet 3.9%; canoe 29.8%, speedboat 19.6%, amphibian 17.1%, paddle 14.0%, lakeside 8.5%; lifeboat 97.8%, beacon 0.5%, container ship 0.4%, dock 0.4%, fireboat 0.3%.]

From the above observation, it is crucial to fill the intra-class “hollow” and remodel the representation compactness. Notice that the samples shifting to the two sides of the “hollow” have different characteristics: aligned with the bias attribute and conflicting with it, respectively. We can find samples with an intermediate attribute state between the above two kinds. We call this type of sample an Intermediate Attribute Sample (IAS); IASs are near the decision boundary. When we condition (fix) on the intrinsic attribute, IASs vary on the other, bias attribute and are exactly located in the “hollow” with its low-density structural knowledge. Further, we can mine different samples, including IASs, based on their distinct training dynamics.

To this end, we propose our two-stage χ2-model. In the first stage, we train a vanilla model on the biased dataset and record the sample-wise training dynamics w.r.t. both the target class and the most obvious non-target class (as the bias one) along the epochs. An IAS is often predicted as a non-target class in the beginning and then gradually switched to its target class, making its dynamics plot a χ shape. Following this observation, we design a χ-shape pattern to match the training samples. The matching score ranks the mined samples according to their bias level, i.e., how much they are biased towards the side of the bias attribute. Benefiting from the IASs, we conduct conditional attribute interpolation, i.e., fixing the value of the target attribute, and interpolate class-specific prototypes around IASs with various bias ratios. These conditionally interpolated prototypes precisely “average out” the bias attribute. From that, we design a χ-structured metric learning objective. It pulls samples close to those same-class interpolated prototypes, so intra-class samples become compact and the influence of the bias attribute is removed.
Our χ2-model learns debiased representations effectively and achieves remarkable improvements on various datasets. Our contributions are summarized as follows:
• We claim and verify that Intermediate Attribute Samples (IASs) distributed around attribute decision boundaries facilitate learning a debiased representation.
• Based on the diverse learning behavior of different attribute types, we mine samples with varying bias levels, especially IASs. From that, we interpolate the bias attribute conditioned on the intrinsic one and compact intra-class samples to remove the negative effect of bias.
• Experiments on benchmarks and a newly constructed real-world dataset from NICO (He et al., 2021) validate the effectiveness of our χ2-model in learning debiased representations.

2 A CLOSER LOOK AT LEARNING WITH THE BIAS ATTRIBUTE

After introducing the background of learning on a biased dataset, we analyze the training dynamics of the model.

2.1 PROBLEM DEFINITION

Given a training set $\mathcal{D}_{train} = \{(x_i, y_i)\}_{i=1}^{N}$, each sample $x_i$ is associated with a class label $y \in \{1, 2, \cdots, C\}$. We aim to find a decision rule $h_\theta$ that maps a sample to its label. $h_\theta$ is optimized by fitting all the training samples, e.g., minimizing the cross-entropy loss:

$$\mathcal{L}_{CE} = \mathbb{E}_{(x_i, y_i)\sim\mathcal{D}_{train}}[-\log \Pr(h_\theta(x_i) = y_i \mid x_i)]. \quad (1)$$

We denote $h_\theta = \arg\max_{c\in[C]} w_c^\top f_\phi(x)$, where $f_\phi(\cdot) \in \mathbb{R}^d$ is the feature extraction network and $\{w_c\}_{c\in[C]}$ is the top-layer C-class classifier. θ represents the union of the learnable parameters ϕ and w. We expect the learned $h_\theta$ to have high discerning ability over the test set $\mathcal{D}_{test}$, which has the same form as the training set $\mathcal{D}_{train}$.

In addition to its class label, a sample can be described by various attributes. If an attribute is spuriously correlated with the target label, we call it the non-target bias attribute $a_b$. The attribute that intrinsically determines the class label is the target attribute $a_y$. For example, when we draw different handwritten digits in the MNIST dataset with specific colors (Kim et al., 2019), the color attribute will not help model generalization, since we need to discern digits by shape, e.g., “1” looks like a stick. However, if almost all training images labeled “1” are in the same “yellow” color, the decision rule image in “yellow” is digit “1” will perform well on such a biased training set.

In the task of learning with a biased training set (Li & Vasconcelos, 2019; Kim et al., 2019; Nam et al., 2020), the bias attribute $a_b$ is consistent across most same-class samples and spuriously correlated with the target label (as with the digit “1” in “yellow” above), so a model $h_\theta$ that relies on either $a_b$ or the target attribute $a_y$ will perform well on $\mathcal{D}_{train}$. In real-world applications, it is often easier to learn to rely on $a_b$ than on $a_y$; e.g., “background” or “texture” is easier to learn than the object (Shah et al., 2020; Xiao et al., 2021). Therefore a model is prone to recognize based on $a_b$. Such a simplicity bias (Arpit et al., 2017; Palma et al., 2019; Pérez et al., 2019; Shah et al., 2020) dramatically hurts generalization to an unbiased test set. Nam et al. (2020) also observe that the loss dynamics indicate the easier $a_b$ is learned first, where the model is distracted and fails to learn $a_y$. Based on the behaviors of the “ultimate” biased model, samples in $\mathcal{D}_{train}$ are split into two sets.
Those training samples that can be correctly predicted based on the bias attribute $a_b$ are named Bias-Aligned (BA) samples (e.g., the “yellow digit 1” above), while the remaining ones are Bias-Conflicting (BC) samples (e.g., digits “1” in other colors). The number of BC samples is extremely small, and previous methods emphasize their role with various strategies (Geirhos et al., 2019; Nam et al., 2020; Minderer et al., 2020; Lee et al., 2021). For additional related methods of learning a debiased model (Li & Vasconcelos, 2019; Clark et al., 2019; Sagawa et al., 2020; Cheng et al., 2021; Hendricks et al., 2018; Wang et al., 2019; Cadène et al., 2019; Arjovsky et al., 2019; Zhu et al., 2021; Liu et al., 2021; Kim et al., 2022; Kirichenko et al., 2022), please see Appendix B.

2.2 THE TRAINING DYNAMICS WHEN LEARNING ON A BIASED TRAINING SET

We analyze the training dynamics of a naïvely trained model (Eq. 1) on the Colored MNIST dataset. The non-target bias attribute is the color and the target attribute is the shape. For the visualization shown in Figure 2, we set the output dimension of the penultimate layer to two. In addition to the learned classifier on the shape attribute $a_y$, we simultaneously add another linear classifier on top of the embedding to show how the decision boundary of the color attribute $a_b$ changes. More details are described in the supplementary material. Focusing on the precedence relationship between learning $a_y$ and $a_b$, we have the following observations:

• The easier-to-learn bias attribute color is fitted soon. The early training stage is shown in the first column. Both the color and shape attribute classifiers discern by different colors and do correctly on almost all BA samples (red “0” and blue “2”, about 95% of the training set).

• The target attribute shape is learned later, in a “lazy” manner. To further fit all shape labels, the model focuses on the limited BC samples (blue “0” and red “2”, correspondingly about 5%) that cannot be perfectly classified by color. It pushes the minor BC representations to the other (correct) side instead of adjusting the decision boundary.

• The color-first, shape-later learning process leaves a large margin at the color attribute boundary, which further triggers the intra-class “hollow” of the shape attribute. Because the representations of different colors are continuously pushed away (classified) before those of the shape, the gaps between different color-attribute clusters are significantly larger than those of the shape attribute.

• Since there is an intra-class “hollow” between BA and BC samples conditioned on a particular shape, the true class representation is deviated toward color. The fourth column shows that the training class centers (yellow stars) and the test ones (gray stars) are mismatched. The true class center is located in the low-density “hollow” between the shape-conditioned BA and BC samples.

The previous observations indicate that the earlier-and-later learning process on attributes of different learning difficulties causes the model to lose intra-class compactness, primarily when learning to rely on the bias attribute is easier. To alleviate the class-center deviation towards the BA samples, only emphasizing the BC samples is insufficient due to their scarcity. In addition, we propose to utilize Intermediate Attribute Samples (IASs), i.e., samples near the attribute decision boundary, to remodel the shifted representation.
Especially when conditioned on the target attribute, the IASs vary on the bias attribute and fill in the low-density intra-class “hollow” between BA and BC samples.

3 χ2-MODEL

To mitigate the representation deviation and compact the intra-class “hollow”, we leverage IASs to encode how the bias attribute changes from one extreme (the major BA samples) to another (the minor BC samples). Then, the variety of the bias attribute can be interpolated when conditioned on a particular target attribute. We propose our two-stage χ2-model, whose notion is illustrated in Figure 3. First, the χ2-model discovers IASs based on the training dynamics of the vanilla model (subsection 3.1). Next, we analyze where the top-ranked samples with a χ-shape pattern are, as well as their effectiveness in debiasing (subsection 3.2). A conditional attribute interpolation step with IASs then fills in the low-density “hollow” to get a better estimate of the class-specific prototypes. By pulling the samples to the corresponding prototypes, the χ-structured metric learning makes intra-class samples compact (subsection 3.3). Following subsection 2.2, we investigate the Colored MNIST dataset; results on other datasets are consistent.

3.1 SCORING SAMPLES WITH A χ-SHAPE PATTERN

From the observations in the previous section, we aim to collect IASs to reveal how BC samples shift and leave the intra-class “hollow” between them and the BA ones. As discussed in subsection 2.2, the vanilla model fits BC samples later than BA ones, which motivates us to score the samples by their training dynamics. Once we have the score pattern to match and distinguish BA and BC samples, the IASs, with intermediate scores, can be extracted and made available for the next, debiasing stage. In the following, we denote the posterior of the Ground-Truth class (GT class) $y_i$ for a sample $x_i$ as

$$\Pr(h_\theta(x_i) = y_i \mid x_i) = \mathrm{softmax}\big(\{w_c^\top f_\phi(x_i)\}_{c\in[C]}\big)_{y_i}; \quad (2)$$

the larger the posterior, the more confidently a model predicts $y_i$ for $x_i$. For notational simplicity, we abbreviate the posterior as $\Pr(y_i \mid x_i)$. The target posterior of a BA sample reaches one, or becomes much higher than that of the other categories, soon after training for several epochs, while the posterior of a BC sample has a delayed increase. To sufficiently capture the clues on the change of the bias attribute, we also analyze the posterior of the most obvious non-GT attribute, which reveals how the dataset bias influences a sample. We mark the model at the t-th epoch with a superscript t, e.g., $h_\theta^t$. We take the bias class for the sample $x_i$ at epoch t as $b_i^t = \arg\max_{c\in[C], c\neq y_i}\big(w_c^\top f_\phi(x_i)\big)^t$. Then, we define the non-GT bias class as the most frequent $b_i^t$ along all epochs, i.e., $b_i = \mathrm{max\_freq}\{b_i^t\}_{t=1}^{T}$. A sample has a larger bias-class posterior when it has low confidence in its target class, and vice versa. Taking the posteriors of both $y_i$ and $b_i$ into account, a BA sample has a large $\Pr(y_i \mid x_i)$ and a small $\Pr(b_i \mid x_i)$ along all its training epochs. For a BC sample, $\Pr(y_i \mid x_i)$ increases gradually while $\Pr(b_i \mid x_i)$ decreases. We verify the phenomenon on the Colored MNIST dataset in Figure 4 (left). For BA samples (yellow “1”), the two curves form a “rectangle”, while for BC samples (blue “1”), the two curves have an obvious intersection and reveal a “χ” shape. The statistics for the change of posteriors are shown in Figure 4 (right).
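To make the bookkeeping of this subsection concrete, a minimal sketch of recording per-epoch posteriors and deriving the bias class b_i; the data-loader format and device handling are our assumptions:

```python
import torch
import torch.nn.functional as F
from collections import Counter

@torch.no_grad()
def record_epoch_posteriors(model, loader, device="cuda"):
    """One epoch-level pass: per-sample class posteriors as in Eq. 2."""
    model.eval()
    posts, labels = [], []
    for x, y in loader:                      # assumed (input, label) batches
        posts.append(F.softmax(model(x.to(device)), dim=1).cpu())
        labels.append(y)
    return torch.cat(posts), torch.cat(labels)   # shapes (N, C) and (N,)

def bias_class(epoch_posts, labels):
    """b_i = most frequent non-GT argmax class across the T recorded epochs."""
    b = []
    for i, y in enumerate(labels.tolist()):
        votes = []
        for post in epoch_posts:             # list of T tensors of shape (N, C)
            p = post[i].clone()
            p[y] = -1.0                       # exclude the GT class
            votes.append(int(p.argmax()))
        b.append(Counter(votes).most_common(1)[0][0])
    return torch.tensor(b)
```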
Therefore, how much the training dynamics match the “χ” shape reveals the probability of a sample that shifts from the major BA clusters to minor BC ones. We design a χ-shape for the dynamics of losses to capture such BC-specific properties. The change of sample-specific loss for ground-truth label and bias label over T epochs could be summarized by LCE. Then, we use two exponential χ-shape functions χpattern to capture the ideal loss shape of the BC sample, i.e., the severely shifted case. LCE (xi) = ( LgtCE (xi) = { − log Prt (yi | xi) }T t=1 LbCE (xi) = { − log Prt (bi | xi) }T t=1 ) , χpattern = ( pgt = { e−A1t }T t=1 pb = { eA2t }T t=1 ) , where A1 and A2 are the matching factors. They could be determined based on the dynamics of prediction fluctuations. For more details please see the supplementary material. The χpattern encodes the observations for the most deviated BC samples. To match the loss dynamics with the pattern, we use the inner product over the two curves: s(xi) = ⟨LCE (xi) , χshape⟩ = ⟨LgtCE (xi) ,p gt⟩+ ⟨LbCE (xi) ,pb⟩ (3) = T∑ t=1 −e−A1t · log Pr (hθ (xi) = yi | xi)− eA2t · log Pr (hθ (xi) = bi | xi) . The inner product s(xi) takes the area under the curves (AUC) into account, which is more robust w.r.t. the volatile loss changes. When s(xi) score goes from low to high, the sample varies on the bias level, i.e., from BA samples to IASs, and then to BC samples. 3.2 WHERE IASS ARE AND WHY IASS CAN HELP TO LEARN A DEBIASED MODEL? Combining the analysis in subsection 2.2 and collecting ranked samples by s(xi), we find there are two types of IASs according to the representation near the target attribute decision boundary (as “0” for complex shapes in Figure 3), or that of the bias attribute (as helicopter in intermediate transitional “sunset” background). (1) If an IAS has an intermediate target attribute value, it may be a difficult samples and contains rich information about the target class boundaries. (2) If an IAS is in an intermediate state on the bias attribute, it may help to fill in the intra-class vacant “hollow” when conditioning (fixing) on the target attribute. Both types of IASs are similar to BC samples but from two directions, i.e., compared to the BA samples, they contain richer semantics on target or bias attributes. In the representation space, they are scattered between BA and BC samples, compensating for the sparsity of BC samples and valuable for debiasing. We will show how χ-structured objective with IASs help to remodel the true class centers in the following subsection. We illustrate the importance of IASs with simple experiments on biased Colored-MNIST and Corrupted CIFAR-10 datasets. Details of the datasets are described in subsection 4.1. We investigate whether various reweighting strategies on the vanilla model improve the generalization ability over an unbiased test set. We use “0-1” to denote the strategy that utilizes only the BC samples. “step-wise” means we apply uniformly higher (ratio of BA samples) and lower weights (one minus above ratio) to BC and BA samples. Our “χ-pattern” smoothly reweights all samples with the matched scores, where BC samples as well as IASs have relatively larger weights than the remaining BA ones. The results are listed in Table 1, where we find that simple reweighting strategies easily improve the performance of a vanilla classifier, which verifies the importance of emphasizing BC-like samples. 
Our “χ-pattern” gets the best results in most scenarios, indicating that higher resampling weights on the IASs and BC samples assist the vanilla model to better frame the representation space. 3.3 LEARNING DEBIASED REPRESENTATION FROM A χ-STRUCTURED OBJECTIVE Although the BA samples are severely biased towards the bias attribute, the BC samples, integrating the rich bias attribute semantics, naturally make the representation independent of the biased influence (Hong & Yang, 2021). An intuitive approach for debiasing is to average over BC samples and Biased Model Unbiased Model classified by the BC class centers. However, the sparsity of BC samples induces an erratic estimation which is far from the true class center, as shown in Figure 2. Benefiting from the analysis that BC-like IASs better estimate the intra-class structure, we target conditional interpolating around it, i.e., mixing the same-class samples with different BC-like scores to remodel the intermediate samples between BA and BC samples. From that, we can construct many prototypes closer to the real class center and pull samples to these prototypes to compact the intra-class space. Combined with the soft ranking score from the χ-pattern in the previous stage, we build two pools (subsets) of the samples denoted as D∥ and D⊥. The D⊥ pool collects the top-rank sampels and most of them are BC samples and IASs. The D∥ pool is sampled from the remaining (BA) part according to the score. With the help of D∥ and D⊥, we construct multiple bias bags (subset) Bγ with bootstrapping where the ratio of BC samples is γ. Bγ = { (xi, yi) ∣∣ num (D⊥) : num (D∥) = γ} , (4) where num (D) equals the number of samples in D. When γ is low to high, the Bγ contains samples ranging from the extremes of BA samples to the IASs, and then to the BC ones. Based on Bγ , we compute the prototype, i.e., averaged on Bγ to interpolate bias attribute conditioned on the particular target attribute. For example, the prototype conditioned on class c is formalized as pγ,c: pγ,c = 1 K ∑ (xi,yi)∈Bγ fϕ (xi) · I [yi = c] . (5) To further demonstrate the significance of intra-class compactness, we design the experiments to study the difference between a biased vanilla model and the unbiased oracle model (well-trained on an unbiased training set). We measure the mean distance between samples and their multiple conditional interpolated prototypes with changing ratio γ. If the prototypes are shifted with changing γ, that indicates a large intra-class deviation exists. As shown in Figure 5, for a biased model, when γ decreases, pγ is interpolated closer to the BA samples. Opposite phenomena are observed in the BC samples. As for the unbiased oracle model, no matter how the BC ratio γ changes, such mean distance is almost unchanged and shows a lower variance. This coincides with the observation in Figure 2. Motivated by mimicking the oracle, we adopt the conditional interpolated prototypes and construct a customized χ-structured metric learning task. Assuming γ is large, we use pγ and p1−γ to denote prototypes in bias bags B with high and low BC ratios. The model is required to be more concerned with pulling the majority of low BC ratio bias bag B1−γ closer to pγ , which interpolated into the high BC space. Similarly, the high BC bias bag Bγ should be pulled to low BC interpolated p1−γ . We optimize the cross-entropy loss LCE to enable the pulling operation. 
Concretely, the posterior via the distance d (·, ·) in the representation space is formalized as: Pr (yi | xi) = exp (−d (fϕ (xi) ,pγ,yi) /τ)∑ c∈[C] exp (−d (fϕ (xi) ,pγ,c) /τ) , (6) where τ is a scaled temperature. One of the branches of the χ-structure classification task is optimizing the LCE between samples in the B1−γ and pγ . Similarly, the other branch is optimizing between Bγ and p1−γ at the same time. As shown in Figure 3, such a high-and-low correspondence captures and compacts the intra-class “hollow”. In summary, The bias bags of high BC ratios Bγ with corresponding low BC interpolated prototypes conditioning on the target attribute p1−γ , and B1−γ with pγ form the χ-structure crossover objective. 4 EXPERIMENTS We conduct experiments to verify whether χ2-model has effective debiasing capability. We begin by introducing bias details in each dataset (as in subsection 4.1). We present the comparison approaches and training details. In subsection 4.2, the experiments show that χ2-model achieves superior performance in each stage. Furthermore, we experimentally exemplify the inherent quality of the prototype-based classification for the debiasing task and offer the ablation studies in subsection 4.3. 4.1 EXPERIMENTAL SETUPS Table 3: The classification performance on the unbiased CelebA and NICO test set. The data source BA denotes the measurement on BA samples and BC is corresponding the BC samples. Data Biased CelebA NICO Source BA BC All All LfF 73.69 70.41 72.05 34.44 DFA 94.01 58.98 76.50 33.10 χ2-model 97.66 60.79 79.23 36.99 Datasets. To cover more general and challenging cases of bias impact, we validate χ2-model in a variety of datasets, including two synthetic bias datasets (Colored MNIST (Bahng et al., 2020), Corrupted CIFAR-10 (Nam et al., 2020)) and two real-world datasets (Biased CelebA (Liu et al., 2015) and Biased NICO). The BA samples ratio ρ in the training set is usually high (over 95%), so the bias attribute is highly correlated with the target label. For example in the Colored MNIST dataset, each digit is associated with one of the pre-defined bias colors. Similarly, there is an object target with corruption bias in Corrupted CIFAR-10 and a gender target with hair color bias in Biased CelebA. Following the previous works (Hong & Yang, 2021), we use the BA ratio ρ ∈ {95.0%, 99.0%, 99.5%, 99.9%} for Colored MNIST and Corrupted CIFAR-10, respectively, and approximately 96% for Biased CelebA. The Biased NICO dataset is dedicatedly sampled in NICO (He et al., 2021), initially designed for OOD (Out-of-Distribution) image classification. NICO is enriched with variations in the object and context dimensions. We select the bias attribute with the highest co-occurrence frequency to the target one, e.g., helicopter to sunset in training set correlates strongly (see BA samples in Figure 3). The correlation ratio is roughly controlled to 86%. For more details please see the supplementary material. Baselines. We carefully select the classic and the latest trending approaches as baselines: (1) Vanilla model training with cross entropy as described in subsection 2.1. (2) Biastailored approaches with pre-provided bias type: RUBi, Rebias. (3) Explicit approaches under the guidance of total bias supervision: EnD and DI. (4) Implicit methods through general bias properties: LfF and DFA. Implementation details. 
Following the existing popular benchmarks (Hong & Yang, 2021; Kim et al., 2021), we use the four-layer CNN with kernel size 7 × 7 for the Colored MNIST dataset and ResNet-18 (He et al., 2016) for Corrupted CIFAR-10, Biased CelebA, and Biased NICO datasets. For a fair comparison, we re-implemented the baselines with the same configuration. We mainly focus on unbiased test accuracy for all categories. All models are trained on an NVIDIA RTX 3090 GPU. More details are in the supplementary material. Baselines for the first stage. To better demonstrate the effectiveness of χ-pattern, we consider related sample-specific scoring methods (Pleiss et al., 2020; Zhao et al., 2021) and report average precision, top-threshold accuracy, and the minimum samples (threshold) required for 98% accuracy. For more results, such as PR curves, please see the supplementary material. 4.2 QUANTITATIVE EVALUATION Table 4: The performance of BC samples mining on Colored MNIST with 99.5% BA ratio. Acc. denotes mean accuracy of ranking with top-300. 98%-σ denotes the number of samples required to contain 98% of BC samples. AP is average precision. ↑ means higher is better, while ↓ is the opposite. Measure Acc. ↑ 98%-σ ↓ AP ↑ Entropy(Joshi et al., 2009) 78.33 632 83.52 Confidence(Li & Sethi, 2006) 80.33 590 85.61 Loss(Nam et al., 2020) 94.39 418 98.22 Pleiss et al. (2020) 82.67 686 89.24 Zhao et al. (2021) 90.33 451 96.04 χ-pattern 95.84 372 98.44 Performance of χ-shape pattern. As shown in Table 4, our χ-pattern matching achieves state-of-the-art performance on various evaluation metrics. Thus, the χ-structure metric learning objective can leverage more IASs cues to interpolate bias attribute and further learn the debiased representation. χ2-model in different types of bias constructions. (1) Synthetic bias on Colored MNIST and Corrupted CIFAR-10: From Table 2 we find that under extreme bias influence, as ρ is 99.9%, the performance of the vanilla model and other baselines decreases catastrophically. In contrast, Our χ2-model maintains the robust and efficient debiasing capability on the unbiased dataset. Further, more results in Figure 2 present the remarkable performance of our χ2-model compared to other methods. (2) Real-world bias on Biased CelebA and Biased NICO: Table 3 shows that compared to the recent methods which do not pre-provide any bias information in advance as the same as ours, our method also achieves a remarkable performance. The above experiments indicate that conditional interpolation among IASs feedback the shift of the intrinsic knowledge and facilitate learning debiased representations even in extremely biased conditions. 4.3 FURTHER ANALYSIS The inherent debiasing capability of prototype-based classification. We directly construct the prototype by averaging the trained representations of the vanilla model (as in Table 2 line two named “+ p”). The results show that on some datasets like Colored MNIST, the prototype-based classifier without training achieves performance improvement. Visualize the test set representation on 2D embedding space via t-SNE. Figure 6 shows the 2D projection of the feature extracted by χ2-model on Colored MNIST. We color the target and bias attributes separately. The representations follow the target attribute to cluster into classes which indicates that our model learns the debiased representations. Ablation studies. We further perform the ablation analysis of the matching factors A1, A2 in Eq. ??, which directly determine the χ-shape curves. 
The results show that the first stage of the χ2-model is robust to changes in these hyperparameters. For more related experiments, e.g., on different BC identification thresholds, please see the supplementary material.

5 CONCLUSION

Although intra-class biased samples with a "hollow" structure impede learning debiased representations, we propose the χ2-model, which leverages Intermediate Attribute Samples (IASs) to capture how samples shift along the intrinsic attribute. The χ2-model works in a two-stage manner: it matches and ranks possible IASs based on their χ-shape training dynamics, followed by a χ-branch metric-based debiasing objective with conditional attribute interpolation.

Appendix

• Appendix A: An example of the color-biased model.
• Appendix B: Additional related work (cf. subsection 2.1).
• Appendix C: Implementation details and hyper-parameter settings (cf. subsection 4.1).
• Appendix D: Additional experiments, ablation studies, and robustness analysis.
• Appendix E: Overall algorithm.
• Appendix F: Discussion of the limitations.

A AN EXAMPLE OF THE COLOR-BIASED MODEL ON AN ORANGE LIFEBOAT

Top-5 class predictions with confidence scores:

lifeboat_0.5703651905059814 - tandem bicycle_0.1379992961883545 - toy shop_0.08981618285179138 - mountain bike_0.04711327701807022 - football helmet_0.03929492458701134

canoe_0.2981044054031372 - speedboat_0.19624267518520355 - amphibious vehicle_0.17111073434352875 - paddle_0.13951753079891205 - lakeside_0.08463700115680695

lifeboat_0.9784454107284546 - lighthouse_0.004665153566747904 - container ship_0.003798476653173566 - dock_0.003606501966714859 - fireboat_0.0030068345367908478

B RELATED WORK BY WHAT BIAS INFORMATION IS PROVIDED IN ADVANCE

There are various methods for learning a debiased model from a biased training set.

Debiasing under the guidance of bias supervision. This thread of methods introduces full explicit bias-attribute supervision and an additional model branch to predict the label of the bias. Kim et al. (2019) leverage bias clues to minimize the mutual information between the representation and the bias attributes with gradient reversal layers (Ganin et al., 2016). Similarly, Li & Vasconcelos (2019) use RGB vectors as color side information to conduct minimax bias mitigation. Clark et al. (2019) and Wang et al. (2020) utilize auxiliary bias supervision to train independent bias-specific models and ensemble their predictions. Sagawa et al. (2020) and Goel et al. (2021) balance the performance of bias subgroups under distribution shift. Tartaglione et al. (2021) and Cheng et al. (2021) directly regularize the bias attribute to disentangle the confused bias representations.

Debiasing with bias prior knowledge. Many real-world applications limit access to sufficient bias supervision. However, a relaxed condition can often be met by providing prior knowledge of the bias (e.g., the bias type). Several works highlight that bias types such as texture play an important role in CNN object recognition (Hendricks et al., 2018; Geirhos et al., 2019; Li et al., 2021). Based on such observations, several approaches adopt the bias type to build a bias-capturing module. Wang et al. (2019) remove texture bias through latent-space projection with the gray-level co-occurrence matrix (Lam, 1996). Bahng et al. (2020) encourage the debiased model to learn representations independent of a deliberately biased one. Other approaches mitigate dataset bias in natural language processing with logit re-weighting (Cadène et al., 2019).

Debiasing through general intrinsic bias properties.
Towards more practical applications, this line of methods takes full advantage of general bias properties and requires neither explicit bias supervision nor pre-defined bias prior knowledge. Nam et al. (2020) make a comprehensive analysis of the properties of bias. Their observations motivate a two-branch training strategy: a biased model trained with the Generalized Cross-Entropy loss (Zhang & Sabuncu, 2018) amplifies its "prejudice" on BA samples, while a debiased model focuses more on samples that go against the prejudice of the biased one. Similarly, Lee et al. (2021) fit one of the encoders to the bias attribute and randomly swap the latent features to serve as augmented BC samples. Other approaches consider the learning shortcuts of the model revealed by high gradients of the latent vectors (Darlow et al., 2020; Huang et al., 2020).

C IMPLEMENTATION DETAILS

C.1 TRAINING DYNAMIC VISUALIZATION OF FIGURE 2

To visualize the 2D attribute boundary, we first add an extra linear projection layer w_proj ∈ R^{d×2} behind the feature extraction network and correspondingly modify the top-layer classifier w_c to classify on the 2D features. After training is completed, we directly plot the 2D features of the data and the top-layer classifier in Figure 2. Second, to compare different attributes and their gradient feedback fairly, we jointly train the attribute classifiers with a shared feature extraction network. This ensures that the features are consistent and comparable across the classifiers of different attributes. Figure 2 shows the results of the above model trained on Colored MNIST with a BA ratio of 0.95 and a learning rate of 0.00001. The two digit (shape) classes in the figure are 2 and 8; correspondingly, the two color classes are purple and green. The purple 2s and the green 8s are BA samples; in contrast, the green 2s and the purple 8s are BC samples. The ratio of BA to BC samples is roughly 0.95.

C.2 DATASETS

Colored MNIST. Following most previous work (Nam et al., 2020; Hong & Yang, 2021; Lee et al., 2021), we construct Colored MNIST by coloring each digit and keeping the background black; in other words, every target-attribute digit in Colored MNIST is highly correlated with a specific bias-attribute color. We set the severity level that calibrates the bias difficulty to 1, as in previous works. Different bias-aligned (BA) ratios contain different numbers of BA samples; e.g., at the 99.9% ratio we have 59,940 BA samples and 60 bias-conflicting (BC) samples in the training set. Likewise, the 99.5% ratio has {59,700; 300} BA and BC samples, the 99.0% ratio has {59,402; 598}, and the 95.0% ratio has {57,000; 3,000}.

Corrupted CIFAR-10. For the Corrupted CIFAR dataset, we follow earlier work (Lee et al., 2021) and choose 10 corruption types, i.e., {Snow, Frost, Fog, Brightness, Contrast, Spatter, Elastic, JPEG, Pixelate, Saturate}. Each corruption type is highly correlated with one of the target classes PLANE, CAR, BIRD, CAT, DEER, DOG, FROG, HORSE, SHIP, and TRUCK. As above, we choose severity 1 from the original paper (Nam et al., 2020). The numbers of BA and BC samples for each BA ratio are: 99.9%-{49,950; 50}, 99.5%-{49,750; 250}, 99.0%-{49,500; 500}, 95.0%-{47,500; 2,500}.
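For concreteness, the following is a minimal NumPy sketch of how a Colored-MNIST-style biased training set can be constructed at a given BA ratio. It is our own illustration of the recipe above, not the authors' code; the color palette and function names are assumptions.

```python
import numpy as np

# Hypothetical per-class bias colors (RGB); the paper's exact palette may differ.
BIAS_COLORS = np.random.RandomState(0).randint(0, 255, size=(10, 3))

def colorize(gray_digits, labels, ba_ratio=0.995, seed=0):
    """Build a Colored-MNIST-style biased set: with probability `ba_ratio` a
    digit gets its class-aligned color (BA sample), otherwise a random other
    color (BC sample). `gray_digits` is (N, 28, 28) in [0, 1]."""
    rng = np.random.RandomState(seed)
    n = len(labels)
    colored = np.zeros((n, 28, 28, 3), dtype=np.float32)
    is_ba = rng.rand(n) < ba_ratio
    for i, (img, y) in enumerate(zip(gray_digits, labels)):
        c = y if is_ba[i] else rng.choice([k for k in range(10) if k != y])
        # Tint the digit while keeping the background black.
        colored[i] = img[..., None] * (BIAS_COLORS[c] / 255.0)
    return colored, is_ba

# e.g., ba_ratio=0.999 yields roughly 59,940 BA and 60 BC samples on 60,000 digits.
```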
Biased CelebA. Following the experimental configuration of previous works, we intentionally truncated a portion of the CelebA dataset so that each target attribute (BlondHair or not) is skewed towards the bias attribute Male. The sample counts per (BlondHair, Male) pair are as follows: the BC pairs are {BlondHair=0, Male=0}: 1,558 and {BlondHair=1, Male=1}: 1,098; the BA pairs are {BlondHair=1, Male=0}: 18,279 and {BlondHair=0, Male=1}: 53,577.

Biased NICO. The Biased NICO dataset is dedicatedly sampled from NICO (He et al., 2021), which was originally designed for Non-I.I.D. or OOD (Out-of-Distribution) image classification. NICO is enriched with variations along the object and context dimensions. Concretely, there are two superclasses, Animal and Vehicle, with 10 classes (BEAR, BIRD, CAT, COW, DOG, ELEPHANT, HORSE, MONKEY, RAT, and SHEEP) for Animal and 9 classes (AIRPLANE, BICYCLE, BOAT, BUS, CAR, HELICOPTER, MOTORCYCLE, TRAIN, and TRUCK) for Vehicle. Each object class has 9 or 10 contexts. We select the bias attribute with the highest co-occurrence frequency with the target one, i.e., DOG on snow, BIRD on grass, CAT eating, BOAT on beach, BEAR in forest, HELICOPTER in sunset, BUS in city, COW lying, ELEPHANT in river, MOTORCYCLE in street, MONKEY in water, TRUCK on road, RAT at home, BICYCLE with people, AIRPLANE aside mountain, SHEEP walking, HORSE running, CAR on track, and TRAIN at station. The quantitative details of each class are shown in Table 5, and the details divided by bias attribute are shown in Table 6. The remaining bias attributes, which do not appear in the BA samples, are: {at wharf, at airport, aside traffic light, eating grass, white, in cage, in hole, in garage, cross bridge, at park, yacht, flying, aside tree, black, standing, sitting, at night, double decker, on sea, around cloud, with pilot, in sunrise, in hand, on booth, aside people, at sunset, brown, on shoulder, spotted, subway, in race, climbing, cross tunnel, velodrome, on bridge, shared, at yard, in circus, on ground, on tree, at heliport, taking off, on branch, wooden, sailboat, in zoo}; these are few in number, about 4 samples each. In the test set they are balanced with the remaining bias attributes. The total correlation ratio of the training set is roughly 86.27%.

C.3 DATA PRE-PROCESSING

The image sizes of Colored MNIST and Corrupted CIFAR-10 are 28 × 28 and 32 × 32, respectively. We feed the original images into the model and do not use data-augmentation transformations during training or testing. We directly normalize the data from Colored MNIST and Corrupted CIFAR-10 with a mean of (0.5, 0.5, 0.5) and a standard deviation of (0.5, 0.5, 0.5). For the real-world datasets, in the training phase of Biased CelebA we first resize the images to 224 × 224 and then apply the RandomHorizontalFlip transformation. For the Biased NICO dataset, following most previous works (Zhang et al., 2021a), we append the RandomHorizontalFlip, ColorJitter, and RandomGrayscale transformations after a RandomResizedCrop to 224 × 224 (see the sketch below). For both datasets, at test time we only resize the images. We normalize these real-world datasets with a mean of (0.485, 0.456, 0.406) and a standard deviation of (0.229, 0.224, 0.225).
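As a reference, here is a torchvision version of the pre-processing just described. It is a hedged sketch: the paper does not list the ColorJitter or RandomGrayscale parameters, so the values below are our assumptions.

```python
from torchvision import transforms

# Training-time pipeline for Biased NICO as described in C.3; the ColorJitter
# strengths and RandomGrayscale probability are our placeholders.
nico_train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4),
    transforms.RandomGrayscale(p=0.1),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])

# Test-time: resize only, then normalize.
test_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])
```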
C.4 TRAINING DETAILS

Our code is based on the PyTorch library. Following previous work (Hong & Yang, 2021), we use the four-layer convolutional neural network with kernel size 7 × 7 for the Colored MNIST dataset and ResNet-18 (He et al., 2016) for the Corrupted CIFAR-10, Biased CelebA, and Biased NICO datasets. For all methods and datasets, we do not load any additional pretrained weights, so that the models reflect pure debiasing capability. In the training phase, we use the Adam optimizer and a cosine-annealing learning-rate scheduler. For all datasets, the batch size is selected from {64, 128, 256}. Correspondingly, the learning rate is selected from {0.0001, 0.0005, 0.001, 0.005}, with the smaller values used for training the vanilla model. For all methods, including the reproduced comparison ones, we train the model for 200 epochs on Colored MNIST and Corrupted CIFAR-10, and for 50 and 100 epochs on Biased CelebA and Biased NICO, respectively.

χ2-model.

• For the first stage: we train the vanilla model for 1000 epochs with a learning rate of 1e-5 on Colored MNIST, 5e-3 on Corrupted CIFAR-10, 5e-5 on Biased CelebA, and 1e-3 on Biased NICO to extract the training dynamics. In practice, we design the Area Under Score (AUS) strategy to capture the training dynamics. All comparison methods rely on epoch-specific scores, and AUS applies to them as well; e.g., the Loss score is computed as the sum of all epoch-level losses. We generally use the ratio of identified BC samples as a hyperparameter; we find that a slightly larger BC ratio brings better results in our experiments, as detailed in Table 8. In addition, for the IAS importance-verification experiments in Table 1, the "step-wise" setting indicates that we apply uniformly higher sampling weights to BC samples and lower ones to BA samples. The unified weights are tied to the BA ratio ρ of the whole dataset, i.e., the weight on BC samples is ρ and on BA samples is 1 − ρ.

• For the second stage, the χ-branch metric learning objective: we first construct the data pools D∥ and D⊥ from the ranking. The BC identification threshold used to split these two pools can be adjusted to a suitable value without knowing the ground-truth BC ratio of the dataset. To observe the validity of the IASs and unify the presentation, in the main text we report results one level higher in {0.999, 0.995, 0.99, 0.95} than the dataset BC ratio, i.e., the threshold is 0.99 if the dataset BC ratio is 0.995. See subsection D.4 and Table 14 for more details. We construct different ratios of bias bags {Bγ, B1−γ} and mixed prototypes {pγ, p1−γ} by bootstrapped sampling of a batch containing almost the same number of BA and BC samples, using the first-stage χ-pattern scores (described in subsection 3.3); a sketch of this construction follows this list. The majority part takes all of its samples from that part of the batch; e.g., for a large γ where BC samples form the majority, Bγ contains all the BC samples in the batch, and the remaining 1 − γ fraction of BA samples is sampled uniformly from the batch. The mixed prototypes pγ and p1−γ are extracted and constructed analogously. The mixing ratios γ are taken from {0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}. As described in the paper, computing LCE(pγ, B1−γ) and LCE(p1−γ, Bγ) makes mixed prototypes of different ratios interact with BA or BC samples. In this process, we select the temperature τ of the metric-based prediction with mixed prototypes (as in Eq. 8) from {0.01, 0.05, 0.1}. On an NVIDIA RTX 3090 GPU, our model trains about 1.8× faster on average than LfF (Nam et al., 2020).
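The following is a minimal PyTorch sketch of the bias-bag bootstrapping and mixed-prototype construction just described. It is our own illustration under the assumption that the first stage has already split the embeddings into estimated-BC and estimated-BA pools; the function names are ours.

```python
import torch

def make_bias_bag(feats_bc, labels_bc, feats_ba, labels_ba, gamma,
                  bag_size=64, generator=None):
    """Bootstrap a bias bag B_gamma whose fraction of (estimated) BC samples
    is `gamma`; feats_* / labels_* are the two first-stage pools."""
    n_bc = int(round(gamma * bag_size))
    g = generator or torch.Generator().manual_seed(0)
    idx_bc = torch.randint(len(feats_bc), (n_bc,), generator=g)
    idx_ba = torch.randint(len(feats_ba), (bag_size - n_bc,), generator=g)
    feats = torch.cat([feats_bc[idx_bc], feats_ba[idx_ba]])
    labels = torch.cat([labels_bc[idx_bc], labels_ba[idx_ba]])
    return feats, labels

def class_prototypes(feats, labels, num_classes):
    """Mixed prototype p_{gamma,c}: per-class mean of bag embeddings (cf. Eq. 5)."""
    protos = torch.zeros(num_classes, feats.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = feats[mask].mean(dim=0)
    return protos
```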
D ADDITIONAL EXPERIMENTS

D.1 MORE OBSERVATIONS AND RESULTS IN THE FIRST STAGE

In the main text, Figure 4 showed the change of the posterior over the GT class and the bias class for four typical examples of BC samples, intermediate attribute samples, and BA samples. Here we show additional, statistically aggregated observations over the whole training set.

• As shown in Figure 8, in the two leftmost columns the vertical axis is the sample count and the horizontal axis is the training epoch. Each point on a curve shows how many samples the model at the current epoch predicts as the GT class, the Bias class, or Others. The first column shows predictions on BA samples, the second on BC samples. Even at the dataset level, the vanilla model always predicts BC samples as the Bias class first. This is consistent with our observation in the original paper; in fact, it is another interpretation of the right half of Figure 4.

• The two rightmost columns of Figure 8 present further dataset-level statistics; e.g., the third column shows the χ-shaped prediction pattern of BC samples over the whole dataset as training progresses, corresponding to the left half of Figure 4 in the paper. The last column shows the evolution of the loss; the loss on BC samples corresponds to the lower branch of the χ-shaped curve in the paper.

Further, we show additional BC sample identification results of the first stage over various ratios. In Table 9, we display their top-k accuracy, where k is set to the number of BC samples in the full training set, i.e., we measure how many ground-truth BC samples the top-k ranking contains. In addition, we present the average precision in Table 10. Moreover, we plot the PR curves of the various first-stage methods on the Colored MNIST and Biased NICO datasets in Figure 9 and Figure 10. The results show that our method maintains excellent performance.

D.2 RESULTS WITH ERROR BARS

We run our method and the comparison methods, such as vanilla training and Learning from Failure (LfF), multiple times and report error bars. We present the full results with 95% confidence intervals in Table 13 and standard deviations in Figure 11.

D.3 ABLATION STUDY OF THE χ-BRANCH METRIC LEARNING OBJECTIVE

To verify that the effectiveness of our method is indeed derived from the χ-branch metric learning objective, we first remove one of the mixed-prototype and bias-bag losses, denoted "−LCE(pγ, B1−γ)" in Table 8. This removes the metric-based push between the BA samples and the high-BC-ratio prototypes pγ. Next, we also drop the other branch of the prototype training, i.e., we attenuate the effect of most BC samples on the low-ratio mixed prototypes p1−γ. This reduces the debiasing capability derived from the general properties illustrated in Figure 5 of the original paper. The results show that our method with the full χ-branch objective is significantly better than either single branch at the 99.9%, 99.5%, and 99.0% ratios, and on par at 95.0%. Especially in the extreme regime where BC samples are rare, the χ-branch further improves model performance and addresses the debiasing problem comprehensively.

D.4 ROBUSTNESS OF THE χ2-MODEL WITH VARYING BC IDENTIFICATION THRESHOLDS

For the χ2-model, we use BC identification thresholds to split D∥ and D⊥. We show the influence of different thresholds in Table 14, where the vertical axis represents the ground-truth ratio of BA samples in the dataset and the horizontal axis represents the ratio of BA samples used as a hyperparameter in the χ2-model. From these results, we find that the model is only mildly affected by the choice of threshold.
Furthermore, the bias bags {Bγ, B1−γ} are constructed with the presence of IASs taken into account: through bootstrapped sampling, learning the BC identification threshold is already implicitly embedded in the first-stage χ-pattern scores.

E OVERALL ALGORITHM

Algorithm 1 shows the entire pseudo-code of this work.

F DISCUSSION ABOUT THE LIMITATIONS

In this paper, we adopt a new two-stage χ2-model. However, the first stage still requires training a long-epoch vanilla model as a weak bias-capturing mechanism. When two attributes have equal learning difficulty and jointly determine the target label, our method may encounter difficulties.

Algorithm 1 Training for the χ2-model

Require: Biased training data $D_{train} = \{(x_i, y_i)\}_{i=1}^{N}$.
1: First stage: χ-shape pattern.
2: Train a vanilla model θ on $D_{train}$ with the cross-entropy loss, as in Equation 1:
3:   $L_{CE} = \mathbb{E}_{(x_i, y_i) \sim D_{train}}\left[-\log \Pr(h_\theta(x_i) = y_i \mid x_i)\right]$.
4: Track the change over T epochs on the ground-truth label $y_i$ and the bias label $b_i(x_i, h_\theta)$:
5:   $L_{CE}(x_i) = \left( L^{gt}_{CE}(x_i) = \{-\log \Pr_t(y_i \mid x_i)\}_{t=1}^{T},\;\; L^{b}_{CE}(x_i) = \{-\log \Pr_t(b_i(x_i, h_\theta) \mid x_i)\}_{t=1}^{T} \right)$.
6: Capture BC samples with two exponential χ-shape functions:
7:   $\chi_{shape} = \left( p^{gt} = \{e^{-At}\}_{t=1}^{T},\;\; p^{b} = \{e^{At}\}_{t=1}^{T} \right)$.
8: Compute the ranking score $s(x_i)$ as the inner product over the two curves, as in Equation 3:
9:   $s(x_i) = \langle L_{CE}(x_i), \chi_{shape} \rangle = \langle L^{gt}_{CE}(x_i), p^{gt} \rangle + \langle L^{b}_{CE}(x_i), p^{b} \rangle = \sum_{t=1}^{T} -e^{-At} \log \Pr(h_{\theta_t}(x_i) = y_i \mid x_i) - e^{At} \log \Pr(h_{\theta_t}(x_i) = b_i(x_i, h_{\theta_t}) \mid x_i)$.
10: Second stage: χ-branch metric learning objective.
11: for each step do
12:   Construct multiple bias bags $B_\gamma$ with bootstrapping, as in Equation 4:
13:     $B_\gamma = \{(x_i, y_i) \mid \mathrm{NUM}(D_\perp) : \mathrm{NUM}(D_\parallel) = \gamma\}$,
14:     where the ratio of BC samples is γ.
15:   Build the prototype for class c based on $B_\gamma$, as in Equation 5:
16:     $p_{\gamma,c} = \frac{1}{K} \sum_{(x_i, y_i) \in B_\gamma} f_\phi(x_i) \cdot \mathbb{I}[y_i = c]$.
17:   Consider a high γ:
18:   for all samples $x_i \in B_{1-\gamma}$ do
19:     Classify with $p_\gamma$, as in Equation 6:
20:       $\Pr(y_i \mid x_i) = \frac{\exp(-d(f_\phi(x_i),\, p_{\gamma, y_i})/\tau)}{\sum_{c \in [C]} \exp(-d(f_\phi(x_i),\, p_{\gamma, c})/\tau)}$.
21:     Compute $L_{CE}(p_\gamma, B_{1-\gamma})$.
22:   end for
23:   for all samples $x_i \in B_\gamma$ do
24:     Classify with $p_{1-\gamma}$ as before.
25:     Compute $L_{CE}(p_{1-\gamma}, B_\gamma)$.
26:   end for
27:   Compute $\nabla_\phi\left[L_{CE}(p_\gamma, B_{1-\gamma}) + L_{CE}(p_{1-\gamma}, B_\gamma)\right]$.
28:   Update φ with $\nabla_\phi$.
29: end for
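To complement Algorithm 1, here is a minimal NumPy sketch of the first-stage ranking score s(x_i) (lines 2-9), assuming the per-epoch cross-entropy losses with respect to the ground-truth and bias labels have been logged during vanilla training. The single matching factor A and the function name are our assumptions.

```python
import numpy as np

def chi_pattern_score(loss_gt, loss_bias, A=0.1):
    """First-stage ranking score s(x_i) from Algorithm 1, line 9.

    loss_gt:   (T,) per-epoch CE loss w.r.t. the ground-truth label
    loss_bias: (T,) per-epoch CE loss w.r.t. the (predicted) bias label
    A:         matching factor shaping the chi-shape template curves
    Samples whose loss curves match the chi-shape templates score highest.
    """
    T = len(loss_gt)
    t = np.arange(1, T + 1)
    p_gt = np.exp(-A * t)  # decaying template paired with the GT-loss curve
    p_b = np.exp(A * t)    # growing template paired with the bias-loss curve
    return p_gt @ loss_gt + p_b @ loss_bias

# Rank the training set by score; the top of the ranking is treated as BC / IAS:
#   scores = np.array([chi_pattern_score(Lgt[i], Lb[i]) for i in range(N)])
#   order = np.argsort(-scores)
```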
1. What is the focus of the paper regarding debiasing, and what are the proposed methods? 2. What are the strengths and weaknesses of the paper, particularly in its comparisons with other works? 3. Do you have any questions regarding the x-shape function and its application in the proposed method? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

This paper proposes an x^2 model that interpolates between intra-class (attribute) samples for debiasing. The authors observe that the training dynamics follow an x-shape pattern, in which bias-conflicting samples are first predicted as a wrong class and only later switch to the target class. Based on this observation, they first fit the sample-wise losses to the x-shape pattern to rank the samples. Then, they construct multiple bias bags via bootstrapping based on the scores and build a prototype for each bag. Finally, they train the final model with the x-structured objective.

Strengths And Weaknesses

The proposed method achieves a meaningful performance improvement, but there are many missing comparisons. For example, plenty of debiasing approaches have been reported on the CelebA dataset [1-8], but only two of them are compared.

The authors state that (Pleiss et al., Zhao et al.) are related methods and adopt them as baselines, but there is no explanation of these works, even in the related work section.

How did you decide on the form of the x-shape function? There is no theoretical justification for this function, so it is hard to understand why ranking samples by s(x) is important. (p^b also does not seem to be well aligned with the red curves in Fig. 4.)

How do you choose A1 and A2 in the x-shape function? How sensitive are the results to the values of A1 and A2?

Instead of the x-shape function, it may be possible to apply the generalized cross-entropy loss introduced in LfF to split BA and BC samples. Also, as in [3], one could conduct clustering for each class, obtain multiple centroids, and then conduct stage 2. All these possible discussions for verifying the effectiveness of the x-shape function are missing.

[1] Daniel Levy et al. Large-scale methods for distributionally robust optimization. In NeurIPS 2020.
[2] Evan Liu et al. Just train twice: Improving group robustness without training group information. In ICML 2021.
[3] Seo et al. Unsupervised Learning of Debiased Representations with Pseudo-Attributes. In CVPR 2022.
[4] Sagawa et al. Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization. In ICLR 2020.
[5] Creager et al. Environment inference for invariant learning. In ICML 2021.
[6] Kim et al. Learning debiased classifier with biased committee. In NeurIPS 2022.
[7] Zhang et al. Correct-n-contrast: A contrastive approach for improving robustness to spurious correlations. arXiv 2021.
[8] Polina et al. Last layer re-training is sufficient for robustness to spurious correlations. arXiv 2022.

Clarity, Quality, Novelty And Reproducibility

The writing is not easy to follow throughout the paper and should be improved thoroughly. There are many missing or insufficient explanations. For example, which weights are used for BC and BA samples in the "step-wise" setting of Table 1? Where is the explanation of Table 2? How do you set \gamma in Eq. (4) for each dataset? In Table 4, what do entropy/confidence/loss mean? Does this table present an ablation study in which stage 2 is fixed and only stage 1 (the scoring method) is varied?

The x-shape pattern does not seem novel, as Nam et al. already showed different training dynamics for BC and BA samples.

No source code is attached, but pseudo-code is provided.
ICLR
Title
Simultaneous Classification and Out-of-Distribution Detection Using Deep Neural Networks

Abstract
Deep neural networks have achieved great success in classification tasks in recent years. However, one major obstacle on the path towards artificial intelligence is the inability of neural networks to accurately detect samples from novel class distributions; consequently, most existing classification algorithms assume that all classes are known prior to the training stage. In this work, we propose a methodology for training a neural network that allows it to efficiently detect out-of-distribution (OOD) examples without compromising much of its classification accuracy on test examples from known classes. Based on the Outlier Exposure (OE) technique, we propose a novel loss function that achieves state-of-the-art results in out-of-distribution detection with OE on both image and text classification tasks. Additionally, we experimentally show that the combination of our method with the Mahalanobis distance-based classifier achieves state-of-the-art results in the OOD detection task.

1 INTRODUCTION

Modern neural networks have recently achieved superior results in classification problems (Krizhevsky et al., 2012; He et al., 2016). However, most of the classification algorithms proposed so far assume that data generated from all class-conditional distributions are available during training, i.e., they make the closed-world assumption. In an open-world environment (Bendale & Boult, 2015), where examples from novel class distributions might appear at test time, it is necessary to build classifiers that are able to detect OOD examples while maintaining high classification accuracy on known class distributions. It is generally known that deep neural networks can make predictions for out-of-distribution (OOD) examples with high confidence (Nguyen et al., 2015). High-confidence predictions are undesirable since they constitute a symptom of overfitting (Szegedy et al., 2015). They also make the calibration of neural networks difficult. Guo et al. (2017) observed that modern neural networks are miscalibrated by experimentally showing that the average confidence of deep neural networks is usually much higher than their accuracy. A simple yet effective way to address the inability of neural networks to detect OOD examples is to train them to make highly uncertain predictions for examples generated by novel class distributions. To achieve this, Lee et al. (2018a) defined a loss function based on the Kullback-Leibler (KL) divergence to minimize the distance between the output distribution given by softmax and the uniform distribution for samples generated by a GAN (Goodfellow et al., 2014). Using a similar loss function, Hendrycks et al. (2019) showed that the technique of Outlier Exposure (OE), which draws anomalies from a real and diverse dataset, can outperform the GAN framework for OOD detection. Using the OE technique, our main contribution is threefold:

• We propose a novel loss function consisting of two regularization terms. The first regularization term minimizes the l1 norm between the output distribution given by softmax and the uniform distribution, which constitutes a distance metric between the two distributions (Deza & Deza, 2009). The second regularization term minimizes the Euclidean distance between the training accuracy of a DNN and its average confidence in its predictions on the training set.
• We experimentally show that the proposed loss function outperforms the previous work of Hendrycks et al. (2019) and achieves state-of-the-art results in OOD detection with OE on both image and text classification tasks.

• We experimentally show that our proposed method can be combined with the Mahalanobis distance-based classifier (Lee et al., 2018b). The combination of the two methods outperforms the original Mahalanobis method in all of our experiments and, to the best of our knowledge, achieves state-of-the-art results in the OOD detection task.

2 RELATED WORK

Yu et al. (2017) used the GAN framework (Goodfellow et al., 2014) to generate negative instances of seen classes by finding data points that are close to the training instances but are classified as fake by the discriminator. They then used those samples to train SVM classifiers to detect examples from unseen classes. Similarly, Kliger & Fleishman (2018) used a multi-class GAN framework to produce a generator that generates a mixture of nominal and novel data and a discriminator that performs simultaneous classification and novelty detection. Hendrycks & Gimpel (2017) proposed a baseline for detecting misclassified and out-of-distribution examples based on their observation that the prediction probability of out-of-distribution examples tends to be lower than that of correct examples. Recently, Corbière et al. (2019) also studied the problem of detecting overconfident incorrect predictions. A single-parameter variant of Platt scaling (Platt, 1999), temperature scaling, was proposed by Guo et al. (2017) for calibrating modern neural networks. For image data, building on the idea of Hendrycks & Gimpel (2017), Liang et al. (2018) observed that the simultaneous use of temperature scaling and small input perturbations can push the softmax scores of in- and out-of-distribution images further apart, making out-of-distribution images distinguishable. Lee et al. (2018a) generated GAN examples and forced the neural network to have low confidence in predicting their classes. Hendrycks et al. (2019) substituted the GAN samples with a real and diverse dataset using the technique of OE. Similar works (Malinin & Gales, 2018; Bevandić et al., 2018) also force the model to make uncertain predictions for OOD examples. Using an ensemble of classifiers, Lakshminarayanan et al. (2017) showed that their method was able to express higher uncertainty on OOD examples. Liu et al. (2018) provided theoretical guarantees for detecting OOD examples under the assumption that an upper bound on the fraction of OOD examples is available. Under the assumption that the pre-trained features of a softmax neural classifier can be fitted well by a class-conditional Gaussian distribution, Lee et al. (2018b) defined a confidence score using the Mahalanobis distance that can efficiently detect abnormal test samples. As also mentioned by Lee et al. (2018b), the Euclidean distance can be used as well, but less effectively. We refer to these methods as Distance-Based Post-Training (DBPT) methods for OOD detection.

3 SIMULTANEOUS CLASSIFICATION AND OUT-OF-DISTRIBUTION DETECTION

We consider the multi-class classification problem under the open-world assumption (Bendale & Boult, 2015), where samples from some classes are not available during training.
Our task is to design deep neural network classifiers that achieve high accuracy on examples generated by a learned probability distribution, called D_in, while at the same time effectively detecting, during the test phase, examples generated by a different probability distribution, called D_out. The examples generated by D_in are called in-distribution, while the examples generated by D_out are called out-of-distribution (OOD). Adopting the idea of Outlier Exposure (OE) proposed by Hendrycks et al. (2019), we train the neural network using training examples sampled from D_in and D_out^OE. During the test phase, we evaluate the OOD detection capability of the neural network using examples sampled from D_out^test, where D_out^OE and D_out^test are disjoint. Lee et al. (2018a) and Hendrycks et al. (2019) used the KL divergence to minimize the distance between the output distribution produced by softmax for OOD examples and the uniform distribution. In our work, we instead minimize the l1 norm between the two distributions, which has shown great success in machine learning applications. Viewing the knowledge of a model as the class-conditional distribution it produces over outputs given an input (Hinton et al., 2015), the entropy of this conditional distribution can serve as a regularizer that penalizes confident predictions of a neural network (Pereyra et al., 2017). In our approach, instead of directly penalizing the confident posterior predictions of the neural network, we force it to make predictions for examples generated by D_in with an average confidence close to its training accuracy. In this manner, we not only prevent the neural network from making overconfident predictions, but we also take its calibration into consideration (Guo et al., 2017).

Let us consider a classification model represented by a parametrized function f_θ, where θ stands for the vector of parameters of f_θ. Without loss of generality, assume that the cross-entropy loss function is used during training. We propose the following constrained optimization problem for finding θ:

$$
\begin{aligned}
\underset{\theta}{\text{minimize}} \quad & \mathbb{E}_{(x,y) \sim D_{in}}\left[L_{CE}(f_\theta(x), y)\right] \\
\text{subject to} \quad & \mathbb{E}_{x \sim D_{in}}\left[\max_{l=1,\dots,K} \frac{e^{z_l}}{\sum_{j=1}^{K} e^{z_j}}\right] = A_{tr} \\
& \max_{l=1,\dots,K} \frac{e^{z_l}}{\sum_{j=1}^{K} e^{z_j}} = \frac{1}{K}, \quad \forall\, x^{(i)} \sim D_{out}^{OE}
\end{aligned}
\qquad (1)
$$

where L_CE is the cross-entropy loss function and K is the number of classes available in D_in. Even though the constrained optimization problem (1) can be used for training various classification models, for clarity we limit our discussion to deep neural networks. Let z denote the vector representation of the example x^(i) in the feature space produced by the last layer of the deep neural network (DNN), and let A_tr be the training accuracy of the DNN. Observe that the optimization problem (1) minimizes the cross-entropy loss subject to two additional constraints. The first constraint forces the average maximum prediction probability produced by the softmax layer towards the training accuracy of the DNN for examples sampled from D_in, while the second constraint forces the maximum probability produced by the softmax layer towards 1/K for all examples sampled from the probability distribution D_out^OE.
In other words, the first constraint makes the DNN predict examples from known classes with an average confidence close to its training accuracy, while the second constraint forces the DNN to be highly uncertain about examples of classes it has never seen before, by producing a uniform distribution at the output for examples sampled from D_out^OE. It is also worth noting that the first constraint of (1) uses the training accuracy A_tr of the neural network, which is not available in general. To handle this issue, one can train the neural network by minimizing only the cross-entropy loss for a few epochs in order to compute A_tr, and then fine-tune it using (1). Because solving the nonconvex constrained optimization problem described by (1) is extremely difficult, let us introduce Lagrange multipliers (Boyd & Vandenberghe, 2004) and convert it into the following unconstrained optimization problem:

$$
\underset{\theta}{\text{minimize}} \;\;
\mathbb{E}_{(x,y) \sim D_{in}}\left[L_{CE}(f_\theta(x), y)\right]
+ \lambda_1 \left(A_{tr} - \mathbb{E}_{x \sim D_{in}}\left[\max_{l=1,\dots,K} \frac{e^{z_l}}{\sum_{j=1}^{K} e^{z_j}}\right]\right)
+ \lambda_2 \sum_{x^{(i)} \sim D_{out}^{OE}} \left(\frac{1}{K} - \max_{l=1,\dots,K} \frac{e^{z_l}}{\sum_{j=1}^{K} e^{z_j}}\right)
\qquad (2)
$$

It is worth mentioning that in (2) we used only one Lagrange multiplier for the second set of constraints in (1), instead of one per constraint, in order to avoid introducing a large number of hyperparameters into our loss function. This modification is a special case in which the Lagrange multiplier λ2 is shared across the individual constraints involving different x^(i) ∼ D_out^OE. Note also that, according to the original Lagrangian theory, one should optimize the objective function of (2) with respect to θ, λ1, and λ2, but, as commonly happens in machine learning applications, we approximate the original problem by choosing appropriate values for λ1 and λ2 through a validation technique (Hastie et al., 2001). After converting the constrained optimization problem (1) into the unconstrained problem (2), it is possible that at each training epoch the maximum prediction probability produced by softmax for an example drawn from D_out^OE changes, which makes it difficult for the DNN to produce a uniform output distribution for those examples. For instance, assume a K-class classifier with K = 3, and that at epoch t_n the maximum prediction probability produced by softmax for an example x^(i) ∼ D_out^OE corresponds to the second class. Then the last term of (2) will push the prediction probability of x^(i) for the second class towards 1/3 while concurrently increasing the prediction probabilities of the first class, the third class, or both. At the next epoch t_{n+1}, the prediction probability of the first or the third class may become the maximum of the three, and the last term of (2) will push that one towards 1/3, possibly increasing the probability of the second class again. It becomes obvious that this process makes it difficult for the DNN to produce a uniform output distribution for examples sampled from D_out^OE. However, this issue can be resolved by concurrently pushing all the prediction probabilities produced by the softmax layer for examples drawn from D_out^OE towards 1/K.
Additionally, in order to prevent the second and third terms of (2) from taking negative values during training, we convert (2) into the following:

$$
\underset{\theta}{\text{minimize}} \;\;
\mathbb{E}_{(x,y) \sim D_{in}}\left[L_{CE}(f_\theta(x), y)\right]
+ \lambda_1 \left(A_{tr} - \mathbb{E}_{x \sim D_{in}}\left[\max_{l=1,\dots,K} \frac{e^{z_l}}{\sum_{j=1}^{K} e^{z_j}}\right]\right)^2
+ \lambda_2 \sum_{x^{(i)} \sim D_{out}^{OE}} \sum_{l=1}^{K} \left|\frac{1}{K} - \frac{e^{z_l}}{\sum_{j=1}^{K} e^{z_j}}\right|
\qquad (3)
$$

The second term of the loss function described by (3) minimizes the squared distance between the training accuracy of the DNN and its average confidence in its predictions for examples drawn from D_in. Additionally, the third term of (3) minimizes the l1 norm between the uniform distribution and the distribution produced by the softmax layer for the examples drawn from D_out^OE. While converting the unconstrained optimization problem (2) into (3), several combinations of norms could be minimized. However, we found that minimizing the squared distance between the training accuracy of the DNN and its average confidence for examples drawn from D_in, together with the l1 norm between the uniform distribution and the softmax output distribution for examples drawn from D_out^OE, works best. This is because the l1 norm uniformly attracts all the prediction probabilities produced by softmax towards the desired value 1/K, better contributing to a uniform output distribution for examples drawn from D_out^OE. On the other hand, minimizing the squared distance between the training accuracy and the average confidence places more emphasis on attracting the maximum softmax probabilities that are further away from the average confidence of the DNN, helping the neural network better separate in- and out-of-distribution examples at low softmax probability levels.

4 EXPERIMENTS

During the experiments, we observed that starting training with a relatively high value of λ1 can slow down the learning process, since the neural network is constantly forced to make predictions with an average confidence close to its training accuracy. Therefore, it is recommended to split training into two stages: in the first stage, we train the DNN using only the cross-entropy loss until it reaches the desired level of accuracy A_tr; then, with A_tr fixed, we fine-tune it using the combined loss function given by (3), a sketch of which follows.
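The following is a minimal PyTorch sketch of the combined loss in Eq. (3). It is our own illustration, not the authors' released code; the function name and λ values are placeholders, and the OE term is averaged over the batch (the paper sums over OE samples; the difference is a constant factor that can be absorbed into λ2).

```python
import torch
import torch.nn.functional as F

def combined_loss(logits_in, targets_in, logits_oe, a_tr, lam1=0.05, lam2=0.05):
    """Sketch of the loss in Eq. (3); lam1/lam2 are tuned on validation data.

    logits_in:  (N, K) logits for in-distribution samples
    targets_in: (N,) their labels
    logits_oe:  (M, K) logits for outlier-exposure samples
    a_tr:       fixed training accuracy from the first training stage
    """
    K = logits_in.size(1)
    ce = F.cross_entropy(logits_in, targets_in)
    # Squared gap between training accuracy and mean max-softmax confidence.
    conf = F.softmax(logits_in, dim=1).max(dim=1).values.mean()
    calib = (a_tr - conf) ** 2
    # l1 distance between the OE softmax outputs and the uniform distribution,
    # averaged over the OE batch.
    probs_oe = F.softmax(logits_oe, dim=1)
    uniform = torch.full_like(probs_oe, 1.0 / K)
    oe_term = (probs_oe - uniform).abs().sum(dim=1).mean()
    return ce + lam1 * calib + lam2 * oe_term
```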
4.1 COMPARISON WITH STATE-OF-THE-ART IN OE

The experimental setting is as follows. We draw samples from D_in and train the DNN until it reaches the desired level of accuracy A_tr. Then, drawing samples from D_out^OE, we fine-tune it using the combined loss function given by (3). During the test phase, we evaluate the OOD detection capability of the DNN using examples from D_out^test, which is disjoint from D_out^OE. We demonstrate the effectiveness of our method on both image and text classification tasks by comparing it with the previous OOD detection with OE method proposed by Hendrycks et al. (2019). Part of our experiments is based on the publicly available code of Hendrycks et al. (2019).

4.1.1 EVALUATION METRICS

Our OOD detection method belongs to the class of Maximum Softmax Probability (MSP) detectors (Hendrycks & Gimpel, 2017), and we therefore adopt the evaluation metrics used in Hendrycks et al. (2019). Defining the OOD examples as the positive class and the in-distribution examples as the negative class, the performance metrics associated with OOD detection are the following:

• False Positive Rate at N% True Positive Rate (FPRN): This metric (Balntas et al., 2016; Kumar et al., 2016) measures the capability of an OOD detector when the maximum-softmax-probability threshold is set to a predefined value. More specifically, assuming N% of the OOD examples need to be detected during the test phase, we calculate a threshold in the softmax probability space and, given that threshold, measure the false positive rate, i.e., the fraction of in-distribution examples that are incorrectly classified as OOD.

• Area Under the Receiver Operating Characteristic curve (AUROC): In the out-of-distribution detection task, the ROC curve (Davis & Goadrich, 2006) summarizes the performance of an OOD detection method over varying threshold values.

• Area Under the Precision-Recall curve (AUPR): The AUPR (Manning & Schütze, 1999) is an important measure when there is a class imbalance between OOD and in-distribution examples in a dataset. As in Hendrycks et al. (2019), in our experiments the ratio of OOD to in-distribution test examples is 1:5.

4.1.2 IMAGE CLASSIFICATION EXPERIMENTS

Results. The results of the image classification experiments are shown in Table 1. In Figure 1, as an example, we plot the histogram of softmax probabilities using CIFAR-10 as D_in and Places365 as D_out^test. A detailed description of the image datasets used in the image OOD detection experiments is presented in Appendix A.2.

Network Architecture and Training Details. Similar to Hendrycks et al. (2019), for the CIFAR-10 and CIFAR-100 experiments we used 40-2 wide residual networks (WRNs) proposed by Zagoruyko & Komodakis (2016). We initially trained the WRN for 100 epochs using a cosine learning rate (Loshchilov & Hutter, 2017) with an initial value of 0.1, a dropout rate of 0.3, and a batch size of 128. As in Hendrycks et al. (2019), we also used Nesterov momentum and l2 weight regularization with a decay factor of 0.0005. For CIFAR-10, we fine-tuned the network for 15 epochs minimizing the loss function given by (3) with a learning rate of 0.001; for CIFAR-100 the corresponding number of epochs was 20. For the SVHN experiments, we trained 16-4 WRNs using a learning rate of 0.01, a dropout rate of 0.4, and a batch size of 128, and then fine-tuned the network for 5 epochs with a learning rate of 0.001. During fine-tuning, the 80 Million Tiny Images dataset was used as D_out^OE. The values of the hyperparameters λ1 and λ2 were chosen in the range [0.03, 0.09] using a separate validation dataset D_out^val, similar to Hendrycks et al. (2019). We note that D_out^val and D_out^test are disjoint. The data used for validation are presented in Appendix A.3.

Contribution of each regularization term. To demonstrate the effect of each regularization term of the loss function described by (3) on the OOD detection task, we ran additional image classification experiments, presented in Table 2. For these experiments, we incrementally added each regularization term of (3) to the loss function and measured its effect both on the OOD detection evaluation metrics and on the accuracy of the DNN on the test images of D_in. The results of these experiments validate that the combination of the two regularization terms of (3) not only improves the OOD detection performance of the DNN but also improves its accuracy on the test examples of D_in compared to the case where λ1 = 0. Table 2 also demonstrates that our method can significantly improve the OOD detection performance of the DNN compared to minimizing only the cross-entropy loss, at the expense of only an insignificant degradation of test accuracy on examples generated by D_in.
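As a reference for the primary metric of subsection 4.1.1, here is a minimal NumPy sketch of FPRN for an MSP-style detector. It is our own illustration; the sign convention (negated max-softmax so that higher scores mean "more OOD") and the function name are assumptions.

```python
import numpy as np

def fpr_at_tpr(scores_ood, scores_in, tpr=0.95):
    """FPR at N% TPR for an MSP-style detector.

    scores_ood / scores_in: 1-D score arrays for OOD (positive class) and
    in-distribution (negative class) test examples, where higher = more OOD,
    e.g., scores = -probs.max(axis=1) for softmax outputs `probs`.
    """
    # Threshold such that a `tpr` fraction of OOD examples score above it.
    thresh = np.quantile(scores_ood, 1.0 - tpr)
    # Fraction of in-distribution examples wrongly flagged as OOD.
    return float(np.mean(scores_in >= thresh))
```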
4.1.3 TEXT CLASSIFICATION EXPERIMENTS

Results. The results of the text classification experiments are shown in Table 3. A detailed description of the text datasets used in the NLP OOD detection experiments is presented in Appendix B.1.

Network Architecture and Training Details. For all text classification experiments, similar to Hendrycks et al. (2019), we train 2-layer GRUs (Cho et al., 2014) for 5 epochs with a learning rate of 0.01 and a batch size of 64, and then fine-tune them for 2 epochs using the loss function given by (3). During fine-tuning, the WikiText-2 dataset was used as D_out^OE. The values of the hyperparameters λ1 and λ2 were chosen in the range [0.04, 0.1] using a separate validation dataset as described in Appendix B.2.

4.2 A COMBINATION OF OE AND DBPT METHODS FOR OOD DETECTION

Lee et al. (2018b) proposed a DBPT method for OOD detection that can be applied to any pre-trained softmax neural classifier. Under the assumption that the pre-trained features of a DNN can be fitted well by a class-conditional Gaussian distribution, they defined a confidence score using the Mahalanobis distance with respect to the closest class-conditional distribution, whose parameters are the empirical class means and the tied empirical covariance of the training samples (Lee et al., 2018b). To further distinguish in- and out-of-distribution examples, they proposed two additional techniques. In the first, they add a small perturbation to each input example before processing it, to increase the confidence score of their method. In the second, they propose a feature ensemble in order to obtain a better calibrated score: it extracts all the hidden features of the DNN, computes their empirical class means and tied covariances, calculates the Mahalanobis distance-based confidence score for each layer, and finally computes a weighted average of these scores, training a logistic regression detector on validation samples to learn each layer's weight in the final confidence score. Since the Mahalanobis distance-based classifier proposed by Lee et al. (2018b) is a post-training method, it can be combined with our proposed loss function described by (3). More specifically, in our experiments we initially trained a DNN using the standard cross-entropy loss and then fine-tuned it with the proposed loss function given by (3). After fine-tuning, we applied the Mahalanobis distance-based classifier and compared the obtained results against those presented in Lee et al. (2018b). The simulation experiments on image classification tasks show that the combination of our method, which belongs to the OE "family" of methods, and the Mahalanobis distance-based classifier, which belongs to the "family" of DBPT methods, achieves state-of-the-art results in the OOD detection task. Part of our experiments is based on the publicly available code of Lee et al. (2018b).
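To make the DBPT confidence score concrete, here is a minimal NumPy sketch of the Mahalanobis score on penultimate-layer features, without the input-perturbation and feature-ensemble refinements described above. It is our own illustration of the score's definition, not the released implementation, and the function names are ours.

```python
import numpy as np

def fit_gaussians(features, labels, num_classes):
    """Empirical class means and tied covariance of penultimate features
    (Lee et al., 2018b). features: (N, d), labels: (N,)."""
    means = np.stack([features[labels == c].mean(axis=0)
                      for c in range(num_classes)])
    centered = features - means[labels]
    cov = centered.T @ centered / len(features)  # tied covariance
    return means, np.linalg.pinv(cov)            # return precision matrix

def mahalanobis_confidence(x_feat, means, prec):
    """Confidence score: negative Mahalanobis distance to the closest class."""
    diffs = means - x_feat                        # (C, d)
    d2 = np.einsum('cd,de,ce->c', diffs, prec, diffs)
    return -d2.min()
```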
4.2.1 EVALUATION METRICS

To demonstrate the adaptability of our method, in these experiments we adopt the OOD detection evaluation metrics used in Lee et al. (2018b).

• True Negative Rate at N% True Positive Rate (TNRN): This metric measures the capability of an OOD detector to detect true negative examples when the true positive rate is set to N% (95% in our experiments).

• Area Under the Receiver Operating Characteristic curve (AUROC): In the out-of-distribution detection task, the ROC curve (Davis & Goadrich, 2006) summarizes the performance of an OOD detection method over varying threshold values.

• Detection Accuracy (DAcc): As also mentioned in Lee et al. (2018b), this evaluation metric corresponds to the maximum classification accuracy over all possible thresholds ε:

$$1 - \min_{\epsilon}\left\{ P_{D_{in}}\!\left(q(x) \leq \epsilon\right) P(x \text{ is from } D_{in}) + P_{D_{out}}\!\left(q(x) > \epsilon\right) P(x \text{ is from } D_{out}) \right\},$$

where q(x) is a confidence score. Similar to Lee et al. (2018b), we assume that P(x is from D_in) = P(x is from D_out).

4.2.2 EXPERIMENTAL SETUP

To demonstrate the adaptability and effectiveness of our method, we adopt the experimental setup of Lee et al. (2018b). We train ResNet (He et al., 2016) with 34 layers using the CIFAR-10, CIFAR-100, and SVHN datasets as D_in. For the CIFAR experiments, SVHN, TinyImageNet (a sample of 10,000 images drawn from the ImageNet dataset), and LSUN are used as D_out^test. For the SVHN experiments, CIFAR-10, TinyImageNet, and LSUN are used as D_out^test. Both TinyImageNet and LSUN images are downsampled to 32 × 32. Similar to Lee et al. (2018b), for the Mahalanobis distance-based classifier we train the ResNet model for 200 epochs with batch size 128 by minimizing the cross-entropy loss using SGD with momentum 0.9. The learning rate starts at 0.1 and is dropped by a factor of 10 at 50% and 75% of the training progress. Subsequently, we compute the Mahalanobis distance-based confidence score using both the input pre-processing and the feature ensemble techniques. The hyper-parameters that need to be tuned are the magnitude of the noise added to each test input example and the layer indices for the feature ensemble. Similar to Lee et al. (2018b), both are tuned using a separate validation dataset consisting of both in- and out-of-distribution data. Since the Mahalanobis distance-based classifier belongs to the "family" of DBPT methods for OOD detection, it can be combined with our proposed method. More specifically, we initially train the 34-layer ResNet model for 200 epochs using exactly the same training details as above. Subsequently, we fine-tune the network with the proposed loss function described by (3), using 80 Million Tiny Images as D_out^OE. During fine-tuning, we use SGD with momentum 0.9 and a cosine learning rate (Loshchilov & Hutter, 2017) with an initial value of 0.001, using a batch size of 128 for data sampled from D_in and a batch size of 256 for data sampled from D_out^OE. For the CIFAR-10 and CIFAR-100 experiments, we fine-tuned the network for 30 and 20 epochs respectively, while for SVHN the corresponding number of epochs was 5. The values of the hyperparameters λ1 and λ2 were chosen using a separate validation dataset consisting of both in- and out-of-distribution images, similar to Lee et al. (2018b). The results are shown in Table 4.
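As a reference, the DAcc metric defined in subsection 4.2.1 can be computed as follows. This is our own minimal NumPy sketch under the equal-prior assumption P(x is from D_in) = P(x is from D_out) = 0.5; the function name is hypothetical.

```python
import numpy as np

def detection_accuracy(q_in, q_out):
    """Detection accuracy (DAcc): maximum accuracy over all thresholds,
    with equal priors as in Lee et al. (2018b).
    q_in / q_out are confidence scores (higher = more in-distribution)."""
    thresholds = np.unique(np.concatenate([q_in, q_out]))
    best = 0.0
    for eps in thresholds:
        # Error = in-distribution scored at/below eps + OOD scored above eps.
        err = 0.5 * np.mean(q_in <= eps) + 0.5 * np.mean(q_out > eps)
        best = max(best, 1.0 - err)
    return best
```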
Discussion. The results in Table 4 demonstrate the effectiveness of our method when combined with the Mahalanobis distance-based classifier, since the combination outperforms the original version of the Mahalanobis method proposed by Lee et al. (2018b) in all of our experiments. This further validates the contribution of our technique: it not only achieves state-of-the-art results in OOD detection with OE, but it can additionally be combined with DBPT methods such as the Mahalanobis distance-based classifier to achieve state-of-the-art results in the OOD detection task. The superior performance of our method when combined with the Mahalanobis distance-based classifier can be explained by the fact that the latter extracts the learned features from the layer(s) of the DNN and subsequently uses those features to define a Mahalanobis distance-based confidence score. The simulation results presented in Table 1 and Table 3 showed that our method can teach the DNN to learn feature representations that further distinguish in- and out-of-distribution data, and therefore the combination of the two methods improves the OOD detection capability of a DNN.

5 CONCLUSION

In this paper, we proposed a method for simultaneous classification and out-of-distribution detection. The proposed loss function includes two regularization terms: the first minimizes the l1 norm between the output distribution of the softmax layer of a DNN and the uniform distribution, while the second minimizes the Euclidean distance between the training accuracy of a DNN and its average confidence in its predictions on the training set. Experimental results showed that the proposed loss function achieves state-of-the-art results in OOD detection with OE (Hendrycks et al., 2019) on both image and text classification tasks. Additionally, we experimentally showed that our method can be combined with DBPT methods for OOD detection, such as the Mahalanobis distance-based classifier (Lee et al., 2018b), and achieves state-of-the-art results in the OOD detection task.

A EXPANDED IMAGE OOD DETECTION RESULTS AND DATASETS USED FOR COMPARISON WITH STATE-OF-THE-ART IN OE

A.1 IMAGE OOD DETECTION RESULTS

A.2 D_in, D_out^OE, AND D_out^test FOR IMAGE EXPERIMENTS

SVHN: The Street View House Numbers (SVHN) dataset (Netzer et al., 2011) consists of 32 × 32 color images, of which 604,388 are used for training and 26,032 for testing. The dataset has 10 classes and was collected from real Google Street View images. Similar to Hendrycks et al. (2019), we rescale the pixels of the images to [0, 1].

CIFAR 10: This dataset (Krizhevsky & Hinton, 2009) contains 10 classes and consists of 60,000 32 × 32 color images, of which 50,000 belong to the training set and 10,000 to the test set. Before training, we standardize the images per channel, similar to Hendrycks et al. (2019).

CIFAR 100: This dataset (Krizhevsky & Hinton, 2009) consists of 20 distinct superclasses, each containing 5 different classes, for a total of 100 classes. The dataset contains 60,000 images and we use the standard 50,000/10,000 train/test split. Before training, we standardize the images per channel, similar to Hendrycks et al. (2019).

80 Million Tiny Images: The 80 Million Tiny Images dataset (Torralba et al., 2008) was used in our experiments exclusively to represent D_out^OE. It consists of 32 × 32 color images collected from the Internet.
Similar to Hendrycks et al. (2019), in order to make sure that D_out^OE and D_out^test are disjoint, we removed all the images of the dataset that appear in the CIFAR 10 and CIFAR 100 datasets.

Places365: The Places365 dataset, introduced by Zhou et al. (2018), was used in our experiments exclusively to represent D_out^test. It consists of millions of photographs of scenes.

Gaussian: A synthetic image dataset created by i.i.d. sampling from an isotropic Gaussian distribution.

Bernoulli: A synthetic image dataset created by sampling from a Bernoulli distribution.

Blobs: A synthetic dataset of images with definite edges.

Icons-50: This dataset, introduced by Hendrycks & Dietterich (2018), consists of 10,000 images belonging to 50 classes of icons. As part of preprocessing, we removed the class "Number" in order to make it disjoint from the SVHN dataset.

Textures: This dataset contains 5,640 textural images (Cimpoi et al., 2014).

LSUN: Consists of around 1 million large-scale images of scenes (Yu et al., 2015).

Rademacher: A synthetic image dataset created by sampling from a symmetric Rademacher distribution.

A.3 VALIDATION DATA FOR IMAGE EXPERIMENTS

Uniform Noise: A synthetic image dataset where each pixel is sampled from U[0, 1] or U[−1, 1], depending on the input space of the classifier.

Arithmetic Mean: A synthetic image dataset created by randomly sampling a pair of in-distribution images and taking their pixelwise arithmetic mean.

Geometric Mean: A synthetic image dataset created by randomly sampling a pair of in-distribution images and taking their pixelwise geometric mean.

Jigsaw: A synthetic image dataset created by partitioning an image sampled from D_in into 16 equally sized patches and subsequently permuting those patches.

Speckle Noised: A synthetic image dataset created by applying speckle noise to images sampled from D_in.

Inverted Images: A synthetic image dataset created by inverting the color channels of images sampled from D_in.

RGB Ghosted: A synthetic image dataset created by shifting and reordering the color channels of images sampled from D_in.

B EXPANDED TEXT OOD DETECTION RESULTS AND DATASETS USED FOR COMPARISON WITH STATE-OF-THE-ART IN OE

B.1 D_in, D_out^OE, AND D_out^test FOR NLP EXPERIMENTS

20 Newsgroups: This dataset contains 20 different newsgroups, each corresponding to a specific topic. It contains around 19,000 examples and we used the standard 60/40 train/test split.

TREC: A question classification dataset containing around 6,000 examples from 50 different classes. Similar to Hendrycks et al. (2019), we used 500 examples for the test phase and the rest for training.

SST: The Stanford Sentiment Treebank (Socher et al., 2013) is a binary classification dataset for sentiment prediction of movie reviews containing around 10,000 examples.

WikiText-2: This dataset contains over 2 million articles from Wikipedia and is used exclusively as D_out^OE in our experiments. We used the same preprocessing as Hendrycks et al. (2019) in order to have a valid comparison.

SNLI: The Stanford Natural Language Inference (SNLI) corpus is a collection of 570,000 human-written English sentence pairs (Bowman et al., 2015).

IMDB: A sentiment classification dataset containing movie reviews.

Multi30K: A dataset of English and German descriptions of images (Elliott et al., 2016). For our experiments, only the English descriptions were used.

WMT16: A dataset used for machine translation tasks. For our experiments, only the English part of the test set was used.
Yelp: A dataset containing reviews of users for businesses on Yelp.

EWT: The English Web Treebank (EWT) consists of 5 different datasets: weblogs (EWT-W), newsgroups (EWT-N), emails (EWT-E), reviews (EWT-R) and questions-answers (EWT-A).

B.2 VALIDATION DATA FOR NLP EXPERIMENTS

The validation dataset D_out^val used for the NLP OOD detection experiments was constructed as follows. For each D_in dataset used, we used the other two in-distribution datasets as D_out^val. For instance, during the experiments where 20 Newsgroups represented D_in, we used TREC and SST as D_out^val, making sure that D_out^val and D_out^test are disjoint.

B.3 TEXT OOD DETECTION RESULTS

C TRAINING DETAILS FOR THE EXPERIMENTAL RESULTS FOR COMPARISON WITH THE MAHALANOBIS DISTANCE-BASED CLASSIFIER

During fine-tuning with our proposed loss function given by (3), we used the training details presented in Table 7. The values of the hyper-parameters λ1 and λ2 were chosen using a separate validation dataset consisting of both in- and out-of-distribution images, similar to Lee et al. (2018b).
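As a concrete illustration of this selection procedure, the sketch below picks λ1 and λ2 by validation AUROC over a small grid. It is a minimal sketch under stated assumptions: the helpers fine_tune and max_softmax_scores, as well as the grid values, are hypothetical placeholders and are not taken from any released code.

import itertools
import numpy as np
from sklearn.metrics import roc_auc_score

def select_lambdas(base_model, val_in_loader, val_out_loader, grid=(0.03, 0.06, 0.09)):
    """Grid-search lambda1, lambda2 by AUROC on a held-out validation OOD set."""
    best = None
    for lam1, lam2 in itertools.product(grid, grid):
        # Hypothetical helper: returns a freshly fine-tuned copy of base_model.
        model = fine_tune(base_model, lam1, lam2)
        s_in = max_softmax_scores(model, val_in_loader)    # max softmax prob per in-dist. example
        s_out = max_softmax_scores(model, val_out_loader)  # same for validation OOD examples
        # OOD is the positive class; lower max-softmax should indicate OOD,
        # so the detector is scored with the negated confidence.
        y = np.concatenate([np.zeros(len(s_in)), np.ones(len(s_out))])
        auroc = roc_auc_score(y, -np.concatenate([s_in, s_out]))
        if best is None or auroc > best[0]:
            best = (auroc, lam1, lam2)
    return best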
1. What are the proposed modifications to the Outlier Exposure (OE) technique?
2. How do the experimental results demonstrate the effectiveness of the proposed method compared to standard OOD benchmarks?
3. Why does the reviewer suggest comparing the proposed method with alternative methods such as MD and temperature scaling?
4. What is the purpose of including an ablation study in the paper?
5. How do the lambda1 and lambda2 hyperparameters impact model performance?
6. How sensitive is model performance to the setting of the lambda hyperparameters?
7. Have the authors evaluated the method's performance in detecting adversarial attacks?
Review
Review This paper proposes to tackle the problems of out-of-distribution (OOD) detection and model calibration by adapting the loss function of the Outlier Exposure (OE) technique [1]. In OE, model softmax outputs are encouraged to be uniform for OOD samples, which is enforced through the use of a KL divergence loss function. The first proposed modification in this paper is to replace the KL divergence term with an L1 penalty. The second change is the addition of an L2 penalty between the maximum softmax probability and the model accuracy. Experimental results demonstrate that adding these two components increases performance over OE on standard OOD benchmarks for both vision and text domains, and also improves model calibration.

Although this paper presents some good quantitative results, I tend towards rejection in its current state. This is mainly due to the limited comparison to alternative methods and the lack of an ablation study. If these were addressed, I would consider increasing my score.

Things to improve the paper:
1) Currently, one of the most commonly used benchmark methods for OOD detection is the Mahalanobis distance-based confidence score (MD) [2], which, as far as I am aware, is state-of-the-art among published works. The authors claim that they do not compare to this work because it is a post-training method, and, presumably, the techniques should be doubly effective when combined. However, we do not have any proof that this is actually the case. Therefore, I think it is important to verify that the two techniques are indeed compatible; if not, then direct comparison with MD would still be necessary.
2) In the case of confidence calibration, there is no comparison made with other calibration techniques, such as temperature scaling [3]. I think it would be good to include these for reference.
3) Since two distinct components are being added to the loss function, I think it is important to include an ablation study to identify how much each component contributes to improvements in OOD detection and confidence calibration.

Minor things to improve the paper that did not impact the score:
4) With regard to the confidence calibration loss, there is similar work by [4], which also optimizes the output of the model to make sure confidence predictions are close to the true accuracy. It may be worth citing if you think it is relevant.

Additional questions:
5) How are lambda1 and lambda2 tuned? I could not find this information in the paper.
6) How sensitive is model performance to the setting of the lambda hyperparameters? It would be nice to see a plot of lambda versus the OOD detection metrics.
7) Have you evaluated how this method performs at detecting adversarial attacks? I do not think the paper will suffer without these results, but they are certainly relevant and of interest to practitioners in this area.

References:
[1] Hendrycks, Dan, Mantas Mazeika, and Thomas G. Dietterich. "Deep anomaly detection with outlier exposure." ICLR (2019).
[2] Lee, Kimin, Kibok Lee, Honglak Lee, and Jinwoo Shin. "A simple unified framework for detecting out-of-distribution samples and adversarial attacks." NeurIPS (2018).
[3] Guo, Chuan, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. "On calibration of modern neural networks." ICML (2017).
[4] Corbière, Charles, Nicolas Thome, Avner Bar-Hen, Matthieu Cord, and Patrick Pérez. "Addressing Failure Prediction by Learning Model Confidence."
NeurIPS (2019).

### Post-Rebuttal Comments ###

I would like to thank the authors for their hard work during the rebuttal period. I think the current version of the paper is much improved over the previous version. The choice to remove the claims about calibration definitely improves the focus of the paper. The addition of the Mahalanobis distance experiments and the ablation study also significantly strengthen the paper. However, as the other reviewers have pointed out, the novelty of the paper is quite limited, since the majority of the gains come from simply swapping the KL-divergence penalty with an L1 penalty. Despite its simplicity, this single change yields a significant improvement in performance for the OE algorithm, which is noteworthy. As a result, I will increase my score from a weak reject (3) to a very, very weak accept (more like a 5 than a 6).
ICLR
Title Simultaneous Classification and Out-of-Distribution Detection Using Deep Neural Networks

Abstract Deep neural networks have achieved great success in classification tasks during the last years. However, one major obstacle on the path towards artificial intelligence is the inability of neural networks to accurately detect samples from novel class distributions, and therefore most existing classification algorithms assume that all classes are known prior to the training stage. In this work, we propose a methodology for training a neural network that allows it to efficiently detect out-of-distribution (OOD) examples without compromising much of its classification accuracy on the test examples from known classes. Based on the Outlier Exposure (OE) technique, we propose a novel loss function that achieves state-of-the-art results in out-of-distribution detection with OE both on image and text classification tasks. Additionally, we experimentally show that the combination of our method with the Mahalanobis distance-based classifier achieves state-of-the-art results in the OOD detection task.

1 INTRODUCTION

Modern neural networks have recently achieved superior results in classification problems (Krizhevsky et al., 2012; He et al., 2016). However, most of the classification algorithms proposed so far make the assumption that data generated from all the class conditional distributions are available during training time, i.e., they make the closed-world assumption. In an open-world environment (Bendale & Boult, 2015), where examples from novel class distributions might appear during test time, it is necessary to build classifiers that are able to detect OOD examples while having high classification accuracy on known class distributions. It is generally known that deep neural networks can make predictions for out-of-distribution (OOD) examples with high confidence (Nguyen et al., 2015). High-confidence predictions are undesirable since they constitute a symptom of overfitting (Szegedy et al., 2015). They also make the calibration of neural networks difficult. Guo et al. (2017) observed that modern neural networks are miscalibrated by experimentally showing that the average confidence of deep neural networks is usually much higher than their accuracy. A simple yet effective method to address the inability of neural networks to detect OOD examples is to train them so that they make highly uncertain predictions for examples generated by novel class distributions. In order to achieve that, Lee et al. (2018a) defined a loss function based on the Kullback-Leibler (KL) divergence metric to minimize the distance between the output distribution given by softmax and the uniform distribution for samples generated by a GAN (Goodfellow et al., 2014). Using a similar loss function, Hendrycks et al. (2019) showed that the technique of Outlier Exposure (OE), which draws anomalies from a real and diverse dataset, can outperform the GAN framework for OOD detection. Using the OE technique, our main contribution is threefold:

• We propose a novel loss function consisting of two regularization terms. The first regularization term minimizes the l1 norm between the output distribution given by softmax and the uniform distribution, which constitutes a distance metric between the two distributions (Deza & Deza, 2009). The second regularization term minimizes the Euclidean distance between the training accuracy of a DNN and its average confidence in its predictions on the training set.
• We experimentally show that the proposed loss function outperforms the previous work of Hendrycks et al. (2019) and achieves state-of-the-art results in OOD detection with OE both on image and text classification tasks.
• We experimentally show that our proposed method can be combined with the Mahalanobis distance-based classifier (Lee et al., 2018b). The combination of the two methods outperforms the original Mahalanobis method in all of the experiments and, to the best of our knowledge, achieves state-of-the-art results in the OOD detection task.

2 RELATED WORK

Yu et al. (2017) used the GAN framework (Goodfellow et al., 2014) to generate negative instances of seen classes by finding data points that are close to the training instances but are classified as fake by the discriminator. Then, they used those samples to train SVM classifiers to detect examples from unseen classes. Similarly, Kliger & Fleishman (2018) used a multi-class GAN framework to produce a generator that generates a mixture of nominal and novel data and a discriminator that performs simultaneous classification and novelty detection. Hendrycks & Gimpel (2017) proposed a baseline for detecting misclassified and out-of-distribution examples based on their observation that the prediction probability of out-of-distribution examples tends to be lower than the prediction probability for correct examples. Recently, Corbière et al. (2019) also studied the problem of detecting overconfident incorrect predictions. A single-parameter variant of Platt scaling (Platt, 1999), temperature scaling, was proposed by Guo et al. (2017) for the calibration of modern neural networks. For image data, building on the idea of Hendrycks & Gimpel (2017), Liang et al. (2018) observed that the simultaneous use of temperature scaling and small perturbations at the input can push the softmax scores of in- and out-of-distribution images further apart, making the out-of-distribution images distinguishable. Lee et al. (2018a) generated GAN examples and forced the neural network to have lower confidence in predicting their classes. Hendrycks et al. (2019) substituted the GAN samples with a real and diverse dataset using the technique of OE. Similar works (Malinin & Gales, 2018; Bevandić et al., 2018) also force the model to make uncertain predictions for OOD examples. Using an ensemble of classifiers, Lakshminarayanan et al. (2017) showed that their method was able to express higher uncertainty on OOD examples. Liu et al. (2018) provided theoretical guarantees for detecting OOD examples under the assumption that an upper bound on the fraction of OOD examples is available. Under the assumption that the pre-trained features of a softmax neural classifier can be fitted well by a class-conditional Gaussian distribution, Lee et al. (2018b) defined a confidence score using the Mahalanobis distance that can efficiently detect abnormal test samples. As also mentioned by Lee et al. (2018b), the Euclidean distance can also be used, but with less efficiency. We prefer to call these methods Distance-Based Post-Training (DBPT) methods for OOD detection.

3 SIMULTANEOUS CLASSIFICATION AND OUT-OF-DISTRIBUTION DETECTION

We consider the multi-class classification problem under the open-world assumption (Bendale & Boult, 2015), where samples from some classes are not available during training.
Our task is to design deep neural network classifiers that can achieve high accuracy on examples generated by a learned probability distribution called D_in while, at the same time, effectively detecting examples generated by a different probability distribution called D_out during the test phase. The examples generated by D_in are called in-distribution, while the examples generated by D_out are called out-of-distribution (OOD). Adopting the idea of Outlier Exposure (OE) proposed by Hendrycks et al. (2019), we train the neural network using training examples sampled from D_in and D_out^OE. During the test phase, we evaluate the OOD detection capability of the neural network using examples sampled from D_out^test, where D_out^OE and D_out^test are disjoint. Lee et al. (2018a) and Hendrycks et al. (2019) used the KL divergence metric in order to minimize the distance between the output distribution produced by softmax for the OOD examples and the uniform distribution. In our work, we choose to minimize the l1 norm between the two distributions, which has shown great success in machine learning applications. Viewing the knowledge of a model as the class conditional distribution it produces over outputs given an input (Hinton et al., 2015), the entropy of this conditional distribution can be used as a regularization method that penalizes confident predictions of a neural network (Pereyra et al., 2017). In our approach, instead of penalizing the confident predictions of posterior probabilities yielded by a neural network, we force it to make predictions for examples generated by D_in with an average confidence close to its training accuracy. In such a manner, not only do we make the neural network avoid making overconfident predictions, but we also take into consideration its calibration (Guo et al., 2017).

Let us consider a classification model that can be represented by a parametrized function f_θ, where θ stands for the vector of parameters in f_θ. Without loss of generality, assume that the cross entropy loss function is used during training. We propose the following constrained optimization problem for finding θ:

minimize_θ   E_{(x,y)∼D_in} [ L_CE(f_θ(x), y) ]
subject to   E_{x∼D_in} [ max_{l=1,...,K} ( e^{z_l} / Σ_{j=1}^{K} e^{z_j} ) ] = A_tr
             max_{l=1,...,K} ( e^{z_l} / Σ_{j=1}^{K} e^{z_j} ) = 1/K,  ∀ x^(i) ∼ D_out^OE        (1)

where L_CE is the cross entropy loss function and K is the number of classes available in D_in. Even though the constrained optimization problem (1) can be used for training various classification models, for clarity we limit our discussion to deep neural networks. Let z denote the vector representation of the example x^(i) in the feature space produced by the last layer of the deep neural network (DNN), and let A_tr be the training accuracy of the DNN. Observe that the optimization problem (1) minimizes the cross entropy loss function subject to two additional constraints. The first constraint forces the average maximum prediction probability calculated by the softmax layer towards the training accuracy of the DNN for examples sampled from D_in, while the second constraint forces the maximum probability calculated by the softmax layer towards 1/K for all examples sampled from the probability distribution D_out^OE.
In other words, the first constraint makes the DNN predict examples from known classes with an average confidence close to its training accuracy, while the second constraint forces the DNN to be highly uncertain for examples of classes it has never seen before by producing a uniform distribution at the output for examples sampled from the probability distribution D_out^OE. It is also worth noting that the first constraint of (1) uses the training accuracy of the neural network A_tr, which is not available in general. To handle this issue, one can train a neural network by only minimizing the cross entropy loss function for a few epochs in order to calculate A_tr, and then fine-tune it using (1). Because solving the nonconvex constrained optimization problem described by (1) is extremely difficult, let us introduce Lagrange multipliers (Boyd & Vandenberghe, 2004) and convert it into the following unconstrained optimization problem:

minimize_θ   E_{(x,y)∼D_in} [ L_CE(f_θ(x), y) ]
           + λ1 ( A_tr − E_{x∼D_in} [ max_{l=1,...,K} ( e^{z_l} / Σ_{j=1}^{K} e^{z_j} ) ] )
           + λ2 Σ_{x^(i)∼D_out^OE} ( 1/K − max_{l=1,...,K} ( e^{z_l} / Σ_{j=1}^{K} e^{z_j} ) )        (2)

It is worth mentioning that in (2), we used only one Lagrange multiplier for the second set of constraints in (1) instead of one for each constraint, in order to avoid introducing a large number of hyperparameters to our loss function. This modification is a special case where we consider the Lagrange multiplier λ2 to be common for each individual constraint involving a different x^(i) ∼ D_out^OE. Note also that, according to the original Lagrangian theory, one should optimize the objective function of (2) with respect to θ, λ1, and λ2, but as commonly happens in machine learning applications, we approximate the original problem by calculating appropriate values for λ1 and λ2 through a validation technique (Hastie et al., 2001). After converting the constrained optimization problem (1) into an unconstrained optimization problem as described by (2), it is possible that at each training epoch the maximum prediction probability produced by softmax for each example drawn from D_out^OE changes, introducing difficulties in making the DNN produce a uniform distribution at the output for those examples. For instance, assume that we have a K-class classifier with K = 3, and at epoch t_n the maximum prediction probability produced by softmax for an example x^(i) ∼ D_out^OE corresponds to the second class. Then, the last term of (2) will push the prediction probability of example x^(i) for the second class towards 1/3 while concurrently increasing the prediction probabilities of the first class, the third class, or both. At the next epoch t_{n+1}, it is possible that the prediction probability of either the first or the third class becomes the maximum among the three, and hence the last term of (2) will push that one towards 1/3, possibly increasing the prediction probability of the second class again. It becomes obvious that this process introduces difficulties in making the DNN produce a uniform distribution at the output for examples sampled from D_out^OE. However, this issue can be resolved by concurrently pushing all the prediction probabilities produced by the softmax layer for examples drawn from D_out^OE towards 1/K.
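To make this difference concrete, the toy snippet below compares the max-only penalty of (2) with an all-coordinates l1 penalty on a single softmax vector. It is a minimal illustrative sketch with an invented example distribution, not part of any released code.

import numpy as np

# Toy softmax output for one OE example with K = 3 classes.
p = np.array([0.2, 0.7, 0.1])
K = len(p)

# Penalty in (2): only the current argmax coordinate is penalized,
# so the targeted coordinate can change from epoch to epoch.
penalty_max_only = 1.0 / K - p.max()          # = 1/3 - 0.7

# All-coordinates variant: every coordinate is pulled towards 1/K
# simultaneously, which removes the oscillation between coordinates.
penalty_all_l1 = np.abs(1.0 / K - p).sum()    # = |1/3-0.2| + |1/3-0.7| + |1/3-0.1|

print(penalty_max_only, penalty_all_l1)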
Additionally, in order to prevent the second and third terms of (2) from taking negative values during training, let us convert (2) into the following:

minimize_θ   E_{(x,y)∼D_in} [ L_CE(f_θ(x), y) ]
           + λ1 ( A_tr − E_{x∼D_in} [ max_{l=1,...,K} ( e^{z_l} / Σ_{j=1}^{K} e^{z_j} ) ] )^2
           + λ2 Σ_{x^(i)∼D_out^OE} Σ_{l=1}^{K} | 1/K − e^{z_l} / Σ_{j=1}^{K} e^{z_j} |        (3)

The second term of the loss function described by (3) minimizes the squared distance between the training accuracy of the DNN and the average confidence in its predictions for examples drawn from D_in. Additionally, the third term of (3) minimizes the l1 norm between the uniform distribution and the distribution produced by the softmax layer for the examples drawn from D_out^OE. While converting the unconstrained optimization problem (2) into (3), one could use several combinations of norms to minimize. However, we found that minimizing the squared distance between the training accuracy of the DNN and the average confidence in its predictions for examples drawn from D_in, together with the l1 norm between the uniform distribution and the distribution produced by the softmax layer for the examples drawn from D_out^OE, works best. This is because the l1 norm uniformly attracts all the prediction probabilities produced by softmax to the desired value 1/K, better contributing to producing a uniform distribution at the output of the DNN for the examples drawn from D_out^OE. On the other hand, minimizing the squared distance between the training accuracy of the DNN and the average confidence in its predictions for examples drawn from D_in places more emphasis on attracting the maximum softmax probabilities that are further away from the average confidence of the DNN, making the neural network better at separating in- and out-of-distribution examples at low softmax probability levels.

4 EXPERIMENTS

During the experiments, we observed that if we start training the DNN with a relatively high value of λ1, the learning process might slow down, since we constantly force the neural network to make predictions with an average confidence close to its training accuracy. Therefore, it is recommended to split the training of the algorithm into two stages: in the first stage, we train the DNN using only the cross entropy loss function until it reaches the desired level of accuracy A_tr, and then, using a fixed A_tr, we fine-tune it using the combined loss function given by (3).

4.1 COMPARISON WITH STATE-OF-THE-ART IN OE

The experimental setting is as follows. We draw samples from D_in and train the DNN until it reaches the desired level of accuracy A_tr. Then, drawing samples from D_out^OE, we fine-tune it using the combined loss function given by (3). During the test phase, we evaluate the OOD detection capability of the DNN using examples from D_out^test, which is disjoint from D_out^OE. We demonstrate the effectiveness of our method in both image and text classification tasks by comparing it with the previous OOD detection with OE method proposed by Hendrycks et al. (2019). A part of our experiments was based on the publicly available code of Hendrycks et al. (2019).

4.1.1 EVALUATION METRICS

Our OOD detection method belongs to the class of Maximum Softmax Probability (MSP) detectors (Hendrycks & Gimpel, 2017) and therefore, we adopt the evaluation metrics used in Hendrycks et al. (2019).
Defining the OOD examples as the positive class and the in-distribution examples as the negative class, the performance metrics associated with OOD detection are the following:

• False Positive Rate at N% True Positive Rate (FPRN): This performance metric (Balntas et al., 2016; Kumar et al., 2016) measures the capability of an OOD detector when the maximum softmax probability threshold is set to a predefined value. More specifically, assuming N% of OOD examples need to be detected during the test phase, we calculate a threshold in the softmax probability space and, given that threshold, we measure the false positive rate, i.e., the ratio of in-distribution examples that are incorrectly classified as OOD.
• Area Under the Receiver Operating Characteristic curve (AUROC): In the out-of-distribution detection task, the ROC curve (Davis & Goadrich, 2006) summarizes the performance of an OOD detection method for varying threshold values.
• Area Under the Precision-Recall curve (AUPR): The AUPR (Manning & Schütze, 1999) is an important measure when there exists a class imbalance between OOD and in-distribution examples in a dataset. As in Hendrycks et al. (2019), in our experiments the ratio of OOD to in-distribution test examples is 1:5.

4.1.2 IMAGE CLASSIFICATION EXPERIMENTS

Results. The results of the image classification experiments are shown in Table 1. In Figure 1, as an example, we plot the histogram of softmax probabilities using CIFAR-10 as D_in and Places365 as D_out^test. The detailed description of the image datasets used in the image OOD detection experiments is presented in Appendix A.2.

Network Architecture and Training Details. Similar to Hendrycks et al. (2019), for the CIFAR 10 and CIFAR 100 experiments we used 40-2 wide residual networks (WRNs) proposed by Zagoruyko & Komodakis (2016). We initially trained the WRN for 100 epochs using a cosine learning rate (Loshchilov & Hutter, 2017) with an initial value of 0.1, a dropout rate of 0.3, and a batch size of 128. As in Hendrycks et al. (2019), we also used Nesterov momentum and l2 weight regularization with a decay factor of 0.0005. For CIFAR 10, we fine-tuned the network for 15 epochs minimizing the loss function given by (3) using a learning rate of 0.001, while for CIFAR 100 the corresponding number of epochs was 20. For the SVHN experiments, we trained 16-4 WRNs using a learning rate of 0.01, a dropout rate of 0.4, and a batch size of 128. We then fine-tuned the network for 5 epochs using a learning rate of 0.001. During fine-tuning, the 80 Million Tiny Images dataset was used as D_out^OE. The values of the hyperparameters λ1 and λ2 were chosen in the range [0.03, 0.09] using a separate validation dataset D_out^val, similar to Hendrycks et al. (2019). We note that D_out^val and D_out^test are disjoint. The data used for validation are presented in Appendix A.3.

Contribution of each regularization term. To demonstrate the effect of each regularization term of the loss function described by (3) on the OOD detection task, we ran some additional image classification experiments, which are presented in Table 2. For these experiments, we incrementally added each regularization term to the loss function described by (3) and measured its effect both on the OOD detection evaluation metrics and on the accuracy of the DNN on the test images of D_in.
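To make the role of each term in these ablations concrete, the following is a minimal PyTorch-style sketch of the fine-tuning loss in (3). It is a sketch under stated assumptions rather than the released implementation: the names model, acc_train, lam1, and lam2 are placeholders, and the OE term is averaged over the batch instead of summed, a scaling choice that is absorbed into λ2.

import torch
import torch.nn.functional as F

def combined_loss(model, x_in, y_in, x_oe, acc_train, lam1, lam2):
    """Loss (3): cross entropy + confidence penalty on D_in + l1-to-uniform penalty on D_out^OE."""
    logits_in = model(x_in)
    ce = F.cross_entropy(logits_in, y_in)

    # Second term: squared distance between A_tr and the average max softmax confidence on D_in.
    conf = F.softmax(logits_in, dim=1).max(dim=1).values.mean()
    conf_term = (acc_train - conf) ** 2

    # Third term: l1 norm between softmax outputs on OE data and the uniform distribution,
    # averaged over the OE batch (the sum in (3), rescaled into lam2).
    p_oe = F.softmax(model(x_oe), dim=1)
    uniform = torch.full_like(p_oe, 1.0 / p_oe.size(1))
    l1_term = (p_oe - uniform).abs().sum(dim=1).mean()

    return ce + lam1 * conf_term + lam2 * l1_term

Setting lam1 = 0 or lam2 = 0 in such a sketch recovers the ablation variants whose results are reported in Table 2.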
The results of these experiments validate that the combination of the two regularization terms of (3) not only improves the OOD detection performance of the DNN but also improves its accuracy on the test examples of D_in compared to the case where λ1 = 0. Table 2 also demonstrates that our method can significantly improve the OOD detection performance of the DNN compared to the case where only the cross-entropy loss is minimized, at the expense of only an insignificant degradation in the test accuracy of the DNN on examples generated by D_in.

4.1.3 TEXT CLASSIFICATION EXPERIMENTS

Results. The results of the text classification experiments are shown in Table 3. The detailed description of the text datasets used in the NLP OOD detection experiments is presented in Appendix B.1.

Network Architecture and Training Details. For all text classification experiments, similar to Hendrycks et al. (2019), we train 2-layer GRUs (Cho et al., 2014) for 5 epochs with a learning rate of 0.01 and a batch size of 64, and then fine-tune them for 2 epochs using the loss function given by (3). During fine-tuning, the WikiText-2 dataset was used as D_out^OE. The values of the hyperparameters λ1 and λ2 were chosen in the range [0.04, 0.1] using a separate validation dataset as described in Appendix B.2.

4.2 A COMBINATION OF OE AND DBPT METHODS FOR OOD DETECTION

Lee et al. (2018b) proposed a DBPT method for OOD detection that can be applied to any pre-trained softmax neural classifier. Under the assumption that the pre-trained features of a DNN can be fitted well by a class-conditional Gaussian distribution, they defined a confidence score using the Mahalanobis distance with respect to the closest class-conditional probability distribution, where its parameters are chosen as the empirical class means and tied empirical covariance of the training samples (Lee et al., 2018b). To further distinguish in- and out-of-distribution examples, they proposed two additional techniques. In the first technique, they add a small perturbation to each input example before processing it in order to increase the confidence score of their method. In the second technique, they proposed a feature ensemble method in order to obtain a better calibrated score. The feature ensemble method extracts all the hidden features of the DNN and computes their empirical class means and tied covariances. Subsequently, it calculates the Mahalanobis distance-based confidence score for each layer and finally calculates the weighted average of these scores by training a logistic regression detector on validation samples in order to determine the weight of each layer in the final confidence score. Since the Mahalanobis distance-based classifier proposed by Lee et al. (2018b) is a post-training method, it can be combined with our proposed loss function described by (3). More specifically, in our experiments, we initially trained a DNN using the standard cross entropy loss function and then fine-tuned it with the proposed loss function given by (3). After fine-tuning, we applied the Mahalanobis distance-based classifier and compared the obtained results against the results presented in Lee et al. (2018b). The simulation experiments on image classification tasks show that the combination of our method, which belongs to the OE “family” of methods, with the Mahalanobis distance-based classifier, which belongs to the “family” of DBPT methods, achieves state-of-the-art results in the OOD detection task. A part of our experiments was based on the publicly available code of Lee et al.
(2018b).

4.2.1 EVALUATION METRICS

To demonstrate the adaptability of our method, in these experiments we adopt the OOD detection evaluation metrics used in Lee et al. (2018b).

• True Negative Rate at N% True Positive Rate (TNRN): This performance metric measures the capability of an OOD detector to detect true negative examples when the true positive rate is set to 95%.
• Area Under the Receiver Operating Characteristic curve (AUROC): In the out-of-distribution detection task, the ROC curve (Davis & Goadrich, 2006) summarizes the performance of an OOD detection method for varying threshold values.
• Detection Accuracy (DAcc): As also mentioned in Lee et al. (2018b), this evaluation metric corresponds to the maximum classification probability over all possible thresholds ε:

1 − min_ε { P_{D_in}(q(x) ≤ ε) P(x is from D_in) + P_{D_out}(q(x) > ε) P(x is from D_out) },

where q(x) is a confidence score. Similar to Lee et al. (2018b), we assume that P(x is from D_in) = P(x is from D_out).

4.2.2 EXPERIMENTAL SETUP

To demonstrate the adaptability and the effectiveness of our method, we adopt the experimental setup of Lee et al. (2018b). We train ResNet (He et al., 2016) with 34 layers using the CIFAR-10, CIFAR-100 and SVHN datasets as D_in. For the CIFAR experiments, SVHN, TinyImageNet (a sample of 10,000 images drawn from the ImageNet dataset) and LSUN are used as D_out^test. For the SVHN experiments, CIFAR-10, TinyImageNet and LSUN are used as D_out^test. Both TinyImageNet and LSUN images are downsampled to 32 × 32. Similar to Lee et al. (2018b), for the Mahalanobis distance-based classifier we train the ResNet model for 200 epochs with batch size 128 by minimizing the cross entropy loss using the SGD algorithm with momentum 0.9. The learning rate starts at 0.1 and is dropped by a factor of 10 at 50% and 75% of the training progress, respectively. Subsequently, we compute the Mahalanobis distance-based confidence score using both the input pre-processing and the feature ensemble techniques. The hyper-parameters that need to be tuned are the magnitude of the noise added to each test input example as well as the layer indexes for the feature ensemble. Similar to Lee et al. (2018b), both of them are tuned using a separate validation dataset consisting of both in- and out-of-distribution data. Since the Mahalanobis distance-based classifier belongs to the “family” of DBPT methods for OOD detection tasks, it can be combined with our proposed method. More specifically, we initially train the ResNet model with 34 layers for 200 epochs using exactly the same training details as mentioned above. Subsequently, we fine-tune the network with the proposed loss function described by (3) using the 80 Million Tiny Images as D_out^OE. During fine-tuning, we use the SGD algorithm with momentum 0.9 and a cosine learning rate (Loshchilov & Hutter, 2017) with an initial value of 0.001, using a batch size of 128 for data sampled from D_in and a batch size of 256 for data sampled from D_out^OE. For the CIFAR-10 and CIFAR-100 experiments, we fine-tuned the network for 30 and 20 epochs respectively, while for SVHN the corresponding number of epochs was 5. The values of the hyperparameters λ1 and λ2 were chosen using a separate validation dataset consisting of both in- and out-of-distribution images, similar to Lee et al. (2018b). The results are shown in Table 4.

Discussion.
The results in Table 4 demonstrate the effectiveness of our method when combined with the Mahalanobis distance-based classifier, since it outperforms the original version of the Mahalanobis method proposed by Lee et al. (2018b) in all of the experiments. This result further validates the contribution of our technique: it not only achieves state-of-the-art results in OOD detection with OE, but it can additionally be combined with DBPT methods such as the Mahalanobis distance-based classifier to achieve state-of-the-art results in the OOD detection task. The superior performance of our method when combined with the Mahalanobis distance-based classifier can be justified by the fact that the latter extracts the learned features from the layer(s) of the DNN and subsequently uses those features to define a confidence score based on the Mahalanobis distance. The simulation results presented in Table 1 and Table 3 showed that our method can teach the DNN to learn feature representations that further distinguish in- and out-of-distribution data; therefore, the combination of the two methods improves the OOD detection capability of a DNN.

5 CONCLUSION

In this paper, we proposed a method for simultaneous classification and out-of-distribution detection. The proposed loss function includes two regularization terms: the first minimizes the l1 norm between the output distribution of the softmax layer of a DNN and the uniform distribution, while the second minimizes the Euclidean distance between the training accuracy of a DNN and its average confidence in its predictions on the training set. Experimental results showed that the proposed loss function achieves state-of-the-art results in OOD detection with OE (Hendrycks et al., 2019) in both image and text classification tasks. Additionally, we experimentally showed that our method can be combined with DBPT methods for OOD detection, such as the Mahalanobis distance-based classifier (Lee et al., 2018b), and achieves state-of-the-art results in the OOD detection task.

A EXPANDED IMAGE OOD DETECTION RESULTS AND DATASETS USED FOR COMPARISON WITH STATE-OF-THE-ART IN OE

A.1 IMAGE OOD DETECTION RESULTS

A.2 D_in, D_out^OE AND D_out^test FOR IMAGE EXPERIMENTS

SVHN: The Street View House Number (SVHN) dataset (Netzer et al., 2011) consists of 32 × 32 color images, of which 604,388 are used for training and 26,032 for testing. The dataset has 10 classes and was collected from real Google Street View images. Similar to Hendrycks et al. (2019), we rescale the pixels of the images to be in [0, 1].

CIFAR 10: This dataset (Krizhevsky & Hinton, 2009) contains 10 classes and consists of 60,000 32 × 32 color images, of which 50,000 belong to the training set and 10,000 to the test set. Before training, we standardize the images per channel, similar to Hendrycks et al. (2019).

CIFAR 100: This dataset (Krizhevsky & Hinton, 2009) consists of 20 distinct superclasses, each of which contains 5 different classes, giving a total of 100 classes. The total number of images in the dataset is 60,000, and we use the standard 50,000/10,000 train/test split. Before training, we standardize the images per channel, similar to Hendrycks et al. (2019).

80 Million Tiny Images: The 80 Million Tiny Images dataset (Torralba et al., 2008) was exclusively used in our experiments to represent D_out^OE. It consists of 32 × 32 color images collected from the Internet. Similar to Hendrycks et al.
(2019), in order to make sure that D_out^OE and D_out^test are disjoint, we removed all the images of the dataset that appear in the CIFAR 10 and CIFAR 100 datasets.

Places365: The Places365 dataset, introduced by Zhou et al. (2018), was exclusively used in our experiments to represent D_out^test. It consists of millions of photographs of scenes.

Gaussian: A synthetic image dataset created by i.i.d. sampling from an isotropic Gaussian distribution.

Bernoulli: A synthetic image dataset created by sampling from a Bernoulli distribution.

Blobs: A synthetic dataset of images with definite edges.

Icons-50: This dataset, introduced by Hendrycks & Dietterich (2018), consists of 10,000 images belonging to 50 classes of icons. As part of preprocessing, we removed the class “Number” in order to make it disjoint from the SVHN dataset.

Textures: This dataset contains 5,640 textural images (Cimpoi et al., 2014).

LSUN: It consists of around 1 million large-scale images of scenes (Yu et al., 2015).

Rademacher: A synthetic image dataset created by sampling from a symmetric Rademacher distribution.

A.3 VALIDATION DATA FOR IMAGE EXPERIMENTS

Uniform Noise: A synthetic image dataset where each pixel is sampled from U[0, 1] or U[−1, 1], depending on the input space of the classifier.

Arithmetic Mean: A synthetic image dataset created by randomly sampling a pair of in-distribution images and taking their pixelwise arithmetic mean.

Geometric Mean: A synthetic image dataset created by randomly sampling a pair of in-distribution images and taking their pixelwise geometric mean.

Jigsaw: A synthetic image dataset created by partitioning an image sampled from D_in into 16 equally sized patches and subsequently permuting those patches.

Speckle Noised: A synthetic image dataset created by applying speckle noise to images sampled from D_in.

Inverted Images: A synthetic image dataset created by inverting the color channels of images sampled from D_in.

RGB Ghosted: A synthetic image dataset created by shifting and reordering the color channels of images sampled from D_in.

B EXPANDED TEXT OOD DETECTION RESULTS AND DATASETS USED FOR COMPARISON WITH STATE-OF-THE-ART IN OE

B.1 D_in, D_out^OE AND D_out^test FOR NLP EXPERIMENTS

20 Newsgroups: This dataset contains 20 different newsgroups, each corresponding to a specific topic. It contains around 19,000 examples, and we used the standard 60/40 train/test split.

TREC: A question classification dataset containing around 6,000 examples from 50 different classes. Similar to Hendrycks et al. (2019), we used 500 examples for the test phase and the rest for training.

SST: The Stanford Sentiment Treebank (Socher et al., 2013) is a binary classification dataset for sentiment prediction of movie reviews containing around 10,000 examples.

WikiText-2: This dataset contains over 2 million articles from Wikipedia and is exclusively used as D_out^OE in our experiments. We used the same preprocessing as in Hendrycks et al. (2019) in order to have a valid comparison.

SNLI: The Stanford Natural Language Inference (SNLI) corpus is a collection of 570,000 human-written English sentence pairs (Bowman et al., 2015).

IMDB: A sentiment classification dataset containing movie reviews.

Multi30K: A dataset of English and German descriptions of images (Elliott et al., 2016). For our experiments, only the English descriptions were used.

WMT16: A dataset used for machine translation tasks. For our experiments, only the English part of the test set was used.
Yelp: A dataset containing reviews of users for businesses on Yelp.

EWT: The English Web Treebank (EWT) consists of 5 different datasets: weblogs (EWT-W), newsgroups (EWT-N), emails (EWT-E), reviews (EWT-R) and questions-answers (EWT-A).

B.2 VALIDATION DATA FOR NLP EXPERIMENTS

The validation dataset D_out^val used for the NLP OOD detection experiments was constructed as follows. For each D_in dataset used, we used the other two in-distribution datasets as D_out^val. For instance, during the experiments where 20 Newsgroups represented D_in, we used TREC and SST as D_out^val, making sure that D_out^val and D_out^test are disjoint.

B.3 TEXT OOD DETECTION RESULTS

C TRAINING DETAILS FOR THE EXPERIMENTAL RESULTS FOR COMPARISON WITH THE MAHALANOBIS DISTANCE-BASED CLASSIFIER

During fine-tuning with our proposed loss function given by (3), we used the training details presented in Table 7. The values of the hyper-parameters λ1 and λ2 were chosen using a separate validation dataset consisting of both in- and out-of-distribution images, similar to Lee et al. (2018b).
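As a small illustration of the rotation described in B.2, the sketch below builds D_out^val for each in-distribution dataset. The dataset names are stand-ins for whatever loader objects are actually used, so this is a sketch under assumptions rather than the paper's code.

# Minimal sketch of the D_out^val rotation in B.2: for each D_in,
# the other two in-distribution datasets serve as validation OOD data.
IN_DISTRIBUTION = ["20newsgroups", "trec", "sst"]

def validation_ood_for(d_in: str) -> list[str]:
    assert d_in in IN_DISTRIBUTION
    return [d for d in IN_DISTRIBUTION if d != d_in]

print(validation_ood_for("20newsgroups"))  # ['trec', 'sst']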
1. What is the focus of the paper regarding Outlier Exposure?
2. What are the strengths and weaknesses of the proposed loss function in comparison to other methods?
3. How does the reviewer assess the contribution level and experimental support of the paper's claims?
Review
Review This work proposes a new loss function for training the network with Outlier Exposure (OE) [1], which leads to better OOD detection compared to the simple loss function that uses KL divergence as the regularizer for OOD detection. The new loss function is the cross entropy plus two more regularizers: 1) an average ECE (Expected Calibration Error) term to calibrate the model and 2) the absolute difference of the network output from $1/K$, where $K$ is the number of classes. The second regularizer keeps the softmax output of the network uniform for the OE samples. They show that adding these new regularizers to the cross-entropy loss function improves the out-of-distribution detection capability of networks more than the OE method proposed in [1] and the baseline proposed in [2].

Pros: The paper is written clearly and the motivation of the designed loss functions is explained well.

Cons:
1- The level of contribution is limited.
2- The variety of comparisons is not sufficient. The authors did not show how the approach performs compared to other OOD methods such as ODIN [3] and the method proposed in [4].
3- The experiments do not support the claims. First, the paper claims that KL is not a good regularizer for OOD detection because it is not a distance metric, but there is no experiment or justification in the paper that supports this claim. Second, the paper claims that the calibration term added to the loss function improves OOD detection as well as calibration of the network, but the experiments are not designed to show the impact of each regularizer term separately in improving the OOD detection rate. Figure 2 also does not convey any significant conclusion. It only shows that the new loss function makes the network more calibrated than the naive network, a phenomenon that was reported before in [1]. It would be better if the paper investigated the relation between calibration and OOD detection by designing more specific experiments for the calibration section.

Overall, I think the paper should be rejected, as the contributions are limited and are not aligned with the experiments.

References
[1] Hendrycks, Dan, Mantas Mazeika, and Thomas G. Dietterich. "Deep Anomaly Detection with Outlier Exposure." arXiv preprint arXiv:1812.04606 (2018).
[2] Hendrycks, Dan, and Kevin Gimpel. "A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks." ICLR (2017).
[3] Liang, Shiyu, Yixuan Li, and Rayadurgam Srikant. "Enhancing the reliability of out-of-distribution image detection in neural networks." arXiv preprint arXiv:1706.02690 (2017).
[4] Lee, Kimin, et al. "A simple unified framework for detecting out-of-distribution samples and adversarial attacks." Advances in Neural Information Processing Systems (2018).
ICLR
Title Simultaneous Classification and Out-of-Distribution Detection Using Deep Neural Networks

Abstract Deep neural networks have achieved great success in classification tasks during the last years. However, one major obstacle on the path towards artificial intelligence is the inability of neural networks to accurately detect samples from novel class distributions, and therefore most existing classification algorithms assume that all classes are known prior to the training stage. In this work, we propose a methodology for training a neural network that allows it to efficiently detect out-of-distribution (OOD) examples without compromising much of its classification accuracy on the test examples from known classes. Based on the Outlier Exposure (OE) technique, we propose a novel loss function that achieves state-of-the-art results in out-of-distribution detection with OE both on image and text classification tasks. Additionally, we experimentally show that the combination of our method with the Mahalanobis distance-based classifier achieves state-of-the-art results in the OOD detection task.

1 INTRODUCTION

Modern neural networks have recently achieved superior results in classification problems (Krizhevsky et al., 2012; He et al., 2016). However, most of the classification algorithms proposed so far make the assumption that data generated from all the class conditional distributions are available during training time, i.e., they make the closed-world assumption. In an open-world environment (Bendale & Boult, 2015), where examples from novel class distributions might appear during test time, it is necessary to build classifiers that are able to detect OOD examples while having high classification accuracy on known class distributions. It is generally known that deep neural networks can make predictions for out-of-distribution (OOD) examples with high confidence (Nguyen et al., 2015). High-confidence predictions are undesirable since they constitute a symptom of overfitting (Szegedy et al., 2015). They also make the calibration of neural networks difficult. Guo et al. (2017) observed that modern neural networks are miscalibrated by experimentally showing that the average confidence of deep neural networks is usually much higher than their accuracy. A simple yet effective method to address the inability of neural networks to detect OOD examples is to train them so that they make highly uncertain predictions for examples generated by novel class distributions. In order to achieve that, Lee et al. (2018a) defined a loss function based on the Kullback-Leibler (KL) divergence metric to minimize the distance between the output distribution given by softmax and the uniform distribution for samples generated by a GAN (Goodfellow et al., 2014). Using a similar loss function, Hendrycks et al. (2019) showed that the technique of Outlier Exposure (OE), which draws anomalies from a real and diverse dataset, can outperform the GAN framework for OOD detection. Using the OE technique, our main contribution is threefold:

• We propose a novel loss function consisting of two regularization terms. The first regularization term minimizes the l1 norm between the output distribution given by softmax and the uniform distribution, which constitutes a distance metric between the two distributions (Deza & Deza, 2009). The second regularization term minimizes the Euclidean distance between the training accuracy of a DNN and its average confidence in its predictions on the training set.
• We experimentally show that the proposed loss function outperforms the previous work of Hendrycks et al. (2019) and achieves state-of-the-art results in OOD detection with OE both on image and text classification tasks.
• We experimentally show that our proposed method can be combined with the Mahalanobis distance-based classifier (Lee et al., 2018b). The combination of the two methods outperforms the original Mahalanobis method in all of the experiments and, to the best of our knowledge, achieves state-of-the-art results in the OOD detection task.

2 RELATED WORK

Yu et al. (2017) used the GAN framework (Goodfellow et al., 2014) to generate negative instances of seen classes by finding data points that are close to the training instances but are classified as fake by the discriminator. Then, they used those samples to train SVM classifiers to detect examples from unseen classes. Similarly, Kliger & Fleishman (2018) used a multi-class GAN framework to produce a generator that generates a mixture of nominal and novel data and a discriminator that performs simultaneous classification and novelty detection. Hendrycks & Gimpel (2017) proposed a baseline for detecting misclassified and out-of-distribution examples based on their observation that the prediction probability of out-of-distribution examples tends to be lower than the prediction probability for correct examples. Recently, Corbière et al. (2019) also studied the problem of detecting overconfident incorrect predictions. A single-parameter variant of Platt scaling (Platt, 1999), temperature scaling, was proposed by Guo et al. (2017) for the calibration of modern neural networks. For image data, building on the idea of Hendrycks & Gimpel (2017), Liang et al. (2018) observed that the simultaneous use of temperature scaling and small perturbations at the input can push the softmax scores of in- and out-of-distribution images further apart, making the out-of-distribution images distinguishable. Lee et al. (2018a) generated GAN examples and forced the neural network to have lower confidence in predicting their classes. Hendrycks et al. (2019) substituted the GAN samples with a real and diverse dataset using the technique of OE. Similar works (Malinin & Gales, 2018; Bevandić et al., 2018) also force the model to make uncertain predictions for OOD examples. Using an ensemble of classifiers, Lakshminarayanan et al. (2017) showed that their method was able to express higher uncertainty on OOD examples. Liu et al. (2018) provided theoretical guarantees for detecting OOD examples under the assumption that an upper bound on the fraction of OOD examples is available. Under the assumption that the pre-trained features of a softmax neural classifier can be fitted well by a class-conditional Gaussian distribution, Lee et al. (2018b) defined a confidence score using the Mahalanobis distance that can efficiently detect abnormal test samples. As also mentioned by Lee et al. (2018b), the Euclidean distance can also be used, but with less efficiency. We prefer to call these methods Distance-Based Post-Training (DBPT) methods for OOD detection.

3 SIMULTANEOUS CLASSIFICATION AND OUT-OF-DISTRIBUTION DETECTION

We consider the multi-class classification problem under the open-world assumption (Bendale & Boult, 2015), where samples from some classes are not available during training.
Our task is to design deep neural network classifiers that can achieve high accuracy on examples generated by a learned probability distribution called D_in while, at the same time, effectively detecting examples generated by a different probability distribution called D_out during the test phase. The examples generated by D_in are called in-distribution, while the examples generated by D_out are called out-of-distribution (OOD). Adopting the idea of Outlier Exposure (OE) proposed by Hendrycks et al. (2019), we train the neural network using training examples sampled from D_in and D_out^OE. During the test phase, we evaluate the OOD detection capability of the neural network using examples sampled from D_out^test, where D_out^OE and D_out^test are disjoint. Lee et al. (2018a) and Hendrycks et al. (2019) used the KL divergence metric in order to minimize the distance between the output distribution produced by softmax for the OOD examples and the uniform distribution. In our work, we choose to minimize the l1 norm between the two distributions, which has shown great success in machine learning applications. Viewing the knowledge of a model as the class conditional distribution it produces over outputs given an input (Hinton et al., 2015), the entropy of this conditional distribution can be used as a regularization method that penalizes confident predictions of a neural network (Pereyra et al., 2017). In our approach, instead of penalizing the confident predictions of posterior probabilities yielded by a neural network, we force it to make predictions for examples generated by D_in with an average confidence close to its training accuracy. In such a manner, not only do we make the neural network avoid making overconfident predictions, but we also take into consideration its calibration (Guo et al., 2017).

Let us consider a classification model that can be represented by a parametrized function f_θ, where θ stands for the vector of parameters in f_θ. Without loss of generality, assume that the cross entropy loss function is used during training. We propose the following constrained optimization problem for finding θ:

minimize_θ   E_{(x,y)∼D_in} [ L_CE(f_θ(x), y) ]
subject to   E_{x∼D_in} [ max_{l=1,...,K} ( e^{z_l} / Σ_{j=1}^{K} e^{z_j} ) ] = A_tr
             max_{l=1,...,K} ( e^{z_l} / Σ_{j=1}^{K} e^{z_j} ) = 1/K,  ∀ x^(i) ∼ D_out^OE        (1)

where L_CE is the cross entropy loss function and K is the number of classes available in D_in. Even though the constrained optimization problem (1) can be used for training various classification models, for clarity we limit our discussion to deep neural networks. Let z denote the vector representation of the example x^(i) in the feature space produced by the last layer of the deep neural network (DNN), and let A_tr be the training accuracy of the DNN. Observe that the optimization problem (1) minimizes the cross entropy loss function subject to two additional constraints. The first constraint forces the average maximum prediction probability calculated by the softmax layer towards the training accuracy of the DNN for examples sampled from D_in, while the second constraint forces the maximum probability calculated by the softmax layer towards 1/K for all examples sampled from the probability distribution D_out^OE.
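The quantity in the first constraint can be estimated directly; the snippet below is a minimal sketch of such an estimate, assuming a placeholder classifier named model and a loader yielding (input, label) pairs.

import torch
import torch.nn.functional as F

@torch.no_grad()
def average_max_confidence(model, loader):
    """Estimate E_{x ~ D_in}[ max_l softmax(z)_l ], the quantity constrained to equal A_tr in (1)."""
    confs = []
    for x, _ in loader:
        probs = F.softmax(model(x), dim=1)
        confs.append(probs.max(dim=1).values)
    return torch.cat(confs).mean().item()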
In other words, the first constraint makes the DNN predict examples from known classes with an average confidence close to its training accuracy, while the second constraint forces the DNN to be highly uncertain for examples of classes it has never seen before by producing a uniform distribution at the output for examples sampled from the probability distribution D_out^OE. It is also worth noting that the first constraint of (1) uses the training accuracy of the neural network A_tr, which is not available in general. To handle this issue, one can train a neural network by only minimizing the cross entropy loss function for a few epochs in order to calculate A_tr, and then fine-tune it using (1). Because solving the nonconvex constrained optimization problem described by (1) is extremely difficult, let us introduce Lagrange multipliers (Boyd & Vandenberghe, 2004) and convert it into the following unconstrained optimization problem:

minimize_θ   E_{(x,y)∼D_in} [ L_CE(f_θ(x), y) ]
           + λ1 ( A_tr − E_{x∼D_in} [ max_{l=1,...,K} ( e^{z_l} / Σ_{j=1}^{K} e^{z_j} ) ] )
           + λ2 Σ_{x^(i)∼D_out^OE} ( 1/K − max_{l=1,...,K} ( e^{z_l} / Σ_{j=1}^{K} e^{z_j} ) )        (2)

It is worth mentioning that in (2), we used only one Lagrange multiplier for the second set of constraints in (1) instead of one for each constraint, in order to avoid introducing a large number of hyperparameters to our loss function. This modification is a special case where we consider the Lagrange multiplier λ2 to be common for each individual constraint involving a different x^(i) ∼ D_out^OE. Note also that, according to the original Lagrangian theory, one should optimize the objective function of (2) with respect to θ, λ1, and λ2, but as commonly happens in machine learning applications, we approximate the original problem by calculating appropriate values for λ1 and λ2 through a validation technique (Hastie et al., 2001). After converting the constrained optimization problem (1) into an unconstrained optimization problem as described by (2), it is possible that at each training epoch the maximum prediction probability produced by softmax for each example drawn from D_out^OE changes, introducing difficulties in making the DNN produce a uniform distribution at the output for those examples. For instance, assume that we have a K-class classifier with K = 3, and at epoch t_n the maximum prediction probability produced by softmax for an example x^(i) ∼ D_out^OE corresponds to the second class. Then, the last term of (2) will push the prediction probability of example x^(i) for the second class towards 1/3 while concurrently increasing the prediction probabilities of the first class, the third class, or both. At the next epoch t_{n+1}, it is possible that the prediction probability of either the first or the third class becomes the maximum among the three, and hence the last term of (2) will push that one towards 1/3, possibly increasing the prediction probability of the second class again. It becomes obvious that this process introduces difficulties in making the DNN produce a uniform distribution at the output for examples sampled from D_out^OE. However, this issue can be resolved by concurrently pushing all the prediction probabilities produced by the softmax layer for examples drawn from D_out^OE towards 1/K.
Additionally, in order to prevent the second and third terms of (2) from taking negative values during training, let us convert (2) into the following:

$$
\underset{\theta}{\text{minimize}} \quad \mathbb{E}_{(x,y)\sim D_{in}}\big[L_{CE}(f_\theta(x), y)\big]
+ \lambda_1 \Big( A_{tr} - \mathbb{E}_{x\sim D_{in}}\Big[\max_{l=1,\dots,K} \frac{e^{z_l}}{\sum_{j=1}^{K} e^{z_j}}\Big] \Big)^{2}
+ \lambda_2 \sum_{x^{(i)} \sim D_{out}^{OE}} \sum_{l=1}^{K} \Big| \frac{1}{K} - \frac{e^{z_l}}{\sum_{j=1}^{K} e^{z_j}} \Big|
\tag{3}
$$

The second term of the loss function (3) minimizes the squared distance between the training accuracy of the DNN and the average confidence of its predictions for examples drawn from $D_{in}$. The third term of (3) minimizes the $\ell_1$ norm between the uniform distribution and the distribution produced by the softmax layer for examples drawn from $D_{out}^{OE}$. While converting the unconstrained optimization problem (2) into (3), one could use several combinations of norms; however, we found that this particular combination works best. The $\ell_1$ norm uniformly attracts all the prediction probabilities produced by softmax towards the desired value $1/K$, which better contributes to producing a uniform output distribution for examples drawn from $D_{out}^{OE}$. On the other hand, minimizing the squared distance between the training accuracy and the average confidence puts more emphasis on attracting the maximum softmax probabilities that lie further away from the average confidence of the DNN, making the neural network better detect in- and out-of-distribution examples at low softmax probability levels.
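To make the objective concrete, here is a minimal PyTorch sketch of (3). It is our reconstruction, not the authors' released code; batch-level averages stand in for the expectations, and averaging rather than summing the outlier term is a scaling convention we assume (it can be absorbed into $\lambda_2$).

```python
import torch
import torch.nn.functional as F

def oe_calibration_loss(logits_in, labels_in, logits_oe, a_tr, lam1, lam2):
    """Sketch of the fine-tuning objective (3).

    logits_in: [N, K] logits for in-distribution examples
    labels_in: [N] ground-truth labels
    logits_oe: [M, K] logits for outlier-exposure examples
    a_tr:      training accuracy of the pre-trained network (a float)
    """
    k = logits_in.shape[1]

    # Standard cross-entropy term on in-distribution data.
    ce = F.cross_entropy(logits_in, labels_in)

    # Second term: squared distance between the training accuracy and
    # the (batch) average maximum softmax probability on D_in.
    conf = F.softmax(logits_in, dim=1).max(dim=1).values.mean()
    acc_term = (a_tr - conf) ** 2

    # Third term: l1 norm between the uniform distribution and the
    # softmax output for every outlier-exposure example (batch-averaged
    # here; the paper sums over the OE sample).
    probs_oe = F.softmax(logits_oe, dim=1)
    uniform_term = (probs_oe - 1.0 / k).abs().sum(dim=1).mean()

    return ce + lam1 * acc_term + lam2 * uniform_term
```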
4 EXPERIMENTS

During the experiments, we observed that if we start training the DNN with a relatively high value of $\lambda_1$, the learning process may slow down, since we constantly force the neural network to make predictions with an average confidence close to its training accuracy. We therefore recommend splitting the training into two stages: in the first stage, we train the DNN using only the cross-entropy loss function until it reaches the desired level of accuracy $A_{tr}$; then, with $A_{tr}$ fixed, we fine-tune it using the combined loss function given by (3).

4.1 COMPARISON WITH STATE-OF-THE-ART IN OE

The experimental setting is as follows. We draw samples from $D_{in}$ and train the DNN until it reaches the desired level of accuracy $A_{tr}$. Then, drawing samples from $D_{out}^{OE}$, we fine-tune it using the combined loss function given by (3). During the test phase, we evaluate the OOD detection capability of the DNN using examples from $D_{out}^{test}$, which is disjoint from $D_{out}^{OE}$. We demonstrate the effectiveness of our method on both image and text classification tasks by comparing it with the previous OOD detection with OE method proposed by Hendrycks et al. (2019). A part of our experiments was based on the publicly available code of Hendrycks et al. (2019).

4.1.1 EVALUATION METRICS

Our OOD detection method belongs to the class of Maximum Softmax Probability (MSP) detectors (Hendrycks & Gimpel, 2017), and we therefore adopt the evaluation metrics used in Hendrycks et al. (2019). Defining the OOD examples as the positive class and the in-distribution examples as the negative class, the performance metrics associated with OOD detection are the following:

• False Positive Rate at N% True Positive Rate (FPRN): This metric (Balntas et al., 2016; Kumar et al., 2016) measures the capability of an OOD detector when the maximum-softmax-probability threshold is set to a predefined value. More specifically, assuming N% of the OOD examples need to be detected during the test phase, we calculate a threshold in the softmax probability space and, given that threshold, measure the false positive rate, i.e., the ratio of in-distribution examples that are incorrectly classified as OOD.

• Area Under the Receiver Operating Characteristic curve (AUROC): In the OOD detection task, the ROC curve (Davis & Goadrich, 2006) summarizes the performance of an OOD detection method over varying threshold values.

• Area Under the Precision-Recall curve (AUPR): The AUPR (Manning & Schütze, 1999) is an important measure when there is a class imbalance between OOD and in-distribution examples in a dataset. As in Hendrycks et al. (2019), the ratio of OOD to in-distribution test examples in our experiments is 1:5.
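These quantities can be computed directly from MSP scores. A minimal sketch follows, assuming scikit-learn (the paper does not prescribe an implementation; average precision is used as the standard estimator of AUPR):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def ood_metrics(msp_in, msp_out, tpr_level=0.95):
    """Sketch of the metrics above from maximum softmax probabilities.
    OOD examples are the positive class; lower MSP should indicate OOD,
    so scores are negated."""
    scores = -np.concatenate([msp_in, msp_out])       # higher = more OOD
    labels = np.concatenate([np.zeros(len(msp_in)),   # 0 = in-distribution
                             np.ones(len(msp_out))])  # 1 = OOD

    auroc = roc_auc_score(labels, scores)
    aupr = average_precision_score(labels, scores)

    # FPR at N% TPR: pick the threshold detecting tpr_level of the OOD
    # examples, then count in-distribution examples wrongly flagged.
    thresh = np.quantile(-msp_out, 1.0 - tpr_level)
    fpr = float(np.mean(-msp_in >= thresh))
    return fpr, auroc, aupr
```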
4.1.2 IMAGE CLASSIFICATION EXPERIMENTS

Results. The results of the image classification experiments are shown in Table 1. In Figure 1, as an example, we plot the histogram of softmax probabilities using CIFAR-10 as $D_{in}$ and Places365 as $D_{out}^{test}$. A detailed description of the image datasets used in the image OOD detection experiments is given in Appendix A.2.

Network Architecture and Training Details. Similar to Hendrycks et al. (2019), for the CIFAR 10 and CIFAR 100 experiments we used 40-2 wide residual networks (WRNs) proposed by Zagoruyko & Komodakis (2016). We initially trained the WRN for 100 epochs using a cosine learning rate (Loshchilov & Hutter, 2017) with an initial value of 0.1, a dropout rate of 0.3, and a batch size of 128. As in Hendrycks et al. (2019), we also used Nesterov momentum and $\ell_2$ weight regularization with a decay factor of 0.0005. For CIFAR 10, we fine-tuned the network for 15 epochs minimizing the loss function given by (3) using a learning rate of 0.001, while for CIFAR 100 the corresponding number of epochs was 20. For the SVHN experiments, we trained 16-4 WRNs using a learning rate of 0.01, a dropout rate of 0.4, and a batch size of 128. We then fine-tuned the network for 5 epochs using a learning rate of 0.001. During fine-tuning, the 80 Million Tiny Images dataset was used as $D_{out}^{OE}$. The values of the hyperparameters $\lambda_1$ and $\lambda_2$ were chosen in the range [0.03, 0.09] using a separate validation dataset $D_{out}^{val}$, similar to Hendrycks et al. (2019). We note that $D_{out}^{val}$ and $D_{out}^{test}$ are disjoint. The data used for validation are presented in Appendix A.3.

Contribution of each regularization term. To demonstrate the effect of each regularization term of the loss function (3) on the OOD detection task, we ran additional image classification experiments, presented in Table 2. For these experiments, we incrementally added each regularization term of (3) and measured its effect both on the OOD detection evaluation metrics and on the accuracy of the DNN on the test images of $D_{in}$. The results of these experiments validate that the combination of the two regularization terms of (3) not only improves the OOD detection performance of the DNN but also improves its accuracy on the test examples of $D_{in}$ compared to the case where $\lambda_1 = 0$. Table 2 also demonstrates that our method can significantly improve the OOD detection performance of the DNN compared to the case where only the cross-entropy loss is minimized, at the expense of only an insignificant degradation in the test accuracy of the DNN on examples generated by $D_{in}$.

4.1.3 TEXT CLASSIFICATION EXPERIMENTS

Results. The results of the text classification experiments are shown in Table 3. A detailed description of the text datasets used in the NLP OOD detection experiments is given in Appendix B.1.

Network Architecture and Training Details. For all text classification experiments, similar to Hendrycks et al. (2019), we train 2-layer GRUs (Cho et al., 2014) for 5 epochs with a learning rate of 0.01 and a batch size of 64, and then fine-tune them for 2 epochs using the loss function given by (3). During fine-tuning, the WikiText-2 dataset was used as $D_{out}^{OE}$. The values of the hyperparameters $\lambda_1$ and $\lambda_2$ were chosen in the range [0.04, 0.1] using a separate validation dataset, as described in Appendix B.2.

4.2 A COMBINATION OF OE AND DBPT METHODS FOR OOD DETECTION

Lee et al. (2018b) proposed a DBPT method for OOD detection that can be applied to any pre-trained softmax neural classifier. Under the assumption that the pre-trained features of a DNN can be fitted well by a class-conditional Gaussian distribution, they defined the confidence score using the Mahalanobis distance with respect to the closest class-conditional probability distribution, where its parameters are chosen as the empirical class means and the tied empirical covariance of the training samples (Lee et al., 2018b). To further distinguish in- and out-of-distribution examples, they proposed two additional techniques. In the first technique, they add a small perturbation to each input example before processing it, in order to increase the confidence score of their method. In the second technique, they propose a feature-ensemble method in order to obtain a better-calibrated score. The feature-ensemble method extracts all the hidden features of the DNN and computes their empirical class means and tied covariances. It then calculates the Mahalanobis distance-based confidence score for each layer and finally computes a weighted average of these scores, training a logistic regression detector on validation samples to determine the weight of each layer in the final confidence score.

Since the Mahalanobis distance-based classifier proposed by Lee et al. (2018b) is a post-training method, it can be combined with our proposed loss function (3). More specifically, in our experiments we initially trained a DNN using the standard cross-entropy loss function and then fine-tuned it with the proposed loss function given by (3). After fine-tuning, we applied the Mahalanobis distance-based classifier and compared the obtained results against the results presented in Lee et al. (2018b). The simulation experiments on image classification tasks show that the combination of our method, which belongs to the OE "family" of methods, and the Mahalanobis distance-based classifier, which belongs to the "family" of DBPT methods, achieves state-of-the-art results in the OOD detection task. A part of our experiments was based on the publicly available code of Lee et al. (2018b).
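For concreteness, here is a minimal NumPy sketch of the basic Mahalanobis confidence score; it is our illustration and omits the input pre-processing and feature-ensemble refinements described above.

```python
import numpy as np

def mahalanobis_confidence(features_train, labels_train, feature_test, num_classes):
    """Sketch of the Mahalanobis confidence score of Lee et al. (2018b):
    fit class-conditional Gaussians with empirical class means and a tied
    covariance on pre-trained features, then score a test feature by its
    distance to the closest class."""
    means = np.stack([features_train[labels_train == c].mean(axis=0)
                      for c in range(num_classes)])

    # Tied empirical covariance over all classes.
    centered = features_train - means[labels_train]
    cov = centered.T @ centered / len(features_train)
    prec = np.linalg.pinv(cov)  # pseudo-inverse for numerical safety

    diffs = means - feature_test                    # [K, d]
    dists = np.einsum('kd,de,ke->k', diffs, prec, diffs)
    return -dists.min()                             # higher = more in-distribution
```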
4.2.1 EVALUATION METRICS

To demonstrate the adaptability of our method, in these experiments we adopt the OOD detection evaluation metrics used in Lee et al. (2018b).

• True Negative Rate at N% True Positive Rate (TNRN): This metric measures the capability of an OOD detector to detect true negative examples when the true positive rate is set to N% (95% in these experiments).

• Area Under the Receiver Operating Characteristic curve (AUROC): In the OOD detection task, the ROC curve (Davis & Goadrich, 2006) summarizes the performance of an OOD detection method over varying threshold values.

• Detection Accuracy (DAcc): As also mentioned in Lee et al. (2018b), this evaluation metric corresponds to the maximum classification probability over all possible thresholds $\epsilon$:

$$
1 - \min_{\epsilon} \big\{ P_{D_{in}}\big(q(x) \le \epsilon\big) P(x \text{ is from } D_{in}) + P_{D_{out}}\big(q(x) > \epsilon\big) P(x \text{ is from } D_{out}) \big\},
$$

where $q(x)$ is a confidence score. Similar to Lee et al. (2018b), we assume that $P(x \text{ is from } D_{in}) = P(x \text{ is from } D_{out})$.
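The DAcc metric can be computed with a direct sweep over thresholds. A small sketch under the equal-prior assumption above (our illustration; higher scores are taken to indicate in-distribution data):

```python
import numpy as np

def detection_accuracy(scores_in, scores_out):
    """Sketch of DAcc: maximum classification accuracy over all thresholds,
    assuming P(in) = P(out) = 1/2."""
    thresholds = np.unique(np.concatenate([scores_in, scores_out]))
    best = 0.0
    for eps in thresholds:
        # Correctly kept in-distribution examples plus correctly rejected OOD.
        acc = 0.5 * np.mean(scores_in > eps) + 0.5 * np.mean(scores_out <= eps)
        best = max(best, acc)
    return best
```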
4.2.2 EXPERIMENTAL SETUP

To demonstrate the adaptability and effectiveness of our method, we adopt the experimental setup of Lee et al. (2018b). We train a ResNet (He et al., 2016) with 34 layers using the CIFAR-10, CIFAR-100, and SVHN datasets as $D_{in}$. For the CIFAR experiments, SVHN, TinyImageNet (a sample of 10,000 images drawn from the ImageNet dataset), and LSUN are used as $D_{out}^{test}$. For the SVHN experiments, CIFAR-10, TinyImageNet, and LSUN are used as $D_{out}^{test}$. Both TinyImageNet and LSUN images are downsampled to 32 × 32. Similar to Lee et al. (2018b), for the Mahalanobis distance-based classifier we train the ResNet model for 200 epochs with batch size 128 by minimizing the cross-entropy loss using the SGD algorithm with momentum 0.9. The learning rate starts at 0.1 and is dropped by a factor of 10 at 50% and 75% of the training progress, respectively. Subsequently, we compute the Mahalanobis distance-based confidence score using both the input pre-processing and the feature-ensemble techniques. The hyperparameters that need to be tuned are the magnitude of the noise added to each test input example and the layer indices for the feature ensemble. Similar to Lee et al. (2018b), both are tuned using a separate validation dataset consisting of both in- and out-of-distribution data.

Since the Mahalanobis distance-based classifier belongs to the "family" of DBPT methods for OOD detection, it can be combined with our proposed method. More specifically, we initially train the 34-layer ResNet model for 200 epochs using exactly the training details mentioned above. Subsequently, we fine-tune the network with the proposed loss function (3) using the 80 Million Tiny Images as $D_{out}^{OE}$. During fine-tuning, we use the SGD algorithm with momentum 0.9 and a cosine learning rate (Loshchilov & Hutter, 2017) with an initial value of 0.001, using a batch size of 128 for data sampled from $D_{in}$ and a batch size of 256 for data sampled from $D_{out}^{OE}$. For the CIFAR-10 and CIFAR-100 experiments, we fine-tuned the network for 30 and 20 epochs respectively, while for SVHN the corresponding number of epochs was 5. The values of the hyperparameters $\lambda_1$ and $\lambda_2$ were chosen using a separate validation dataset consisting of both in- and out-of-distribution images, similar to Lee et al. (2018b). The results are shown in Table 4.

Discussion. The results in Table 4 demonstrate the effectiveness of our method when combined with the Mahalanobis distance-based classifier, since it outperforms the original version of the Mahalanobis method proposed by Lee et al. (2018b) in all of the experiments. This further validates the contribution of our technique: it not only achieves state-of-the-art results in OOD detection with OE, but can additionally be combined with DBPT methods like the Mahalanobis distance-based classifier to achieve state-of-the-art results in the OOD detection task. The superior performance of our method when combined with the Mahalanobis distance-based classifier can be explained by the fact that the latter extracts the learned features from the layer(s) of the DNN and subsequently uses those features to define a confidence score based on the Mahalanobis distance. The simulation results presented in Table 1 and Table 3 showed that our method teaches the DNN to learn feature representations that further distinguish in- and out-of-distribution data; therefore, the combination of the two methods improves the OOD detection capability of the DNN.

5 CONCLUSION

In this paper, we proposed a method for simultaneous classification and out-of-distribution detection. The proposed loss function includes two regularization terms: the first minimizes the $\ell_1$ norm between the output distribution of the softmax layer of a DNN and the uniform distribution, while the second minimizes the squared distance between the training accuracy of the DNN and its average confidence on the training set. Experimental results showed that the proposed loss function achieves state-of-the-art results in OOD detection with OE (Hendrycks et al., 2019) in both image and text classification tasks. Additionally, we experimentally showed that our method can be combined with DBPT methods for OOD detection, such as the Mahalanobis distance-based classifier (Lee et al., 2018b), and achieves state-of-the-art results in the OOD detection task.

A EXPANDED IMAGE OOD DETECTION RESULTS AND DATASETS USED FOR COMPARISON WITH STATE-OF-THE-ART IN OE

A.1 IMAGE OOD DETECTION RESULTS

A.2 $D_{in}$, $D_{out}^{OE}$ AND $D_{out}^{test}$ FOR IMAGE EXPERIMENTS

SVHN: The Street View House Numbers (SVHN) dataset (Netzer et al., 2011) consists of 32 × 32 color images, of which 604,388 are used for training and 26,032 for testing. The dataset has 10 classes and was collected from real Google Street View images. Similar to Hendrycks et al. (2019), we rescale the pixels of the images to lie in [0, 1].

CIFAR 10: This dataset (Krizhevsky & Hinton, 2009) contains 10 classes and consists of 60,000 32 × 32 color images, of which 50,000 belong to the training set and 10,000 to the test set. Before training, we standardize the images per channel, similar to Hendrycks et al. (2019).

CIFAR 100: This dataset (Krizhevsky & Hinton, 2009) consists of 20 distinct superclasses, each containing 5 different classes, giving a total of 100 classes. The dataset contains 60,000 images in total, and we use the standard 50,000/10,000 train/test split. Before training, we standardize the images per channel, similar to Hendrycks et al. (2019).

80 Million Tiny Images: The 80 Million Tiny Images dataset (Torralba et al., 2008) was used exclusively in our experiments to represent $D_{out}^{OE}$. It consists of 32 × 32 color images collected from the Internet.
Similar to Hendrycks et al. (2019), in order to make sure that $D_{out}^{OE}$ and $D_{out}^{test}$ are disjoint, we removed all the images of the dataset that appear in the CIFAR 10 and CIFAR 100 datasets.

Places365: The Places365 dataset introduced by Zhou et al. (2018) was used exclusively in our experiments to represent $D_{out}^{test}$. It consists of millions of photographs of scenes.

Gaussian: A synthetic image dataset created by i.i.d. sampling from an isotropic Gaussian distribution.

Bernoulli: A synthetic image dataset created by sampling from a Bernoulli distribution.

Blobs: A synthetic dataset of images with definite edges.

Icons-50: This dataset, introduced by Hendrycks & Dietterich (2018), consists of 10,000 images belonging to 50 classes of icons. As part of preprocessing, we removed the class "Number" in order to make it disjoint from the SVHN dataset.

Textures: This dataset contains 5,640 textural images (Cimpoi et al., 2014).

LSUN: It consists of around 1 million large-scale images of scenes (Yu et al., 2015).

Rademacher: A synthetic image dataset created by sampling from a symmetric Rademacher distribution.

A.3 VALIDATION DATA FOR IMAGE EXPERIMENTS

Uniform Noise: A synthetic image dataset where each pixel is sampled from U[0, 1] or U[−1, 1], depending on the input space of the classifier.

Arithmetic Mean: A synthetic image dataset created by randomly sampling a pair of in-distribution images and subsequently taking their pixelwise arithmetic mean.

Geometric Mean: A synthetic image dataset created by randomly sampling a pair of in-distribution images and subsequently taking their pixelwise geometric mean.

Jigsaw: A synthetic image dataset created by partitioning an image sampled from $D_{in}$ into 16 equally sized patches and subsequently permuting those patches.

Speckle Noised: A synthetic image dataset created by applying speckle noise to images sampled from $D_{in}$.

Inverted Images: A synthetic image dataset created by inverting the color channels of images sampled from $D_{in}$.

RGB Ghosted: A synthetic image dataset created by shifting and reordering the color channels of images sampled from $D_{in}$.

B EXPANDED TEXT OOD DETECTION RESULTS AND DATASETS USED FOR COMPARISON WITH STATE-OF-THE-ART IN OE

B.1 $D_{in}$, $D_{out}^{OE}$ AND $D_{out}^{test}$ FOR NLP EXPERIMENTS

20 Newsgroups: This dataset contains 20 different newsgroups, each corresponding to a specific topic. It contains around 19,000 examples, and we used the standard 60/40 train/test split.

TREC: A question classification dataset containing around 6,000 examples from 50 different classes. Similar to Hendrycks et al. (2019), we used 500 examples for the test phase and the rest for training.

SST: The Stanford Sentiment Treebank (Socher et al., 2013) is a binary classification dataset for sentiment prediction of movie reviews containing around 10,000 examples.

WikiText-2: This dataset contains over 2 million tokens extracted from Wikipedia articles and is used exclusively as $D_{out}^{OE}$ in our experiments. We used the same preprocessing as in Hendrycks et al. (2019) in order to have a valid comparison.

SNLI: The Stanford Natural Language Inference (SNLI) corpus is a collection of 570,000 human-written English sentence pairs (Bowman et al., 2015).

IMDB: A sentiment classification dataset containing movie reviews.

Multi30K: A dataset of English and German descriptions of images (Elliott et al., 2016). For our experiments, only the English descriptions were used.

WMT16: A dataset used for machine translation tasks. For our experiments, only the English part of the test set was used.
Yelp: A dataset containing user reviews of businesses on Yelp.

EWT: The English Web Treebank (EWT) consists of 5 different datasets: weblogs (EWT-W), newsgroups (EWT-N), emails (EWT-E), reviews (EWT-R), and questions-answers (EWT-A).

B.2 VALIDATION DATA FOR NLP EXPERIMENTS

The validation dataset $D_{out}^{val}$ used for the NLP OOD detection experiments was constructed as follows. For each dataset used as $D_{in}$, we used the remaining two in-distribution datasets as $D_{out}^{val}$. For instance, during the experiments where 20 Newsgroups represented $D_{in}$, we used TREC and SST as $D_{out}^{val}$, making sure that $D_{out}^{val}$ and $D_{out}^{test}$ are disjoint.

B.3 TEXT OOD DETECTION RESULTS

C TRAINING DETAILS FOR THE EXPERIMENTAL RESULTS FOR COMPARISON WITH MAHALANOBIS DISTANCE-BASED CLASSIFIER

During fine-tuning with our proposed loss function given by (3), we used the training details presented in Table 7. The values of the hyperparameters $\lambda_1$ and $\lambda_2$ were chosen using a separate validation dataset consisting of both in- and out-of-distribution images, similar to Lee et al. (2018b).
1. What is the focus of the paper regarding out-of-distribution detection in deep neural networks?
2. What are the strengths and weaknesses of the proposed method, particularly its modifications to the original objective?
3. Do you have any concerns regarding the level of novelty and contributions of the paper?
4. How does the reviewer assess the presentation and motivation of the modified loss terms?
5. Are there any questions or concerns regarding the experimental results and comparisons with other works?
Review
Review The paper considers the problem of out-of-distribution detection in the context of image and text classification with deep neural networks. The proposed method is based on [1], where the cross-entropy between a uniform distribution and the predictive distribution is minimized on the out-of-distribution data during training. The authors propose two simple modifications to the objective. The first modification is to replace the cross-entropy between the predictive and uniform distributions with an l1-norm. The second modification is to add a separate loss term that encourages the average confidence on training data to be close to the training accuracy. The authors show that these modifications improve results compared to [1] on image and text classification.

There are a few concerns I have for this paper. The main issue is that I am not sure the level of novelty is sufficient for ICLR. The contributions of the paper consist of a new loss term and a modification of the other loss term in OE [1]. At the same time, the paper achieves an improvement over OE consistently on all the considered problems. Given the limited novelty, experiments aimed at understanding the proposed modification would strengthen the paper. Right now I am leaning towards rejecting the paper, but if the authors add more insight into why the proposed modifications help, or provide a strong rebuttal, I may update my score. I discuss the other, less general concerns below.

1. I believe the presentation of the method in Section 3 is suboptimal. The authors start by presenting a constrained minimization problem, then convert it to a problem with Lagrange multipliers, then modify the problem in an ad hoc way (adding a square and a norm without any mathematical reason), to get the standard form of a loss with regularizers. The presentation would be much cleaner, and wouldn't lose anything, if the authors directly presented the loss with regularizers. Furthermore, in the Lagrange multiplier view the Lagrange multipliers are not hyper-parameters, they are dual variables, and the optimization problem shouldn't be just with respect to theta; we need to find a stationary point with respect to both lambda and theta.

2. The motivation for changing the distance measure between the predictive distribution on outlier data and the uniform distribution is unclear. The authors state multiple times that KL is not a distance metric, but it isn't clear why this is important. KL is commonly used as a measure of distance between distributions. One could also use symmetrized KL in order to get a distance metric that is similar to KL. I am not opposed to just using the l1-norm because it performs better, but if the switch of distance measures is listed as one of the two main methodological contributions, I believe more insight needs to be provided for why it helps.

3. The motivation for the other loss term, which enforces calibration of uncertainty on train data, is also not very clear. At least in image classification, strong networks typically achieve perfect accuracy (or close to that) on the train set, and then the proposed loss term would basically push the predictive confidence on all training data to 1, which is already enforced by the standard cross-entropy loss. Does outlier exposure prevent the classifier from getting close to 100% accuracy on train? What is the train accuracy for the experiments on CIFAR-10, CIFAR-100 and SVHN?
4. While the authors report the accuracy of out-of-distribution detection in the experiments, they don't report the accuracy of the actual classifier on in-distribution data. Is this accuracy similar for the proposed method and OE? Is the accuracy also similar for the proposed method and the baseline for the experiment in section 4.4?

5. The method is only being compared to the OE method of [1]. Why is this comparison important, and are there other methods that the authors could compare against?

[1] Deep Anomaly Detection with Outlier Exposure. Dan Hendrycks, Mantas Mazeika, Thomas Dietterich
ICLR
Title Calibration for Decision Making via Empirical Risk Minimization

Abstract Neural networks for classification can achieve high accuracy, but their probabilistic predictions may not be well-calibrated, in particular overconfident. Various general calibration measures and methods have been proposed. But how exactly does calibration affect downstream tasks? We derive a new task-specific definition of calibration for the problem of statistical decision making with a known cost matrix. We then show that so-defined calibration can be rigorously improved by minimizing the empirical risk in adjustment parameters such as temperature. For the empirical risk minimization, which is not differentiable, we propose improvements to and an analysis of the direct loss minimization approach. Our experiments indicate that task-specific calibration can perform better than generic calibration. We also carefully investigate weaknesses of the proposed tool and issues in the statistical evaluation for problems with highly unbalanced decision costs.

1 INTRODUCTION

The notion of calibration originates in forecasting, in particular in meteorology. The following example explains the concept well. Amongst all days when a forecast was made that the chance of rain is 33%, one would expect about a third of them to be rainy and two thirds sunny. If this is the case for all forecasts, the predictor is said to be well-calibrated. We would like the forecaster to be accurate; however, if this is not achievable, we would like it at least to accurately reflect what it does not know, i.e., to be calibrated.

The most common model for classification in deep learning consists of a softmax predictive distribution for class labels atop a deep architecture processing the observation. In view of the excessive number of parameters, there is a natural concern whether such models learn accurate predictive probabilities p(y|x). Indeed, neural networks are typically not well calibrated (Guo et al., 2017); in particular they may be over-confident, i.e., incorrect much more often than their high confidence suggests. Like in the example above, such over-confidence is misleading for interpreting the results. It is also commonly understood that it can mislead any downstream processing relying on the predictive probabilities. There has therefore been a substantial effort to improve calibration. Unfortunately, even measuring the basic confidence calibration accurately in practice remains challenging (Nixon et al., 2019). In the multi-class setting a vector of class probabilities is output, and all of them can be important for the downstream processing. This has led to the development of more complex definitions such as distribution calibration (Vaicenavicius et al., 2019). In this setting, despite the development of new estimators (Vaicenavicius et al., 2019; Widmann et al., 2019), it is practically infeasible to obtain a reliable estimate: there is not enough data to reject the hypothesis that the model is well calibrated.

Are we lost, then, in the attempt to make the predictive probabilities reliable? Not necessarily. Observe that these calibration definitions, both the simple and the complex ones, do not consider a specific downstream problem that could be affected by poor probabilistic predictions. They try to address all such problems (as well as the purpose of interpretability) at once.
We will argue that considering a specific downstream task allows one to significantly reduce the complexity of the calibration problem, making it feasible in practice. As the specific downstream task we will consider Bayesian decision making with a trained NN. It is well-established to train NNs for classification by optimizing the cross-entropy loss. The model architecture and the training pipeline are tuned by researchers to achieve the best generalization with respect to classification accuracy. However, this procedure does not take into account the different costs of different classification mistakes. To give an example, misclassifying different mushrooms can have different costs for eating: some mistakes are between equally edible species and incur no cost, while other mistakes lead to risks of poisoning. Given a finite set of decisions D and a cost matrix l(y, d), one could try to adapt the learned NN model to form the Bayesian decision strategy q(x) achieving the smallest risk:

$$
q(x) = \arg\min_{d} \sum_{y} p(y|x)\, l(y, d).
\tag{1}
$$

Such adaptation is practically desirable: it would allow one to rely on the established and tuned training approach and to reuse existing models, which may be very costly to retrain from scratch. However, because the model p(y|x) is typically inaccurate in predicting all the class probabilities, this may lead to suboptimal decisions and, respectively, poor outcomes of such adaptation. In particular, an overconfident model will result in both making sub-optimal decisions and underestimating their expected cost. One could reasonably hope to learn a more accurate predictor by designing a yet better architecture and using more training data. It may nevertheless stay poorly calibrated and unsuitable for the above adaptation. If the strong distribution calibration were achievable as a post-processing step, the adaptation would work perfectly; however, this calibration is practically not feasible. On the other hand, any weaker task-unspecific definition of calibration (e.g., class-wise calibration, Vaicenavicius et al. 2019) would not work: there would exist a decision task for which (1) performs poorly. A formalization of calibration important for (a class of) decision making problems was proposed only recently, by Zhao et al. (2021).

As our main theoretical result, we derive a notion of calibration important for adopting strategy (1) with a given cost matrix. The formalism and tools used for that also allow a better understanding of the relation between existing calibration methods, in particular the empirically successful temperature scaling variants (Guo et al., 2017; Alexandari et al., 2020), and distribution calibration. Specifically, we show that these methods are guaranteed to improve the expected miscalibration as measured by the corresponding divergence. Calibrating the model with respect to the new definition is shown to be equivalent to minimizing the (empirical) risk of strategy (1) in the calibration parameters. As a means of empirical risk minimization we study the direct loss approach and relate it to margin rescaling. Experimentally, we show that task-aware calibration, using the direct loss approach, can outperform generic calibration. However, we also observe that for tasks with extremely unbalanced losses, modeling a dangerous class, we lack reliable means to assess the quality of calibration.

2 RELATED WORK

We consider as calibration methods all methods that can improve the predictive model p(y|x) when provided with some additional calibration data.
Typically, calibration is achieved by a post-processing of the scores or predictive probabilities, such as temperature scaling, bias-corrected temperature scaling, or vector scaling. We regard these as different choices of parametrization, i.e., choices of the degrees of freedom to calibrate. Most importantly, existing calibration methods differ in the criterion they optimize.

Calibration Unaware of the Task. Many calibration techniques, while motivated by the notion and a particular definition of calibration, use a generic criterion unrelated to that notion. A very practical method to calibrate a model turns out to be likelihood maximization, i.e., relying on the same criterion that is commonly used for training. This is the approach taken in Guo et al. (2017); Alexandari et al. (2020); Kull et al. (2019). Methods optimizing variants of the expected calibration error (ECE), which is a measure of miscalibration, were compared by Nixon et al. (2019); however, there is no performance criterion other than ECE itself. Further variants are piece-wise parametric (Kumar et al., 2019) and kernel-based (Kumar et al., 2018) confidence calibration methods.

Calibration Aware of the Task. The decision calibration of Zhao et al. (2021) takes the decision problem into consideration. It will be discussed in detail below. Their calibration method is designed for a given decision space, but cannot make use of a specific cost matrix even if it were known at calibration time.

Empirical Risk Minimization. Methods optimizing the empirical risk (Song et al., 2016; Vlastelica et al., 2019; Taskar et al., 2005) have been used for training with complex objectives measuring performance in retrieval, ranking, or structured prediction. They have not been considered for the calibration of classification models before. They address the difficulty of the non-differentiability of the loss and have the potential to exploit the full information about the cost matrix.

3 BACKGROUND

Let X be the space of observations and Y the set of labels. Assume there is an underlying true joint probability distribution on X × Y, denoted p∗. Let (X, Y) be a pair of random variables with law p∗. All expectations and probabilities are meant with respect to (X, Y). Let ∆ denote the simplex of probabilities over Y. Let p(y|x) be a probabilistic predictor, usually a neural network with softmax output. The predictor can be considered as a mapping $\pi : \mathcal{X} \to \Delta$, $x \mapsto p(\cdot\,|\,x)$.

3.1 GENERAL CALIBRATION

First works analyzing calibration in machine learning (Guo et al., 2017) were concerned only with the confidence of the model, i.e., the model's probability of the class actually predicted. Let $\hat{y}(x) = \arg\max_k \pi_k(x)$ be the label predicted by the model and $c(X) = \max_k \pi_k(X)$ the respective predictive probability, called confidence.

Definition 1. The model is confidence calibrated if

$$
P\big( Y = \hat{y}(X) \mid c(X) \big) \overset{a.s.}{=} c(X).
\tag{2}
$$

It requires that, amongst all data points for which the prediction has confidence c, the expected occurrence of the true label matches c. The respective miscalibration can be measured, e.g., by the Expected Calibration Error (ECE) (Degroot & Fienberg, 1983), which is typically estimated by discretizing the probability interval into bins. Substantial efforts were put into calibrating neural networks in this sense (e.g., Naeini et al. 2015; Guo et al. 2017; Nixon et al. 2019).
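A typical binned ECE estimator of this kind can be sketched as follows (the bin count and equal-width bins are our assumptions); the critique in the next sentence applies precisely to such estimators.

```python
import numpy as np

def expected_calibration_error(confidences, correct, num_bins=15):
    """Sketch of the standard binned ECE estimator: bin predictions by
    confidence and average the |accuracy - confidence| gap per bin,
    weighted by the fraction of points in the bin.

    confidences: array of max predicted probabilities in [0, 1]
    correct:     boolean array, True where the prediction was right
    """
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    ece = 0.0
    n = len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.sum() / n * gap
    return ece
```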
However, Kumar et al. (2019) argue that binning underestimates the calibration error and that, in fact, an accurate estimation is possible only when the predictor π outputs a discrete set of values.

In the multi-class setting, confidence calibration may be insufficient. There may be downstream tasks which require the whole vector of predicted probabilities to be accurate. In the machine learning literature this has come to attention only recently (Vaicenavicius et al., 2019). The strongest notion of calibration (Bröcker 2009, reliability, Eq. 1) is as follows.

Definition 2. A predictor $\pi : \mathcal{X} \to \Delta$ is called distribution calibrated if

$$
(\forall y \in \mathcal{Y}) \quad P\big( Y = y \mid \pi(X) \big) \overset{a.s.}{=} \pi(X)_y.
\tag{3}
$$

In words: amongst all data points of the input space where the predicted vector of probabilities is $\pi(x) = \mu$, the true observed class labels should be distributed as $\mu$. Respectively, the predictor

$$
\phi[\pi](x)_y = P\big( Y = y \mid \pi(X) = \pi(x) \big)
\tag{4}
$$

is the (optimal) calibration of π: it takes the prediction π(x) and turns it into the true distribution of labels under that initial prediction. This predictor φ[π] is distribution calibrated, and Definition 2 can be restated as $\phi[\pi](X) \overset{a.s.}{=} \pi(X)$, i.e., the calibration of π is π itself. Generalizing ECE, the expected miscalibration of π with respect to a divergence $D : \Delta \times \Delta \to \mathbb{R}$ is

$$
\mathbb{E}\big[D(\pi(X), \phi[\pi](X))\big],
\tag{5}
$$

i.e., the average divergence between the predicted distribution and its calibration. It is hard to estimate in practice, because of the conditioning on a real vector $\pi(X) = \pi(x)$ in the definition of the calibration φ[π]. It becomes tricky to verify whether a model is calibrated using only a finite data sample. Different methods have been proposed based on binning of ∆ (Vaicenavicius et al., 2019) or on kernel-based divergences (Widmann et al., 2019). Unfortunately, statistical tests based on unbiased estimates (Widmann et al., 2019) were unable to reject the hypothesis that any basic neural network in a real setting, such as on MNIST data, is already calibrated. No calibration methods were proposed based on this miscalibration.

3.2 CALIBRATION FOR STATISTICAL DECISION MAKING

Let us consider the classical statistical decision making problem for a known model p∗. Let D be a finite decision space. Consider a cost matrix $l : \mathcal{Y} \times \mathcal{D} \to \mathbb{R}_+$ and a decision strategy $q : \mathcal{X} \to \mathcal{D}$. The risk of the strategy q and the optimal (Bayesian) decision strategy are, respectively,

$$
R^*[q] = \mathbb{E}\big[ l(Y, q(X)) \big], \qquad q^*(x) = \arg\min_{d} \sum_{y} p^*(y|x)\, l(y, d).
\tag{6}
$$

In practice, we do not have access to the true distribution p∗ to make decisions, only to the model p(y|x). Let $f(d, x) = \sum_y p(y|x)\, l(y, d)$ denote the model-based conditional risk for observation x. The model-based risk and the model-based Bayesian decision strategy are, respectively,

$$
\hat{R}[q] = \mathbb{E}\big[ f(q(X), X) \big], \qquad \hat{q}(x) = \arg\min_{d} f(d, x).
\tag{7}
$$

If the model p were distribution calibrated, these two risks would coincide: $\hat{R}[q] = R^*[q]$ for any strategy q and any cost matrix (Zhao et al., 2021).
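Strategy (7) is a one-line computation once the predictive probabilities and the cost matrix are available; a minimal NumPy sketch (our illustration):

```python
import numpy as np

def model_based_decision(probs, cost):
    """Sketch of the model-based Bayesian strategy (7): given predicted
    class probabilities probs[y] and a cost matrix cost[y, d], pick the
    decision minimizing the model-based conditional risk f(d, x)."""
    risks = probs @ cost          # risks[d] = sum_y p(y|x) * l(y, d)
    return int(np.argmin(risks))
```

For instance, with a mushroom cost matrix as in the introduction, a large poisoning cost makes this strategy reject mushrooms whose predicted probability of being poisonous is even modest.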
However, as was discussed above, distribution calibration is hard even to measure in practice. This has led to the following definition.

Definition 3 (Zhao et al. 2021). For a set of cost matrices L and a set of strategies Q, the predictor π is called (L, Q)-decision calibrated if for all l ∈ L and q ∈ Q the model risk matches the true risk: $\hat{R}[q] = R^*[q]$.

Zhao et al. (2021) show that this definition generalizes previous notions of calibration by specifying the corresponding statistical decision problems. In particular, confidence calibration corresponds to recognition with the reject option with a varied cost of rejecting. Distribution calibration can be understood as (L, Q)-decision calibration for all possible loss functions and decision strategies over all possible decision spaces D, which is clearly far too general. It follows from the definition that a decision-calibrated model must accurately estimate the true risk and that the model-based strategy q̂ is the optimum of the true risk over Q. Therefore (L, Q)-decision calibration is sufficient for any statistical decision making task with l ∈ L and q ∈ Q.

4 METHOD

The condition of (L, Q)-decision calibration (Zhao et al., 2021) is still unnecessarily stringent if we have a specific fixed cost matrix l and are interested in the performance of only one particular decision strategy: the model-based Bayesian strategy q̂ of (7). Their calibration algorithm is derived under the assumption that L is the set of all cost matrices of bounded norm over a fixed decision space, and thus cannot be chosen as, e.g., L = {l}. We will show that minimizing the risk of the model-based strategy, $R^*[\hat{q}]$, can improve a precise measure of calibration under a known cost matrix while obviously not compromising on the task-specific performance metric, which is the risk $R^*[\hat{q}]$ itself.

4.1 CALIBRATION VIA LOSS MINIMIZATION

Bröcker (2009) showed that any loss function corresponding to a proper scoring rule satisfies a decomposition into uncertainty, resolution (sharpness), and reliability (miscalibration). A scoring rule S is a function $\Delta \times \mathcal{Y} \to \mathbb{R}$, and the expected score, which we call the loss for brevity, is $L[\pi] = \mathbb{E}[S(\pi(X), Y)]$. For example, the negative log-likelihood loss (NLL) corresponds to the scoring rule $S(\pi, y) = -\log \pi_y$. The decomposition reads

$$
L[\pi] = \underbrace{H(\bar{\pi})}_{\text{uncertainty of } Y} - \underbrace{\mathbb{E}\big[ D(\bar{\pi}, \phi[\pi](X)) \big]}_{\text{resolution of } \pi} + \underbrace{\mathbb{E}\big[ D(\pi(X), \phi[\pi](X)) \big]}_{\text{reliability of } \pi},
\tag{8}
$$

where $\bar{\pi}$ is the a priori distribution of labels, $\bar{\pi}_y = p^*(y)$, φ[π] is the calibration of π (4), and H and D are the particular entropy and divergence functions corresponding to the score S. In the case of NLL, they are the Shannon entropy and the Kullback-Leibler divergence. Prominently, the reliability term in this decomposition is exactly the expected miscalibration (5) with respect to the score-specific divergence D. If we substitute φ[π] as a predictor, we find that it has zero expected miscalibration while the first two terms remain the same:

$$
L[\phi[\pi]] = H(\bar{\pi}) - \mathbb{E}\big[ D(\bar{\pi}, \phi[\pi](X)) \big] \le L[\pi],
\tag{9}
$$

where the equality uses the fact that $\phi[\phi[\pi]] = \phi[\pi]$ and the inequality is due to the divergence being non-negative. Thus φ[π] not only achieves distribution calibration but is also guaranteed not to increase any loss corresponding to a proper scoring rule. This sheds some light on why optimizing NLL is good for calibration, as evidenced, e.g., by Guo et al. 2017; Alexandari et al. 2020, in particular improving ECE.

Calibration methods often fit a parametric post-processing of a predictor π, such as temperature scaling (Guo et al., 2017). They argue about calibration but optimize NLL. We formally show why this is a perfectly correct idea.

Theorem 1. Let $\pi : \mathcal{X} \to \Delta$ be a predictor and $T_\theta : \Delta \to \Delta$ a parametric mapping, invertible for each θ ∈ Θ. Finding an adjusted predictor $\pi_\theta = T_\theta \circ \pi$ minimizing the expected miscalibration is equivalent to minimizing the loss:

$$
\min_{\theta \in \Theta} \mathbb{E}\big[ D(\pi_\theta(X), \phi[\pi_\theta](X)) \big] = \min_{\theta \in \Theta} L[\pi_\theta].
\tag{10}
$$

Proof. First we show that φ[T ◦ π] is invariant under T for any invertible T.
The events $T(\pi(X)) = T(\pi(x))$ and $\pi(X) = \pi(x)$ coincide, therefore

$$
\phi[T \circ \pi](x)_y = P\big( Y = y \mid T(\pi(X)) = T(\pi(x)) \big) = \phi[\pi](x)_y.
\tag{11}
$$

It follows that $D(\bar{\pi}, \phi[T \circ \pi](X)) = D(\bar{\pi}, \phi[\pi](X))$. Therefore the first two terms of the decomposition stay the same for any θ, and minimizing the whole loss over θ ∈ Θ is equivalent to minimizing the reliability term alone.

This allows us to overcome the general difficulty of estimating the expected miscalibration by simply using the empirical estimate of the loss! In particular, no binning of the simplex ∆ is involved.

4.2 DECOMPOSITION OF THE RISK

We observe that the true risk of the model-based strategy, $R^*[\hat{q}]$, also corresponds to a proper scoring rule and thus can be decomposed according to the theory.

Proposition 1. The following scoring rule corresponds to the loss of the model-based decision:

$$
S(\pi, y) = l\Big(y,\ \arg\min_{d} \sum_{y'} \pi_{y'}\, l(y', d)\Big).
\tag{12}
$$

For two probability distributions π, ρ in ∆, Bröcker (2009) defines the following scoring function s, divergence D, and entropy H:

$$
s(\pi, \rho) = \sum_{y} S(\pi, y)\rho_y; \qquad D(\pi, \rho) = s(\pi, \rho) - s(\rho, \rho); \qquad H(\rho) = s(\rho, \rho).
\tag{13}
$$

In our case, the score $s(\pi(x), p^*(\cdot|x))$ is the conditional risk of the prediction $\hat{q}(x)$, and its expectation is the risk of the strategy q̂: $\mathbb{E}[S(\pi(X), Y)] = R^*[\hat{q}]$.

Proposition 2. The score s is proper (Bröcker, 2009), i.e., the "divergence" D is non-negative.

Proof. By definition,

$$
s(\rho, \rho) = \sum_{y} S(\rho, y)\rho_y = \sum_{y} \rho_y\, l\Big(y,\ \arg\min_{d} \sum_{y'} \rho_{y'}\, l(y', d)\Big) = \min_{d} \sum_{y} \rho_y\, l(y, d).
\tag{14}
$$

Clearly it satisfies $s(\rho, \rho) \le \sum_y \rho_y l(y, \hat{d})$ for any d̂, in particular $\hat{d} = \arg\min_d \sum_y \pi_y l(y, d)$.

Corollary 1. The decomposition (8) holds for the risk $R^*[\hat{q}]$.

The uncertainty term $H(\bar{\pi}) = \min_d \sum_y p^*(y)\, l(y, d)$ is simply the lowest risk attainable without considering observations. Let us discuss the reliability term. In our case D is not a true divergence, as it may vanish even if the two distributions are different. The reliability term is therefore more permissive. This is appropriate: if, e.g., the cost matrix has two identical rows, there is no need to distinguish the respective classes in the prediction and, respectively, no need to have the correct individual predictive probabilities for them. This motivates us to define task-specific calibration accordingly.

Definition 4. Given a cost matrix l and the "divergence" $D_l$ defined by (13), a predictor π(X) is l-decision calibrated if

$$
D_l\big( \pi(X), \phi[\pi](X) \big) \overset{a.s.}{=} 0.
\tag{15}
$$

For any proper divergence, this definition would be equivalent to the distribution calibration of Definition 2. The selectivity of $D_l$, penalizing only the differences in the distribution that matter for the decision task, is what makes it task-specific. This miscalibration appears hard to estimate in general, as it still involves φ[π]. However, using Theorem 1, we can improve this task-specific calibration in parametric settings (e.g., temperature scaling) by simply minimizing the empirical risk of q̂.

4.3 EMPIRICAL RISK MINIMIZATION

Consider a parametric predictor $\pi(x)_y = p(y|x; \theta)$ and let $f(d, x; \theta) = \sum_y p(y|x; \theta)\, l(y, d)$ as before. Given a sample $(x_i, y_i)_{i=1}^{N}$ from p∗, the empirical risk minimization for the model-based Bayesian strategy reads:

$$
\min_{\theta} \frac{1}{N} \sum_{i} l(y_i, d_i) \quad \text{s.t.} \quad d_i = \arg\min_{d} f(d, x_i; \theta).
\tag{16}
$$

This problem is difficult because it is a so-called bi-level optimization problem whose inner problem has a discrete decision and a non-linear dependence on θ. Such formulations, where the inner problem corresponds to a general predictor based on solving a combinatorial optimization problem, have been studied.
Two methods that have been applied to this kind of problem are large margin methods (Tsochantaridis et al., 2005; Taskar et al., 2005) and direct loss / combinatorial black-box minimization (Song et al., 2016; Vlastelica et al., 2019).

4.3.1 DIRECT LOSS AND MARGIN RESCALING

The empirical risk can be easily evaluated but cannot be differentiated because of the arg min. This arg min over the set of decisions can be considered as a small combinatorial solver. We will specialize and analyze the direct loss method (Song et al., 2016; Vlastelica et al., 2019) for this case. For simplicity, let us consider a single training sample (x∗, y∗) (with multiple training samples, losses and gradients sum up). Let us denote the vector of class probabilities $\pi = p(\cdot|x^*; \theta)$. The estimate of the gradient in π according to the direct loss minimization approach is constructed as follows:

$$
\hat{d} = \arg\min_{d} f(d, x^*); \qquad \hat{d}_\lambda = \arg\min_{d} \big[ f(d, x^*) + \lambda l(y^*, d) \big]; \qquad \hat{\nabla}_\pi := \tfrac{1}{\lambda} \big[ l(\cdot, \hat{d}_\lambda) - l(\cdot, \hat{d}) \big].
\tag{17}
$$

Appendix A.1 gives details on how this is obtained from the general method of Vlastelica et al. (2019). The gradient in θ can then be computed by the chain rule. Here d̂ is the solution of the solver (the Bayesian decision) and $\hat{d}_\lambda$ is the decision of a perturbed problem. The strength of the perturbation is controlled by λ. Song et al. (2016) showed that in the limit λ → 0, the gradient of the expected loss over a continuous data distribution matches $\mathbb{E}[\hat{\nabla}_\pi]$. In this limit, stochastic descent with $\hat{\nabla}_\pi$ would directly minimize the (expectation of the non-differentiable) loss, which was termed direct loss minimization. However, these arguments are not applicable to a finite training sample. In practice, λ needs to be sufficiently large for $\hat{\nabla}_\pi$ to be non-zero for at least some data points. In this setting we are no longer minimizing the original loss. However, one can define a surrogate loss function such that (17) is its true gradient. We call it the direct loss, so the method can now be validly interpreted as direct loss minimization:

$$
L^{\pm}_{\lambda} = \pm \frac{1}{\lambda} \Big( \min_{d} f(d, x^*) - \min_{d} \big[ f(d, x^*) \mp \lambda l(y^*, d) \big] \Big),
\tag{18}
$$

where ∓ is paired with ±. Vlastelica et al. (2019) advocate the use of a large λ, define a surrogate loss similar to $L^{-}_{\lambda}$, and show that it is a lower bound on the empirical loss for positive λ (Observation 3), where the empirical loss is $L^E = l(y^*, \arg\min_d f(d, x^*))$. Note that $L^{\pm}_{-\lambda} = L^{\mp}_{\lambda}$ holds, so we can always assume λ > 0 in order to avoid redundancy. We show the following.

Proposition 3. The direct loss $L^{-}_{\lambda}$ is a lower bound on the empirical loss $L^E$, and $L^{+}_{\lambda}$ is an upper bound.

The proof is given in Appendix A.1. It follows that the expectation of $L^{+}_{\lambda}$ over the training data (resp. the true distribution p∗) is an upper bound on the empirical risk (resp. the true risk).

Relation to Margin Rescaling. The problem of minimizing the empirical risk over discrete strategies of the form $\arg\min_y f(y; \theta)$ was also studied in structured prediction (Tsochantaridis et al., 2005; Taskar et al., 2005). One of the most common approaches is called margin rescaling (Tsochantaridis et al., 2005) and has been successfully used in combination with deep networks as well (e.g., Knöbelreiter et al. 2017). Like the SVM, it puts a hinge loss on the violation of the classification constraints, with the margin proportional to the respective loss.
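Specialized to this setting, the estimator (17) takes only a few lines; a NumPy sketch for a single training sample (our illustration):

```python
import numpy as np

def direct_loss_gradient(probs, cost, y_true, lam):
    """Sketch of the direct-loss gradient estimate (17) for one sample.

    probs:  predicted class probabilities pi = p(.|x*; theta), shape [K]
    cost:   cost matrix l[y, d], shape [K, D]
    y_true: index of the true class y*
    lam:    perturbation strength lambda > 0
    Returns the estimated gradient of the loss w.r.t. pi, shape [K].
    """
    f = probs @ cost                            # model-based risk f(d, x*) per decision
    d_hat = np.argmin(f)                        # Bayesian decision
    d_lam = np.argmin(f + lam * cost[y_true])   # decision of the perturbed problem
    return (cost[:, d_lam] - cost[:, d_hat]) / lam
```

The gradient with respect to θ is then obtained by backpropagating this vector through the predicted probabilities, as noted above.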
We can show (see Appendix A.2) that the margin rescaling approach leads to the following surrogate loss:

$$
L^{MR}_{\lambda} = \frac{1}{\lambda} \Big( f(d^*, x^*) - \min_{d} \big[ f(d, x^*) - \lambda l(y^*, d) \big] \Big),
\tag{19}
$$

where $d^* = \arg\min_d l(y^*, d)$ is the best decision given the true class label. Written in this form, there is a striking similarity to (18). The only difference is that d∗ is the best decision for the loss (knowing the true label) rather than the best decision based on the model (not knowing the true label). As a consequence, margin rescaling is a less tight upper bound.

Proposition 4. Margin rescaling $L^{MR}_{\lambda}$ coincides with the direct loss $L^{+}_{\lambda}$ in the region where the classifier makes correct decisions. Furthermore, $L^E \le L^{+}_{\lambda} \le L^{MR}_{\lambda}$.

The proof is given in Appendix A.2. We believe this connection has not been known before. The two surrogate losses are illustrated in Fig. 1. For both approaches, if λ is small, the size of the margin is small and there is a flat region with zero gradient. As a simple remedy, we propose to smooth the minimum in (18) using the smooth minimum function $\min_\beta(x) = -\frac{1}{\beta} \log \sum_k e^{-\beta x_k}$, where the degree of smoothing is controlled by β.

5 EXPERIMENTS

In the experiments, we compare different calibration criteria for the same choice of a parametric family. Assuming the network outputs scores s (otherwise let $s_y = \log p(y|x)$), we consider the following common choices to parametrize the corrected predictor π.

TS: Temperature Scaling (Guo et al., 2017): $\pi = \mathrm{softmax}(s / T)$, where T is a (non-negative) scalar temperature to calibrate.

BCTS: Bias-Corrected Temperature Scaling (Alexandari et al., 2020): $\pi = \mathrm{softmax}((s + b) / T)$, where additionally b is a vector of per-class biases to calibrate.

VS: Vector Scaling (Alexandari et al., 2020): $\pi = \mathrm{softmax}(s \odot w + b)$, where w is a vector of scaling factors, b is a vector of biases, and ⊙ is the coordinate-wise product.

We optimize each criterion in the above parameters using the Adam optimizer. In order to find the hyperparameters (learning rate, λ, β), we use the nested cross-validation procedure detailed in Appendix B.
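The three parametrizations are one-line maps on top of the scores; a PyTorch sketch (our illustration):

```python
import torch

def ts(s, T):            # Temperature Scaling
    return torch.softmax(s / T, dim=-1)

def bcts(s, T, b):       # Bias-Corrected Temperature Scaling
    return torch.softmax((s + b) / T, dim=-1)

def vs(s, w, b):         # Vector Scaling (coordinate-wise product)
    return torch.softmax(s * w + b, dim=-1)
```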
5.1 FUNGI EDIBILITY (DANISH FUNGI 2020)

In this experiment we consider the decision problem of whether to cook (eat) a mushroom given its predicted edibility category, based on the Danish fungi dataset (Picek et al., 2022). In order to compare calibration methods, we create 15 splits of the data not used during training into calibration and test parts. Full details can be found in Appendix B. The cost matrix and the obtained results are shown in Fig. 2. Calibration with the DirectLoss criterion achieved a lower average test risk in all parametrizations, notably performing well also in the VS parametrization, where the other criteria performed the worst. The improvement over other methods can be considered statistically significant if one trusts the estimates of the mean and the variance (see below).

5.2 SKIN CANCER LESION TREATMENT (HAM10000)

In this experiment, we consider the decision problem of whether to assign a treatment given the lesion classification, using the skin lesion dataset (Tschandl, 2018). The training is performed on 75% of the data for 100 epochs. From the remaining data we create 100 random splits into calibration (15%) and test (10%) parts. Fig. 3(a) shows that the network significantly underestimates the true risk. After calibration (DirectLoss BCTS), the risk decreases, but the risk gap increases for some data splits. Indeed, with our calibration and optimization criterion being the empirical risk, there is no requirement that this gap be made small or even decrease. Nevertheless, such an increase in the gap is unexpected of a calibration method and might indicate overfitting. Fig. 3 (a, b) shows statistics of the differences between the pairs No calibration - DirectLoss and NLL - DirectLoss, confirming that calibration is helpful, but unable to tell whether NLL or DirectLoss is the better calibration objective. Comparisons for the TS and VS parametrizations are shown in Figs. B.2 and B.3.

5.3 RARE EXPENSIVE MISTAKES

We present a failure mode of calibration on the example of the CIFAR-10 dataset, with the trucks class considered dangerous (cost of mistake 10000) and all other mistakes costing 1. In this setting, the decision boundary of q̂ shifts significantly towards classifying nearly all observations as trucks. Instances of trucks on which the model can nevertheless make a mistake become very rare. Depending on whether such an instance falls into the calibration set or into the test set, it may lead to a high cost at test time. In Fig. 4, for many splits DirectLoss may be better than NLL in calibration, but in one split it makes a single expensive mistake. Only by chance was such a case not observed for NLL. Also, this was not observed in the Fungi experiment above (which also has extreme costs), presumably because deadly poisonous mushrooms are rather rare in the dataset. The empirical risk is theoretically backed by generalization guarantees such as the Hoeffding inequality:

$$
P\big( |R^*(q) - R^*_{\text{emp}}(q)| > \varepsilon \big) < 2 e^{-2N\varepsilon^2 / \Delta l^2},
$$

where N is the number of samples and Δl is the difference between the maximum and minimum cost. This means that in order to achieve the same confidence we used to have for the 0-1 cost, we need $10^8$ times more samples. We therefore would like to warn the community against relying on basic statistical evaluation, as in our Fungi experiment, and would be happy to receive feedback on how to approach the problems associated with high costs, in particular when evaluating calibration methods.

6 CONCLUSION

We have given a so-far-missing theoretical justification for post-processing recalibration methods optimizing generic criteria, in particular NLL, showing how they relate to notions of calibration. We then developed a decomposition of the risk of the model-based Bayesian decision strategy and derived the respective definition of calibration from it. This approach gives a constructive way to obtain new task-specific definitions of calibration. We also improved the understanding of the direct loss and margin rescaling methods for ERM. We believe these results generalize beyond our calibration setup. In the experiments we observed that calibration was important for improving the test risk and that task-specific calibration, represented by DirectLoss, can be more efficient (Fungi experiment, high costs). Calibration was also helpful in the lesions experiment (moderate costs); however, the increase in the risk gap indicates overfitting with DirectLoss. Finally, we demonstrated a failure case of DirectLoss and a flaw in the comparison under high costs.

ETHICS STATEMENT

Please be aware that neural networks can make unpredictable mistakes and produce overconfident estimates. Calibration methods, in particular the proposed one, are not guaranteed to fix these issues. They can improve statistical performance and measures of miscalibration.
6 CONCLUSION
We have given a so-far-missing theoretical justification for post-processing recalibration methods optimizing generic criteria, in particular NLL, showing how they are related to notions of calibration. We then developed a decomposition of the risk of the model-based Bayesian decision strategy and derived the respective definition of calibration from it. This approach gives a constructive way to obtain new task-specific definitions of calibration. We then improved the understanding of the direct loss and margin rescaling methods for ERM. We believe these results generalize beyond our calibration setup. In the experiments we observed that calibration was important to improve the test risk and that task-specific calibration, represented by DirectLoss, can be more efficient (Fungi experiment, high costs). The calibration was also helpful in the lesions experiment (moderate costs); however, the increase in the risk gap indicates overfitting with DirectLoss. Finally, we demonstrated a failure case of DirectLoss and a flaw in the comparison under high costs.
ETHICS STATEMENT
Please be aware that neural networks can make unpredictable mistakes and produce overconfident estimates. Calibration methods, in particular the proposed one, are not guaranteed to fix these issues. They can improve statistical performance and measures of miscalibration. However, the statistics are random quantities and have to be considered very carefully, especially in the case of high costs, as we show in Section 5.3. The experiments conducted on decision making with the fungi or lesion datasets should be considered only as a proof of concept.
REPRODUCIBILITY STATEMENT
Appendix A contains proofs not included in the main paper. Appendix B contains a description of the datasets and details of the training, calibration and testing procedures. Implementation details can be provided to reviewers confidentially through OpenReview upon request.
A PROOFS
A.1 DIFFERENTIATION OF BLACKBOX COMBINATORIAL SOLVERS (DIRECT LOSS)
We first detail how the general method of Vlastelica et al. (2019) is instantiated for our problem and verify that it is the gradient of the function $L^-_\lambda$ we define in (18). A general linear combinatorial solver is formalized in Vlastelica et al. (2019) as
$\mathrm{Solver}(w) = \arg\min_d w^T \phi(d),$  (20)
where $\phi$ represents the discrete choice $d$ as a vector of the same dimension as $w$. The direct loss method (Vlastelica et al., 2019, Alg. 1) is given by
$\hat d := \mathrm{Solver}(w);$  (21a)
$w' := w + \lambda \frac{dL}{d\phi}(\hat d);$  (21b)
$\hat d_\lambda := \mathrm{Solver}(w');$  (21c)
$\nabla w := -\frac{1}{\lambda}\big[\phi(\hat d) - \phi(\hat d_\lambda)\big].$  (21d)
In our case the solver needs to be
$\hat d = \arg\min_d f(d, x^*) = \arg\min_d \sum_y p(y|x^*; \theta)\, l(y, d).$  (22)
Let $\pi = p(\cdot|x^*; \theta)$. Two choices for $\phi$ qualify: 1. let $\phi(d) = \mathrm{one\_hot}(d)$ and $w \in \mathbb{R}^D$ with $w_d = \sum_y \pi_y l(y, d)$; 2. let $\phi(d)_y = l(y, d)$ and $w \in \mathbb{R}^K$ with $w_k = \pi_k$. Both choices lead to equivalent algorithms. We proceed with the second one for convenience, as it defines the gradient in π. Our loss is $L(d) = l(y^*, d)$, therefore $\frac{dL}{d\phi_y}(\hat d) = [[y = y^*]]$. The direct loss method specializes as follows:
$\hat d := \arg\min_d \sum_y \pi_y l(y, d);$  (23a)
$\pi' := \pi + \lambda\, \mathrm{one\_hot}(y^*);$  (23b)
$\hat d_\lambda := \arg\min_d \sum_y \pi'_y l(y, d) = \arg\min_d \Big( \sum_y \pi_y l(y, d) + \lambda l(y^*, d) \Big);$  (23c)
$\nabla\pi := -\frac{1}{\lambda}\big[ l(\cdot, \hat d) - l(\cdot, \hat d_\lambda) \big].$  (23d)
This is the form we present in (17). Finally, observe that the gradient $\nabla\pi$ in (23) matches the gradient of $L^-_\lambda$ as defined in (18). Therefore minimizing $L^-_\lambda$ is equivalent to the method of Vlastelica et al. (2019). Next we give a very simple proof of the upper / lower bound property of $L^\pm_\lambda$ (it is extensible to the general combinatorial solver case as well).
Proposition 3. The direct loss $L^-_\lambda$ is a lower bound on the empirical loss $L^E$ and $L^+_\lambda$ is an upper bound.
Proof. We assume that all losses are non-negative (w.l.o.g.) and show the bound property for a given training sample $(x^*, y^*)$. Let $f(d) = \sum_y p(y|x^*)\, l(y, d)$ and let $\hat d = \arg\min_d f(d)$. Using the inequality
$\min_d \Big[ \sum_y p(y|x^*)\, l(y, d) + \lambda l(y^*, d) \Big] \le f(\hat d) + \lambda l(y^*, \hat d)$  (24)
in $L^-_\lambda$, all terms cancel except for $l(y^*, \hat d)$, giving $L^-_\lambda \le L^E$. Similarly, using the inequality
$-\min_d \Big[ \sum_y p(y|x^*)\, l(y, d) - \lambda l(y^*, d) \Big] \ge -f(\hat d) + \lambda l(y^*, \hat d)$  (25)
in $L^+_\lambda$, all terms cancel except for $l(y^*, \hat d)$, giving $L^+_\lambda \ge L^E$.
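The specialization (23) is compact enough to state as code. A minimal numpy sketch of the gradient in π (our illustration; the names are placeholders, not the paper's code):

```python
import numpy as np

def direct_loss_grad(pi, L, y_true, lam):
    """Gradient in pi per (23). pi: (K,) predicted class distribution;
    L: (K, D) cost matrix l(y, d); y_true: index of the true label."""
    d_hat = np.argmin(pi @ L)                   # Bayesian decision (23a)
    pi_pert = pi.copy()
    pi_pert[y_true] += lam                      # perturbation (23b)
    d_lam = np.argmin(pi_pert @ L)              # perturbed decision (23c)
    return -(L[:, d_hat] - L[:, d_lam]) / lam   # gradient (23d)
```

The gradient in θ is then obtained by backpropagating this vector through π via the chain rule.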
A.2 DERIVATION OF MARGIN RESCALING
The derivation of the margin rescaling approach in Tsochantaridis et al. (2005) is somewhat obscure. A reasonable starting point is the SVM-like objective with slacks (but without the quadratic penalty on the weights):
$\frac{1}{\lambda} \min_{\xi, \theta} \sum_i \xi_i$  (26)
s.t. $(\forall d)\;\; f_i(d^*_i) \le f_i(d) - \lambda\big(l(y_i, d) - l(y_i, d^*_i)\big) + \xi_i,$
where $f_i(d) = \sum_y p(y|x_i; \theta)\, l(y, d)$, $(x_i, y_i)$ is the $i$-th training example and $d^*_i = \arg\min_d l(y_i, d)$ is the optimal decision for training example $i$. The constraint in this formulation requires that the model loss of the best decision, $f_i(d^*_i)$, must be less than the loss of any other decision $f_i(d)$ by a margin $\lambda(l(y_i, d) - l(y_i, d^*_i))$ proportional to the loss excess of the respective decision. A violation of this constraint is penalized by a slack $\xi_i$, and the goal is to minimize the total slack. Notice that the constraint ensures that the slack is non-negative, because for $d = d^*_i$ all terms except $\xi_i$ vanish. Solving for the optimal $\xi_i$ in each summand, we obtain that the summand $i$ can be expressed as
$L^{MR}_\lambda = \frac{1}{\lambda} \max_d \Big( f_i(d^*_i) - f_i(d) + \lambda\big(l(y_i, d) - l(y_i, d^*_i)\big) \Big)$  (27)
$\phantom{L^{MR}_\lambda} = \frac{1}{\lambda} \Big( f_i(d^*_i) - \min_d \big( f_i(d) - \lambda(l(y_i, d) - l(y_i, d^*_i)) \big) \Big).$  (28)
Finally, under the assumption that the costs $l$ are non-negative and that $l(y_i, d^*_i) = 0$ (which can be made without loss of generality), we obtain formulation (19).
Proposition 4. Margin re-scaling $L^{MR}_\lambda$ coincides with the direct loss $L^+_\lambda$ in the region where the classifier makes correct decisions. Furthermore, $L^E \le L^+_\lambda \le L^{MR}_\lambda$.
Proof. The inequality $L^E \le L^+_\lambda$ was already shown in Proposition 3. The rest of the proof is simple once the two approaches are written in the respective forms we have shown:
$L^+_\lambda = \frac{1}{\lambda}\Big( \min_d f(d, x^*) - \min_d \big[ f(d, x^*) - \lambda l(y^*, d) \big] \Big),$  (29a)
$L^{MR}_\lambda = \frac{1}{\lambda}\Big( f(d^*, x^*) - \min_d \big[ f(d, x^*) - \lambda l(y^*, d) \big] \Big).$  (29b)
Let us verify that $L^{MR}_\lambda \ge L^+_\lambda$. Since the term $-\min_d [ f(d, x^*) - \lambda l(y^*, d) ]$ is common to both, the inequality follows trivially from
$f(d^*, x^*) \ge \min_d f(d, x^*).$  (30)
The remaining claim of the proposition is also trivial: if the decision made by the classifier is correct, i.e., the optimal one, then (30) holds with equality.
B EXPERIMENT DETAILS
B.1 CROSS-VALIDATION PROCEDURE
Given a subset of data available for calibration (in the current calibration-test split), we create 10 folds for the internal cross-validation. We use stratified folds to maintain the class balance. In each fold we have 9/10 of the data for the optimization of the calibration parameters and 1/10 for the validation of hyperparameters. The hyperparameters corresponding to the best average risk over the 10 folds are selected. We perform selection of the following hyperparameters: the learning rate α for all methods; λ and β for Direct Loss with the smooth minimum. The chosen λ values are then multiplied by 1/κ, where κ is the maximum value of the loss function; this normalizes for the scale of the loss, making λ scale-invariant. The search grids for the different methods are shown in Table B.1.
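A minimal sketch of this nested cross-validation loop (our paraphrase of the procedure; the helper signatures and the grid are placeholders, not the paper's code):

```python
import numpy as np
from itertools import product
from sklearn.model_selection import StratifiedKFold

def nested_cv_select(scores, labels, calibrate, risk, grid):
    """Pick hyperparameters by average risk over 10 stratified folds.
    calibrate(scores, labels, **hp) fits the calibration parameters;
    risk(params, scores, labels) evaluates the empirical risk."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    best_hp, best_risk = None, np.inf
    for hp in (dict(zip(grid, v)) for v in product(*grid.values())):
        fold_risks = []
        for tr, va in skf.split(scores, labels):
            params = calibrate(scores[tr], labels[tr], **hp)
            fold_risks.append(risk(params, scores[va], labels[va]))
        if np.mean(fold_risks) < best_risk:
            best_hp, best_risk = hp, np.mean(fold_risks)
    return best_hp
```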
B.2 FUNGI EXPERIMENT
The trained neural network for mushroom classification (Picek et al., 2022) is adapted to our decision problem (deciding the edibility of the mushrooms) as follows. There are 1604 species in the dataset, out of which we found and annotated edibility information (6 categories) for 203 species. After this procedure the distribution of species becomes uneven, as shown in Fig. B.1. In particular, deadly poisonous mushrooms are relatively rare. We adapted the ResNet-50 network from (Picek et al., 2022) as follows. From the probability vector over species produced by the model we compute the probability vector over edibility states by marginalization. The accuracy of the model in classifying these 6 states was 91%. We then consider a decision problem with 6 states and 2 decisions (accept or not for cooking). We designed a realistic loss function, shown in Fig. 2 top-right. The calibration-test splits were created by using 15 stratified folds of the test set and adding the validation set of the training to the calibration set. For this decision task, we are no longer interested in the accuracy of the classification, but in the expected loss, i.e., the risk shown in Fig. 2 left.
B.3 HAM10000 EXPERIMENT
We tried to follow the setup of Zhao et al. (2021) in order to allow for an indirect comparison (a direct comparison is not feasible at the moment: we evaluate only parametric calibration methods, and the code and some details of their method are not available to us). In particular, we used the same data split and network, and we also tried to evaluate the gap between the model-estimated (empirical) risk and the true empirical risk. We trained a resnet121 model for 100 epochs on 75% of the data. All lesions having multiple views in the dataset were used for training. The remaining 25% consisted of independent instances, each with one view only. The training achieved a validation accuracy of 90% (the validation set was not used for choosing hyperparameters, only to report this number). The 25% of the data not used for training we split randomly into 15% for calibration and 10% for test. All splits were stratified (preserving the class balance). This results in a test set size of 1015 data points (in each split). Fig. 3 shows the statistical analysis over 40 splits. As each split requires a calibration (with the nested cross-validation procedure), collecting more statistics is difficult. In our cost matrix we tried to closely replicate the values depicted in Zhao et al. (2021, Fig. 1) (motivated by medical domain knowledge) by matching the colors in the image and the color bar. We added a constant in each row to make all losses non-negative. This affects neither the Bayesian decision strategy nor the differences between any two risks. Pairwise comparisons for the TS and VS parametrizations, complementing Fig. 3, are shown in Fig. B.3. All kernel density estimates shown are computed with awkde (Adaptive Width KDE with Gaussian Kernels, https://github.com/mennthor/awkde; Wang & Wang, 2007) using the default Silverman adaptive method. The calibration has a positive effect in these cases as well; however, the advantage for the VS parametrization appears to be on the side of NLL.
B.4 CIFAR-10 EXPERIMENT
In this experiment we used the CIFAR-10 dataset. The data splitting and calibration protocol were the same as in the fungi experiment. We trained an EfficientNet-B0 that achieved a validation accuracy of 94.7%.
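To make the setup of B.4 (and Section 5.3) concrete, the following sketch reconstructs the kind of cost matrix described there and shows the decision shift; the exact matrix is our assumption, as the paper specifies it only in words:

```python
import numpy as np

K = 10; TRUCK = 9                  # CIFAR-10 classes; 'truck' is index 9
L = np.ones((K, K)) - np.eye(K)    # default: every mistake costs 1
L[TRUCK, :] = 10_000.0             # mistaking a true truck is dangerous
L[TRUCK, TRUCK] = 0.0

pi = np.full(K, 0.1)               # even a 10% truck probability ...
pi_risk = pi @ L                   # expected cost of each decision
print(pi_risk.argmin() == TRUCK)   # ... already forces decision 'truck'
```

This is exactly the regime in which mistakes on true trucks become rare but, when they occur, dominate the empirical risk of a whole split.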
1. What is the main contribution of the paper regarding calibration in decision-making?
2. What are the strengths and weaknesses of the proposed method for achieving distribution calibration?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the paper's analysis, experiments, or conclusions?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This work takes the decision-making view of calibration and conducts a theoretical analysis of calibration in the context of decision costs (losses) and the risk of a decision strategy (the optimum of the distributional prediction w.r.t. the loss). Motivation for the problem formulation is drawn from Zhao et al. (2021), but the requirement of "decision calibration" is relaxed by assuming a single, fixed cost function. Under several assumptions, the authors show that minimizing a proper scoring rule is equivalent to minimizing distributional miscalibration. The property of a distribution-calibrated model is that the model's predicted risk of a decision strategy equals the true risk of the strategy. They further show that the true risk of the model-based Bayesian strategy is a proper scoring rule, and they thus propose optimizing the empirical risk of this strategy as a means of achieving distribution calibration under a fixed cost function.
Strengths And Weaknesses
Strengths:
- The paper is coherent and the analysis looks good. This is an interesting follow-up to the concept of decision calibration proposed by Zhao et al. (2021), and I believe it is a highly relevant problem as the community moves towards discussing the utility of calibration, rather than simply the objective of calibration.
- I actually appreciate the candidness of the experiments section: it clearly shows that the proposed method isn't perfect and has pitfalls. This could be considered a weakness as well, but the authors laid it out, which is rather rare in the publication culture of ML.
Weaknesses:
- While the paper is overall well-written, it is rather difficult to follow the train of thought and the process through which the conclusions are made and threaded together into a method. It would be great if the authors could add high-level summaries that better connect the sections.
- For the experiment in Section 5.2, is it true that optimizing the calibration parameters w.r.t. the NLL criterion actually performs better in terms of the evaluation metric (empirical true risk)?
- For the experiments in 5.3, why does no calibration look like it's performing the best?
- The experiments are not extensive, as only a single cost matrix is used in each case. The paper would be more convincing with a variety of cost matrices.
- What is the experiment section actually trying to show overall? It seems to make the case that the proposed method (DirectLoss) lowers the true risk of the distributional predictions. First, I don't think this is all that surprising, since DirectLoss considers the cost matrix directly in the optimization while the baselines do not. Second, based on all of the analysis leading up to the experiments, I was expecting to see distribution calibration as the evaluation metric, e.g., by taking the difference between the true risk and the model-based risk. I see that this is shown once in Figure 3(a), but I would imagine it makes sense for distribution calibration to be the main metric?
- Is the proposed method limited to optimizing recalibration methods (the transformation T) which are invertible?
Clarity, Quality, Novelty And Reproducibility
The writing is overall clear and the paper is of overall high quality. However, there seem to be some errors in the writing. Two lines below equation (9): "but is guaranteed not to decrease all losses...": shouldn't the statement be the opposite? I also think there is a flaw in the notation of some critical definitions early in the paper which hinders comprehension of the rest of the paper.
For equation (7), based on your notation, I take the quantity $f(d, x)$ as: "the expected loss of taking a fixed decision $d$ given a fixed input $x$, where the expectation is according to the predictive model π." Then you write $\hat R[q] = \mathbb{E}[f(d, x)]$ and call this the model-based risk of strategy $q$. First of all, what is the randomness that the expectation is taken over? $f(d, x)$ is not random, as both $d$ and $x$ are fixed and the randomness in the label $Y$ has been averaged over in the definition of $f(d, x)$. Further, based on this definition, how is this the model-based risk of a decision strategy $q$? There is no stated connection between $q$ and $d$ in this definition.
ICLR
Title
Calibration for Decision Making via Empirical Risk Minimization
Abstract
Neural networks for classification can achieve high accuracy, but their probabilistic predictions may not be well-calibrated, in particular overconfident. Various general calibration measures and methods have been proposed. But how exactly does calibration affect downstream tasks? We derive a new task-specific definition of calibration for the problem of statistical decision making with a known cost matrix. We then show that calibration so defined can be rigorously improved by minimizing the empirical risk in adjustment parameters such as temperature. For the empirical risk minimization, which is not differentiable, we propose improvements to and an analysis of the direct loss minimization approach. Our experiments indicate that task-specific calibration can perform better than a generic one. But we also carefully investigate weaknesses of the proposed tool and issues in the statistical evaluation for problems with highly unbalanced decision costs.
1 INTRODUCTION
The notion of calibration originates in forecasting, in particular in meteorology. The following example illustrates the concept well. Amongst all days when a forecast was made that the chance of rain is 33%, one would expect about a third of them to be rainy and two thirds sunny. If this is the case for all forecasts, the predictor is said to be well-calibrated. We would like the forecaster to be accurate; however, if this is not achievable, we would like it at least to accurately reflect what it does not know, i.e., to be calibrated. The most common model for classification in deep learning consists of a softmax predictive distribution for class labels atop a deep architecture processing the observation. In view of the excessive number of parameters, there is a natural concern as to whether such models learn accurate predictive probabilities p(y|x). Indeed, neural networks are typically not well calibrated (Guo et al., 2017); in particular, they may be over-confident, i.e., incorrect much more often than their high confidence suggests. As in the example above, such over-confidence is misleading for interpreting the results. It is also commonly understood that it can mislead any downstream processing relying on the predictive probabilities. There has therefore been a substantial effort to improve calibration. Unfortunately, even measuring the basic confidence calibration accurately in practice remains challenging (Nixon et al., 2019). In the multi-class setting, a vector of class probabilities is output and all of them can be important for downstream processing. This has led to the development of more complex definitions such as distribution calibration (Vaicenavicius et al., 2019). In this setting, despite the development of new estimators (Vaicenavicius et al., 2019; Widmann et al., 2019), it is practically infeasible to obtain a reliable estimate: there is not enough data to reject the hypothesis that the model is well calibrated. Are we lost, then, in the attempt to make the predictive probabilities reliable? Not necessarily. Observe that these calibration definitions, both simple and complex, do not consider a specific downstream problem that could be affected by poor probabilistic predictions. They try to address all such problems (as well as the purpose of interpretability) at once.
We will argue that considering a specific downstream task allows one to significantly reduce the complexity of the calibration problem, making it feasible in practice. As a specific downstream task we will consider Bayesian decision making with a trained NN. It is well-established to train NNs for classification by optimizing the cross-entropy loss. The model architecture and the training pipeline are tuned by researchers to achieve the best generalization w.r.t. classification accuracy. However, this procedure does not take into account the different costs of different classification mistakes. To give an example, mistakes in misclassifying mushrooms may have different costs for eating: some mistakes are between equally good species and incur no cost, while other mistakes carry a risk of poisoning. Given a finite set of decisions D and a cost matrix l(y, d), one could try to adapt the learned NN model to form the Bayesian decision strategy q(x) achieving the smallest risk:
$q(x) = \arg\min_d \sum_y p(y|x)\, l(y, d).$  (1)
Such an adaptation is practically desirable: it would allow one to rely on the established and tuned training approach and to reuse existing models, which may be very costly to retrain from scratch. However, because the model p(y|x) is typically inaccurate in predicting all the class probabilities, this may lead to suboptimal decisions and, respectively, poor outcomes of such an adaptation. In particular, an overconfident model will result in both making sub-optimal decisions and underestimating their expected cost. One could reasonably hope to learn a more accurate predictor by designing a yet better architecture and using more training data. It may nevertheless stay poorly calibrated and unsuitable for the above adaptation. If strong distribution calibration were achievable as a post-processing step, the adaptation would work perfectly; however, this calibration is practically infeasible. On the other hand, any weaker task-unspecific definition of calibration (e.g., class-wise calibration, Vaicenavicius et al. 2019) would not work: there would exist a decision task for which (1) would perform poorly. A formalization of calibration important for (a class of) decision-making problems was proposed only recently by Zhao et al. (2021). As our main theoretical result, we derive a notion of calibration important for adopting strategy (1) with a given cost matrix. The formalism and tools used for that also allow us to better understand the relation between existing calibration methods, in particular the empirically successful temperature scaling variants (Guo et al., 2017; Alexandari et al., 2020), and distribution calibration. Specifically, we show that these methods are guaranteed to improve the expected miscalibration as measured by the corresponding divergence. Calibrating the model w.r.t. the new definition is shown to be equivalent to minimizing the (empirical) risk of strategy (1) in the calibration parameters. As a means of empirical risk minimization we study the direct loss approach and relate it to margin rescaling. Experimentally, we show that task-aware calibration, using the direct loss approach, can outperform generic calibration. However, we also observe that for tasks with extremely unbalanced losses modeling a dangerous class, we lack reliable means to assess the quality of calibration.
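The adaptation in (1) is a one-liner given a matrix of predicted probabilities and the cost matrix; a minimal numpy sketch (our illustration, using the convention L[y, d] = l(y, d)):

```python
import numpy as np

def bayes_decision(probs, L):
    """probs: (N, K) predicted class probabilities p(y|x);
    L: (K, D) cost matrix l(y, d). Returns the decision per sample, (1)."""
    risks = probs @ L             # (N, D): model-based conditional risks
    return risks.argmin(axis=1)  # pick the decision with the lowest risk
```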
2 RELATED WORK
We consider as calibration methods all methods that can improve the predictive model p(y|x) when provided with some additional calibration data. Typically, calibration is achieved by a post-processing of the scores or predictive probabilities, such as temperature scaling, bias-corrected temperature scaling or vector scaling. We regard these as different choices of parametrization, i.e., choices of the degrees of freedom to calibrate. Most importantly, existing calibration methods differ in the criterion they optimize.
Calibration Unaware of the Task. Many calibration techniques, while motivated by the notion and a particular definition of calibration, use a generic criterion unrelated to that notion. A very practical method to calibrate a model turns out to be likelihood maximization, i.e., relying on the same criterion that is commonly used for training. This is the approach taken in Guo et al. (2017); Alexandari et al. (2020); Kull et al. (2019). Methods optimizing variants of the expected calibration error (ECE), which is a measure of miscalibration, were compared by Nixon et al. (2019); however, there is no performance criterion other than ECE itself. Further variants are piece-wise parametric (Kumar et al., 2019) and kernel-based (Kumar et al., 2018) confidence calibration methods.
Calibration Aware of the Task. The decision calibration of Zhao et al. (2021) takes the decision problem into consideration. It will be discussed in detail below. Their calibration method is designed for a given decision space, but cannot make use of a specific cost matrix even if it is known at calibration time.
Empirical Risk Minimization. Methods optimizing the empirical risk (Song et al., 2016; Vlastelica et al., 2019; Taskar et al., 2005) were used for training with complex objectives measuring performance in retrieval, ranking, or structured prediction. They have not been considered for the calibration of classification models before. They address the difficulty of the non-differentiability of the loss and have the potential to exploit the full information in the cost matrix.
3 BACKGROUND
Let X be the space of observations and Y the set of labels. Assume there is an underlying true joint probability distribution on X × Y, denoted p∗. Let (X, Y) be a pair of random variables with law p∗. All expectations and probabilities will be meant with respect to (X, Y). Let ∆ denote the simplex of probabilities over Y. Let p(y|x) be a probabilistic predictor, usually a neural network with a softmax output. The predictor can be considered as a mapping $\pi : X \to \Delta,\; x \mapsto p(y|x)$.
3.1 GENERAL CALIBRATION
The first works analyzing calibration in machine learning (Guo et al., 2017) were concerned only with the confidence of the model, i.e., the model's probability of the class actually predicted. Let $\hat y(x) = \arg\max_k \pi_k(x)$ be the label predicted by the model and $c(X) = \max_k \pi_k(X)$ the respective predictive probability, called the confidence.
Definition 1. The model is confidence calibrated if
$P\big( Y = \hat y(X) \mid c(X) \big) \stackrel{a.s.}{=} c(X).$  (2)
It requires that, amongst all data points for which the prediction has confidence c, the expected occurrence of the true label matches c. The respective miscalibration can be measured, e.g., by the Expected Calibration Error (ECE) (Degroot & Fienberg, 1983), which is typically estimated by discretizing the probability interval into bins. Substantial efforts were put into calibrating neural networks in this sense (e.g., Naeini et al. 2015; Guo et al. 2017; Nixon et al. 2019). However, Kumar et al. (2019) argue that binning underestimates the calibration error and that, in fact, an accurate estimation is possible only when the predictor π outputs a discrete set of values.
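For reference, a standard binned ECE estimator in the sense of Definition 1 (a common textbook construction, not the specific estimator of any cited paper):

```python
import numpy as np

def binned_ece(probs, labels, n_bins=15):
    """probs: (N, K) predicted probabilities; labels: (N,) true labels."""
    conf = probs.max(axis=1)                  # confidence c(x)
    correct = probs.argmax(axis=1) == labels  # indicator Y == yhat(x)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - conf[in_bin].mean())
            ece += in_bin.mean() * gap        # weight by bin mass
    return ece
```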
In the multi-class setting, confidence calibration may be insufficient. There may be downstream tasks which require the whole vector of predicted probabilities to be accurate. In the machine learning literature this has received attention only recently (Vaicenavicius et al., 2019). The strongest notion of calibration (Bröcker 2009, reliability, Eq. 1) is as follows.
Definition 2. A predictor $\pi : X \to \Delta$ is called distribution calibrated if
$(\forall y \in Y)\;\; P\big( Y = y \mid \pi(X) \big) \stackrel{a.s.}{=} \pi(X)_y.$  (3)
In words: amongst all data points of the input space where the predicted vector of probabilities is $\pi(x) = \mu$, the true class labels should be distributed as $\mu$. Respectively, the predictor
$\varphi[\pi](x)_y = P\big( Y = y \mid \pi(X) = \pi(x) \big)$  (4)
is the (optimal) calibration of π: it takes the prediction π(x) and turns it into the true distribution of labels under that initial prediction. The predictor φ[π] is distribution calibrated, and Definition 2 can be restated as $\varphi[\pi](X) \stackrel{a.s.}{=} \pi(X)$, i.e., the calibration of π is π itself. Generalizing ECE, the expected miscalibration of π w.r.t. a divergence $D : \Delta \times \Delta \to \mathbb{R}$ is
$\mathbb{E}\big[ D(\pi(X), \varphi[\pi](X)) \big],$  (5)
i.e., the average divergence between the predicted distribution and its calibration. It is hard to estimate in practice because of the conditioning on a real vector $\pi(X) = \pi(x)$ in the definition of φ[π]. It becomes tricky to verify whether a model is calibrated using only a finite data sample. Different methods have been proposed based on binning of ∆ (Vaicenavicius et al., 2019) or on kernel-based divergences (Widmann et al., 2019). Unfortunately, statistical tests based on unbiased estimates (Widmann et al., 2019) were unable to reject the hypothesis that a basic neural network in a real setting, such as on MNIST data, is already calibrated. No calibration methods have been proposed based on this notion of miscalibration.
3.2 CALIBRATION FOR STATISTICAL DECISION MAKING
Let us consider the classical statistical decision-making problem for a known model p∗. Let D be a finite decision space. Consider a cost matrix $l : Y \times D \to \mathbb{R}_+$ and a decision strategy $q : X \to D$. The risk of the strategy q and the optimal (Bayesian) decision strategy are, respectively:
$R^*[q] = \mathbb{E}\big[ l(Y, q(X)) \big], \qquad q^*(x) = \arg\min_d \sum_y p^*(y|x)\, l(y, d).$  (6)
In practice, we do not have access to the true distribution p∗ to make decisions, only to the model p(y|x). Let $f(d, x) = \sum_y p(y|x)\, l(y, d)$ denote the model-based conditional risk for observation x. The model-based risk of a strategy q and the model-based Bayesian decision strategy are, respectively:
$\hat R[q] = \mathbb{E}\big[ f(q(X), X) \big], \qquad \hat q(x) = \arg\min_d f(d, x).$  (7)
If the model p were distribution-calibrated, these two risks would coincide: $\hat R[q] = R^*[q]$ for any strategy q and any cost matrix (Zhao et al., 2021). However, as discussed above, distribution calibration is hard even to measure in practice. This has led to the following definition.
Definition 3 (Zhao et al. 2021). For a set of cost matrices L and a set of strategies Q, the predictor π is called (L, Q)-decision calibrated if for all l ∈ L and q ∈ Q the model risk matches the true risk: $\hat R[q] = R^*[q]$.
Zhao et al. (2021) show that this definition generalizes previous notions of calibration by specifying the corresponding statistical decision problems. In particular, confidence calibration corresponds to recognition with a reject option with a varied cost of rejecting. Distribution calibration can be understood as (L, Q)-decision calibration for all possible loss functions and decision strategies over all possible decision spaces D, which is clearly far too general. It follows from the definition that a decision-calibrated model must accurately estimate the true risk and that the model-based strategy q̂ is the optimum of the true risk over Q. Therefore, (L, Q)-decision calibration is sufficient for any statistical decision-making task with l ∈ L and q ∈ Q.
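On a finite sample, the two risks in (6) and (7) can be estimated side by side, which also makes the risk gap discussed in the experiments concrete. A minimal sketch (our illustration, with the same cost-matrix convention as above):

```python
import numpy as np

def risks(probs, labels, L):
    """Empirical true risk and model-based risk of the strategy q-hat."""
    f = probs @ L                          # conditional risks f(d, x)
    q_hat = f.argmin(axis=1)               # model-based decisions, (7)
    n = np.arange(len(labels))
    true_risk = L[labels, q_hat].mean()    # estimate of R*[q-hat], (6)
    model_risk = f[n, q_hat].mean()        # estimate of R-hat[q-hat], (7)
    return true_risk, model_risk           # their difference is the gap
```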
4 METHOD
The condition of (L, Q)-decision calibration (Zhao et al., 2021) is still unnecessarily stringent if we have a specific fixed cost matrix l and are interested in the performance of only one particular decision strategy: the model-based Bayesian strategy q̂, (7). Their calibration algorithm is derived under the assumption that L is the set of all cost matrices of bounded norm over a fixed decision space and thus cannot be chosen as, e.g., L = {l}. We will show that minimizing the risk of the model-based strategy, R∗[q̂], can improve a precise measure of calibration under a known cost matrix, while also obviously not compromising the task-specific performance metric, which is the risk R∗[q̂] itself.
4.1 CALIBRATION VIA LOSS MINIMIZATION
Bröcker (2009) showed that any loss function corresponding to a proper scoring rule satisfies a decomposition into uncertainty, resolution (sharpness) and reliability (miscalibration). A scoring rule S is a function ∆ × Y → R, and the expected score, which we call the loss for brevity, is L[π] = E[S(π(X), Y)]. For example, the negative log-likelihood loss (NLL) corresponds to the scoring rule S(π, y) = − log πy. The decomposition reads
$L[\pi] = \underbrace{H(\bar\pi)}_{\text{uncertainty of } Y} - \underbrace{\mathbb{E}\big[ D(\bar\pi, \varphi[\pi](X)) \big]}_{\text{resolution of } \pi} + \underbrace{\mathbb{E}\big[ D(\pi(X), \varphi[\pi](X)) \big]}_{\text{reliability of } \pi},$  (8)
where π̄ is the a priori distribution of labels, $\bar\pi_y = p^*(y)$, φ[π] is the calibration of π (4), and H and D are the particular entropy and divergence functions corresponding to the score S. In the case of NLL, they are the Shannon entropy and the Kullback-Leibler divergence. Prominently, the reliability term in this decomposition is exactly the expected miscalibration (5) w.r.t. the score-specific divergence D. If we substitute φ[π] as a predictor, we find that it has zero expected miscalibration while the first two terms remain the same:
$L[\varphi[\pi]] = H(\bar\pi) - \mathbb{E}\big[ D(\bar\pi, \varphi[\pi](X)) \big] \le L[\pi],$  (9)
where the equality uses the fact that φ[φ[π]] = φ[π] and the inequality is due to the divergence being always non-negative. Thus φ[π] not only achieves distribution calibration but is also guaranteed not to increase any loss corresponding to a proper scoring rule. This sheds some light on why optimizing NLL is good for calibration, as evidenced, e.g., by Guo et al. 2017; Alexandari et al. 2020, in particular improving ECE. Calibration methods often fit a parametric post-processing of a predictor π, such as temperature scaling (Guo et al., 2017). They argue about calibration but optimize NLL. We formally show why this is a perfectly correct idea.
Theorem 1. Let π : X → ∆ be a predictor and Tθ : ∆ → ∆ a parametric mapping, invertible for each θ ∈ Θ. Finding an adjusted predictor πθ = Tθ ◦ π minimizing the expected miscalibration is equivalent to minimizing the loss:
$\min_{\theta \in \Theta} \mathbb{E}\big[ D(\pi_\theta, \varphi[\pi_\theta]) \big] = \min_{\theta \in \Theta} L[\pi_\theta].$  (10)
Proof. First we show that φ[T ◦ π] is invariant to T for any invertible T. The events T(π(X)) = T(π(x)) and π(X) = π(x) coincide, therefore
$\varphi[T \circ \pi](x)_y = P\big( Y = y \mid T(\pi(X)) = T(\pi(x)) \big) = \varphi[\pi](x)_y.$  (11)
It follows that D(π̄, φ[T ◦ π](X)) = D(π̄, φ[π](X)). Therefore the first two terms of the decomposition stay the same for any θ, and minimizing the whole loss over θ ∈ Θ is equivalent to minimizing the reliability term alone.
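As an aside, the standard practice that Theorem 1 justifies, fitting a temperature by NLL on held-out data, can be sketched as follows (our simplification, using a 1-D grid search instead of a gradient-based optimizer):

```python
import numpy as np

def fit_temperature(scores, labels, grid=np.linspace(0.1, 5.0, 200)):
    """Pick T minimizing the NLL of softmax(scores / T)."""
    def nll(T):
        z = scores / T
        z = z - z.max(axis=1, keepdims=True)              # stabilize
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(labels)), labels].mean()
    return min(grid, key=nll)
```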
The events T (π(X)) = T (π(x)) and π(X) = π(x) are equal, therefore φ[T ◦ π](x)y = P ( Y=y | T (π(X))=T (π(x)) ) = φ[π](x)y. (11) It follows that D(π̄, φ[T ◦ π](X)) = D(π̄, φ[π](X)). Therefore the first two terms of the decomposition stay the same for any θ. Therefore minimizing the whole loss over θ ∈ Θ is equivalent to minimizing the reliability term alone. This allows to overcome the general difficulty of estimating the expected miscalibration by simply using the empirical estimate of the loss! In particular, no binning of the simplex ∆ is involved. 4.2 DECOMPOSITION OF THE RISK We observe that the true risk of the model-based strategy R∗[q̂] also corresponds to a proper scoring rule and thus can be decomposed according to the theory. Proposition 1. The following scoring rule corresponds to the loss of the model-based decision: S(π, y) = l(y, arg mind ∑ y πyl(y, d)). (12) For two probability distributions π, ρ in ∆, Bröcker (2009) defines the following scoring function s, divergence D and entropy H: s(π, ρ) = ∑ y S(π, y)ρy; D(π, ρ) = s(π, ρ)− s(ρ, ρ); H(ρ) = s(ρ, ρ). (13) In our case, the score s(π(x), p∗(·|x)) is the conditional risk of the prediction q̂(x) and its expectation is the risk of the strategy q̂: E[S(π(X), Y )] = R∗[q̂]. Proposition 2. The score s is proper (Bröcker, 2009), i.e., the “divergence” D is non-negative. Proof. By definition, s(ρ, ρ) = ∑ y S(ρ, y)ρy = ∑ y ρyl(y, arg min d ∑ y ρyl(y, d)) = mind ∑ y ρyl(y, d). (14) Clearly it satisfies s(ρ, ρ) ≤ ∑ y ρyl(y, d̂) for any d̂, in particular d̂ = arg mind ∑ y πyl(y, d). Corollary 1. The decomposition (8) holds for the risk R∗[q̂]. The uncertainty term H(π̄) = mind ∑ y p ∗(y)l(y, d) is just the lowest risk attainable without considering observations. Let us discuss the reliability term. In our case D is not a true divergence as it may vanish even if the two distributions are different. The reliability term is therefore more permissive. This is appropriate, indeed, if e.g., the cost matrix has two identical rows, there is no need to distinguish the respective classes in the prediction and, respectively, no need to have the correct individual predictive probabilities for them. This motivates us to define the task-specific calibration accordingly: Definition 4. Given a cost matrix l and “divergence” Dl defined by (13), a predictor π(X) is ldecision calibrated if Dl ( π(X), φ[π](X) ) a.s. = 0. (15) For any proper divergence, this definition would be equivalent to the distribution calibration in Definition 2. The selectivity of Dl in penalizing differences in the distribution which matter for the decision task is what makes it task-specific. It appears hard to estimate this miscalibration in general as it still involves φ[π]. However, using Theorem 1 we can improve this task-specific calibration in parametric settings (e.g. temperature scaling) by simply minimizing the empirical risk of q̂. 4.3 EMPIRICAL RISK MINIMIZATION Consider a parametric predictor π(x)y = p(y|x; θ) and let f(d, x; θ) = ∑ y p(y|x; θ)l(y, d) as before. Given a sample (xi, yi)Ni=1 from p ∗ the empirical risk minimization for model-based Bayesian strategy reads: minθ 1 N ∑ i l(yi, di) s.t. di = arg min d f(d, xi; θ). (16) This problem is difficult because it is a so called bi-level optimization problem which has discrete decision of the inner problem and a non-linear dependence on θ. Such formulations, where the inner problem corresponds to a general predictor based on solving a combinatorial optimization problem have been studied. 
4.3.1 DIRECT LOSS AND MARGIN RESCALING
The empirical risk can be easily evaluated but cannot be differentiated because of the arg min. This arg min over the set of decisions can be considered as a small combinatorial solver. We specialize and analyze the direct loss method (Song et al., 2016; Vlastelica et al., 2019) for this case. For simplicity, let us consider a single training sample (x∗, y∗) (with multiple training samples, the losses and gradients sum up). Let us denote the vector of class probabilities π = p(·|x∗; θ). The estimate of the gradient in π according to the direct loss minimization approach is constructed as follows:
$\hat d = \arg\min_d f(d, x^*); \qquad \hat d_\lambda = \arg\min_d \big[ f(d, x^*) + \lambda\, l(y^*, d) \big]; \qquad \hat\nabla\pi := \tfrac{1}{\lambda}\big[ l(\cdot, \hat d_\lambda) - l(\cdot, \hat d) \big].$  (17)
Appendix A.1 gives details on how this is obtained from the general method of Vlastelica et al. (2019). The gradient in θ can then be computed by the chain rule. Here d̂ is the solution of the solver (the Bayesian decision) and d̂λ is the decision of a perturbed problem. The strength of the perturbation is controlled by λ. Song et al. (2016) have shown that in the limit λ → 0 the gradient of the expected loss over a continuous data distribution matches E[∇π]. In this limit, stochastic descent with ∇π would directly minimize the (expectation of the non-differentiable) loss, which was termed direct loss minimization. However, these arguments are not applicable to a finite training sample. In practice, λ needs to be sufficiently large for ∇π to be non-zero for at least some data points. In this setting we are no longer minimizing the original loss. However, one can define a surrogate loss function such that (17) is its true gradient. We call it the direct loss, so the method can now be validly interpreted as direct loss minimization:
$L^\pm_\lambda = \pm\frac{1}{\lambda}\Big( \min_d f(d, x^*) - \min_d \big[ f(d, x^*) \mp \lambda\, l(y^*, d) \big] \Big),$  (18)
where ∓ is paired with ±. Vlastelica et al. (2019) advocate the use of a large λ, define a surrogate loss similar to $L^-_\lambda$ and show that it is a lower bound on the empirical loss for positive λ (Observation 3), where the empirical loss is $L^E = l(y^*, \arg\min_d f(d, x^*))$. Note that $L^\pm_{-\lambda} = L^\mp_\lambda$, and therefore we can always assume λ > 0 in order to avoid redundancy. We show the following.
Proposition 3. The direct loss $L^-_\lambda$ is a lower bound on the empirical loss $L^E$ and $L^+_\lambda$ is an upper bound.
The proof is given in Appendix A.1. It follows that the expectation over the training data (resp. the true distribution p∗) of $L^+_\lambda$ is an upper bound on the empirical risk (resp. the true risk).
Relation to Margin Rescaling. The problem of minimizing the empirical risk over discrete strategies of the form arg miny f(y; θ) was also studied in structured prediction (Tsochantaridis et al., 2005; Taskar et al., 2005). One of the most common approaches is called margin re-scaling (Tsochantaridis et al., 2005) and has been successfully used in combination with deep networks as well (e.g., Knöbelreiter et al. 2017). Like SVM, it puts a hinge loss on the violation of the classification constraints, with the margin proportional to the respective loss.
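Proposition 3 can be sanity-checked numerically. The following sketch (our illustration) evaluates the two surrogate losses from (18) together with the empirical loss and asserts $L^-_\lambda \le L^E \le L^+_\lambda$ on random instances:

```python
import numpy as np

def surrogates(pi, L, y_true, lam):
    f = pi @ L                                          # f(d, x*)
    L_E = L[y_true, f.argmin()]                         # empirical loss
    L_minus = (np.min(f + lam * L[y_true]) - f.min()) / lam
    L_plus = (f.min() - np.min(f - lam * L[y_true])) / lam
    return L_minus, L_E, L_plus

rng = np.random.default_rng(0)
for _ in range(1000):
    pi = rng.dirichlet(np.ones(5))       # 5 classes, 7 decisions
    L = rng.random((5, 7))
    lo, mid, hi = surrogates(pi, L, y_true=2, lam=0.5)
    assert lo <= mid + 1e-9 and mid <= hi + 1e-9
```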
1. What is the focus of the paper regarding probabilistic model recalibration?
2. What are the strengths and weaknesses of the proposed surrogate loss and margin rescaling approach?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the paper's contribution compared to prior works?
5. Can the proposed method be adapted to practical scenarios, and how does it compare to other calibration methods?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper considers the problem of calibrating a probabilistic model with respect to a specific cost matrix and decision rule. In particular, the authors propose a surrogate loss that can directly minimise the expected costs, so that the model can be recalibrated without optimising the predicted probabilities.
Strengths And Weaknesses
(+) Combining the decision costs into the classifier calibration procedure raises many interesting problems. The proposed surrogate loss and margin rescaling provide a sound solution for optimising the empirical risk with gradient-based optimisers.
(-) The claimed contribution of linking probability calibration and NLL has been illustrated in previous work (https://www.stat.washington.edu/raftery/Research/PDF/Gneiting2007jasa.pdf, https://link.springer.com/chapter/10.1007/978-3-319-23528-8_5). In particular, the latter paper provides an additional decomposition of any PSR divergence into irreducible loss, grouping loss and calibration loss. That paper shows that minimising the NLL will improve calibration if and only if the grouping loss stays the same. This result corresponds to Theorem 1 in the current paper, where the authors require the map to be invertible and hence don't introduce additional grouping loss.
(-) While the surrogate loss solution is interesting and sound, the motivation for calibrating a model w.r.t. a fixed cost matrix might be less practical in real-life scenarios. One of the critical benefits of calibrating the model at the probability level is that it can then be adapted to any potential cost matrix without retraining the model or the calibrator. This is one of the reasons the work of Zhao et al. (2021) actually considers supporting an unspecified cost matrix in the first place. As a result, the proposed definition of being l-decision calibrated seems to be a weaker notion of calibration compared to the standard definition of multi-class calibration (distribution calibration) and the decision calibration of Zhao et al. (2021).
Clarity, Quality, Novelty And Reproducibility
Clarity (4/5): This paper should be easy to follow for readers with a similar background. Most mathematical notations are clearly defined. Some nested expressions might take some time to follow.
Quality (3/5): This paper reasonably covers existing work in the field of probability calibration. Some related work (as mentioned above) might require further addressing. Experiment-wise, while the current experiments illustrate the benefits of the proposed method, some details need further clarification. (1) The authors suggest one of the baseline methods was trained with ECE, which is a non-differentiable measure; no details are given on how this is achieved. (2) The results are only illustrated via the empirical test risk. Given that the proposed l-decision calibration is a weaker notion than multi-class calibration and is not aiming to provide accurate probabilities, additional measures like the reliability diagram and NLL / Brier score might also be helpful to ensure there are no unwanted side effects of the proposed method.
Novelty (2/5): As discussed above, the novelty of this paper is compromised, as one of its key contributions can be found in earlier work. Also, the framework used to build the surrogate loss seems to be heavily built upon the work of Vlastelica et al. (2019).
2019) Reproducibility (4/5) This paper is quite clear on its method, and experiments are run on public datasets, so there should be no significant issue reproducing the main results. Providing access to the source code should make it better.
ICLR
Title Calibration for Decision Making via Empirical Risk Minimization Abstract Neural networks for classification can achieve high accuracy but their probabilistic predictions may not be well-calibrated; in particular, they may be overconfident. Different general calibration measures and methods have been proposed. But how exactly does calibration affect downstream tasks? We derive a new task-specific definition of calibration for the problem of statistical decision making with a known cost matrix. We then show that so-defined calibration can be rigorously improved by minimizing the empirical risk in adjustment parameters such as the temperature. For the empirical risk minimization, which is not differentiable, we propose improvements to and an analysis of the direct loss minimization approach. Our experiments indicate that task-specific calibration can perform better than a generic one. But we also carefully investigate weaknesses of the proposed tool and issues in the statistical evaluation for problems with highly unbalanced decision costs. 1 INTRODUCTION The notion of calibration originates in forecasting, in particular in meteorology. The following example explains the concept well. Amongst all days when a forecast was made that the chance of rain is 33%, one would expect about a third of them to be rainy and two thirds sunny. If this is the case for all forecasts, the predictor is said to be well-calibrated. We would like the forecaster to be accurate; however, if this is not achievable, we would like it at least to accurately reflect what it does not know, i.e. to be calibrated. The most common model for classification in deep learning consists of a softmax predictive distribution for class labels atop a deep architecture processing the observation. In view of the excessive number of parameters, there is a natural concern whether such models learn accurate predictive probabilities p(y|x). Indeed, neural networks are typically not well calibrated (Guo et al., 2017); in particular, they may be over-confident, i.e., incorrect much more often than their high confidence suggests. As in the example above, such over-confidence is misleading for interpreting the results. It is also commonly understood that it can mislead any downstream processing relying on the predictive probabilities. There has therefore been a substantial effort to improve calibration. Unfortunately, even measuring the basic confidence calibration accurately in practice remains challenging (Nixon et al., 2019). In the multi-class setting, a vector of class probabilities is output and all of them can be important for the downstream processing. This has led to the development of more complex definitions such as distribution calibration (Vaicenavicius et al., 2019). In this setting, despite the development of new estimators (Vaicenavicius et al., 2019; Widmann et al., 2019), it is practically infeasible to obtain a reliable estimate — there is not enough data to reject the hypothesis that the model is well calibrated. Are we lost then in the attempt to make the predictive probabilities reliable? Not necessarily. Observe that these calibration definitions, both simple and complex ones, do not consider a specific downstream problem that could be affected by the poor probabilistic predictions. They try to address all such problems (as well as the purpose of interpretability) at once.
We will argue that considering a specific downstream task allows us to significantly reduce the complexity of the calibration problem, making it feasible in practice. As the specific downstream task we will consider Bayesian decision making with a trained NN. It is well-established to train NNs for classification by optimizing the cross-entropy loss. The model architecture and the training pipeline are tuned by researchers to achieve the best generalization w.r.t. classification accuracy. However, this procedure does not take into account the different costs of different classification mistakes. To give an example, mistakes in misclassifying different mushrooms may have different costs for eating: some mistakes are between equally good species and incur no cost, while other mistakes lead to risks of poisoning. Given a finite set of decisions D and a cost matrix l(y, d), one could try to adapt the learned NN model to form the Bayesian decision strategy q(x) achieving the smallest risk: q(x) = arg min_d ∑_y p(y|x) l(y, d). (1) Such an adaptation is practically desirable: it would allow one to rely on the established and tuned training approach and to reuse existing models, which may be very costly to retrain from scratch. However, because the model p(y|x) is typically inaccurate in predicting all the class probabilities, this may lead to suboptimal decisions and, respectively, poor outcomes of such an adaptation. In particular, an overconfident model will result both in making sub-optimal decisions and in underestimating their expected cost. One could reasonably hope to learn a more accurate predictor by designing a yet better architecture and using more training data. It may nevertheless stay poorly calibrated and not be suitable for the above adaptation. If the strong distribution calibration were possible to achieve as a post-processing step, the adaptation would work perfectly; however, this calibration is not feasible in practice. On the other hand, any weaker task-unspecific definition of calibration (e.g. class-wise calibration, Vaicenavicius et al. 2019) would not work: there would exist a decision task for which (1) would perform poorly. A formalization of calibration important for (a class of) decision making problems was proposed only recently by Zhao et al. (2021). As our main theoretical result, we derive a notion of calibration important for adopting strategy (1) with a given cost matrix. The formalism and tools used for that also allow us to better understand the relation between existing calibration methods, in particular the empirically successful temperature scaling variants (Guo et al., 2017; Alexandari et al., 2020), and distribution calibration. Specifically, we show that these methods are guaranteed to improve the expected miscalibration as measured by the corresponding divergence. Calibrating the model w.r.t. the new definition is shown to be equivalent to minimizing the (empirical) risk of strategy (1) in the calibration parameters. As a means of empirical risk minimization we study the direct loss approach and relate it to margin rescaling. Experimentally, we show that task-aware calibration, using the direct loss approach, can outperform generic calibration. However, we also observe that for tasks with extremely unbalanced losses, e.g. those modeling a dangerous class, we lack reliable means to assess the quality of calibration.
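To make the decision rule (1) concrete, here is a minimal numerical sketch; the three-class cost matrix and the predicted probabilities are invented for illustration and are not taken from the experiments:

```python
import numpy as np

# Hypothetical 3-class, 2-decision problem: decide whether to "eat" (d=0) or
# "discard" (d=1) a mushroom, given p(y|x) over (edible, inedible, poisonous).
l = np.array([[0.0, 1.0],     # edible:    eating costs 0, discarding costs 1
              [5.0, 0.0],     # inedible:  eating costs 5
              [100.0, 0.0]])  # poisonous: eating costs 100
p = np.array([0.90, 0.08, 0.02])  # model prediction p(y|x) for one observation x

f = p @ l                  # expected cost of each decision d
q_hat = int(np.argmin(f))  # the Bayesian decision of Eq. (1)
print(f, q_hat)            # [2.4 0.9] -> decision 1: discard, despite p(edible)=0.9
```

Note how the rare but expensive poisonous class dominates the decision even under a confident prediction of edibility.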
2 RELATED WORK We consider all methods that can improve the predictive model p(y|x), when provided some additional calibration data, as calibration methods. Typically, the calibration is achieved by a postprocessing of the scores or predictive probabilities, such as temperature scaling, bias-corrected temperature scaling or vector scaling. We regard these as different choices of parametrization, i.e., choices of the degrees of freedom to calibrate. Most importantly, existing calibration methods differ in the criterion they optimize. Calibration Unaware of the Task Many calibration techniques, while motivated by the notion and a particular definition of calibration, use a generic criterion unrelated to that notion. A very practical method to calibrate a model turns out to be likelihood maximization, i.e. relying on the same criterion that is commonly used for training. This is the approach taken in Guo et al. (2017); Alexandari et al. (2020); Kull et al. (2019). Methods optimizing variants of the expected calibration error (ECE), which is a measure of miscalibration, were compared by Nixon et al. (2019); however, there is no performance criterion other than ECE itself. Further variants are piece-wise parametric (Kumar et al., 2019) and kernel-based (Kumar et al., 2018) confidence calibration methods. Calibration Aware of the Task The decision calibration of Zhao et al. (2021) takes the decision problem into consideration. It will be discussed in detail below. Their calibration method is designed for a given decision space, but cannot make use of a specific cost matrix even if it were known at calibration time. Empirical Risk Minimization Methods optimizing the empirical risk (Song et al., 2016; Vlastelica et al., 2019; Taskar et al., 2005) were used for training with complex objectives measuring performance in retrieval, ranking, or structured prediction. They have not been considered for the calibration of classification models before. They address the difficulty of the non-differentiability of the loss and have the potential to exploit the full information about the cost matrix. 3 BACKGROUND Let X be the space of observations and Y be the set of labels. Assume there is an underlying true joint probability distribution on X × Y, denoted p*. Let (X, Y) be a pair of random variables with the law p*. All expectations and probabilities will be meant with respect to (X, Y). Let ∆ denote the simplex of probabilities over Y. Let p(y|x) be a probabilistic predictor, usually a neural network with a softmax output. The predictor can be considered as a mapping π : X → ∆, x ↦ p(·|x). 3.1 GENERAL CALIBRATION The first works analyzing calibration in machine learning (Guo et al., 2017) were concerned only with the confidence of the model, i.e. the model's probability of the class actually predicted. Let ŷ(x) = arg max_k π_k(x) be the label predicted by the model and c(X) = max_k π_k(X) the respective predictive probability, called the confidence. Definition 1. The model is confidence calibrated if P(Y = ŷ(X) | c(X)) = c(X) a.s. (2) It requires that, amongst all data points for which the prediction has confidence c, the expected occurrence of the true label matches c. The respective miscalibration can be measured, e.g., by the Expected Calibration Error (ECE) (Degroot & Fienberg, 1983), which is typically estimated by discretizing the probability interval into bins. Substantial efforts were put into calibrating neural networks in this sense (e.g., Naeini et al. 2015; Guo et al. 2017; Nixon et al. 2019).
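As an illustration of the binning estimate just mentioned, here is a minimal sketch of the standard equal-width-binning ECE estimator (our illustration, not the authors' code; the synthetic predictor is invented):

```python
import numpy as np

def binned_ece(conf, correct, n_bins=15):
    """ECE estimate with equal-width confidence bins: sum of per-bin
    |accuracy - mean confidence| gaps, weighted by the bin's sample fraction."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=10_000)                      # confidences c(x)
correct = (rng.uniform(size=10_000) < conf**2).astype(float)   # an over-confident predictor
print(binned_ece(conf, correct))                               # clearly nonzero
```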
However, Kumar et al. (2019) argue that the binning underestimates the calibration error and that, in fact, an accurate estimation is possible only when the predictor π outputs only a discrete set of values. In the multi-class setting, confidence calibration may be insufficient. There may be downstream tasks which require the whole vector of predicted probabilities to be accurate. In the machine learning literature this came to attention only recently (Vaicenavicius et al., 2019). The strongest notion of calibration (Bröcker 2009, reliability, Eq. 1) is as follows. Definition 2. A predictor π : X → ∆ is called distribution calibrated if (∀y ∈ Y) P(Y = y | π(X)) = π(X)_y a.s. (3) In words: amongst all data points of the input space where the predicted vector of probabilities is π(x) = µ, the true observed class labels should be distributed as µ. Respectively, the predictor φ[π](x)_y = P(Y = y | π(X) = π(x)) (4) is the (optimal) calibration of π: it takes the prediction π(x) and turns it into the true distribution of labels under that initial prediction. This predictor φ[π] is distribution calibrated, and Definition 2 can be restated as φ[π](X) = π(X) a.s., i.e. the calibration of π is π itself. Generalizing ECE, the expected miscalibration of π w.r.t. a divergence D : ∆ × ∆ → R is E[D(π(X), φ[π](X))], (5) i.e. the average divergence between the predicted distribution and its calibration. It is hard to estimate in practice because of the conditioning on a real-valued vector π(X) = π(x) in the definition of the calibration φ[π]. It becomes tricky to verify whether a model is calibrated using only a finite data sample. Different methods have been proposed based on binning of ∆ (Vaicenavicius et al., 2019) or using kernel-based divergences (Widmann et al., 2019). Unfortunately, statistical tests based on unbiased estimates (Widmann et al., 2019) were unable to reject the hypothesis that a basic neural network in a real setting, such as on MNIST data, is already calibrated. No calibration methods based on this miscalibration have been proposed. 3.2 CALIBRATION FOR STATISTICAL DECISION MAKING Let us consider the classical statistical decision making problem for a known model p*. Let D be a finite decision space. Consider a cost matrix l : Y × D → R+ and a decision strategy q : X → D. The risk of the strategy q and the optimal (Bayesian) decision strategy are, respectively: R*[q] = E[l(Y, q(X))], q*(x) = arg min_d ∑_y p*(y|x) l(y, d). (6) In practice, we do not have access to the true distribution p* to make decisions, only to the model p(y|x). Let f(d, x) = ∑_y p(y|x) l(y, d) denote the model-based conditional risk for an observation x. The model-based risk and the model-based Bayesian decision strategy are, respectively: R̂[q] = E[f(q(X), X)], q̂(x) = arg min_d f(d, x). (7) If the model p were distribution-calibrated, these two risks would coincide: R̂[q] = R*[q] for any strategy q and any cost matrix (Zhao et al., 2021). However, as discussed above, distribution calibration is hard even to measure in practice. This has led to the following definition. Definition 3 (Zhao et al. 2021). For a set of cost matrices L and a set of strategies Q, the predictor π is called (L, Q)-decision calibrated if for all l ∈ L and q ∈ Q the model risk matches the true risk: R̂[q] = R*[q]. Zhao et al. (2021) show that this definition generalizes previous notions of calibration by specifying the corresponding statistical decision problems. In particular, confidence calibration corresponds to recognition with the reject option with a varying cost of rejection. Distribution calibration can be understood as (L, Q)-decision calibration for all possible loss functions and decision strategies over all possible decision spaces D, which is clearly far too general. It follows from the definition that a decision-calibrated model must accurately estimate the true risk and that the model-based strategy q̂ is the optimum of the true risk over Q. Therefore (L, Q)-decision calibration is sufficient for any statistical decision making task with l ∈ L and q ∈ Q.
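The gap that Definition 3 asks to close can be reproduced in a toy setting. The following sketch (our construction; the over-confident model is simulated by sharpening p*) compares Monte Carlo estimates of R̂[q̂] and R*[q̂]:

```python
import numpy as np

rng = np.random.default_rng(0)
K, D, N = 3, 2, 100_000
l = rng.uniform(0, 10, size=(K, D))              # a random cost matrix l(y, d)

p_true = rng.dirichlet(np.ones(K), size=N)       # p*(.|x) for N observations
p_model = p_true ** 3                            # sharpened, i.e. over-confident, model
p_model /= p_model.sum(axis=1, keepdims=True)

q_hat = np.argmin(p_model @ l, axis=1)                   # strategy q̂ of Eq. (7)
model_risk = (p_model @ l)[np.arange(N), q_hat].mean()   # estimate of R̂[q̂]
true_risk = (p_true @ l)[np.arange(N), q_hat].mean()     # estimate of R*[q̂]
print(model_risk, true_risk)  # the over-confident model under-estimates its own risk
```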
4 METHOD The condition of (L, Q)-decision calibration (Zhao et al., 2021) is still unnecessarily stringent if we have a specific fixed cost matrix l and are interested in the performance of only one particular decision strategy: the model-based Bayesian strategy q̂ of (7). Their calibration algorithm is derived under the assumption that L is the set of all cost matrices of bounded norm over a fixed decision space and thus cannot be chosen as, e.g., L = {l}. We will show that the minimization of the risk of the model-based strategy, R*[q̂], can improve a precise measure of calibration under a known cost matrix while obviously also not compromising on the task-specific performance metric, which is the risk R*[q̂] itself. 4.1 CALIBRATION VIA LOSS MINIMIZATION Bröcker (2009) showed that any loss function corresponding to a proper scoring rule satisfies a decomposition into uncertainty, resolution (sharpness) and reliability (miscalibration). A scoring rule S is a function ∆ × Y → R, and the expected score, which we call the loss for brevity, is L[π] = E[S(π(X), Y)]. For example, the negative log-likelihood loss (NLL) corresponds to the scoring rule S(π, y) = −log π_y. The decomposition reads L[π] = H(π̄) − E[D(π̄, φ[π](X))] + E[D(π(X), φ[π](X))], (8) where the three terms are, respectively, the uncertainty of Y, the resolution of π and the reliability of π; π̄ is the a priori distribution of labels, π̄_y = p*(y); φ[π] is the calibration of π (4); and H and D are the particular entropy and divergence functions corresponding to the score S. In the case of NLL, they are the Shannon entropy and the Kullback–Leibler divergence. Prominently, the reliability term in this decomposition is exactly the expected miscalibration (5) w.r.t. the score-specific divergence D. If we substitute φ[π] as a predictor, we find that it has zero expected miscalibration while the first two terms remain the same: L[φ[π]] = H(π̄) − E[D(π̄, φ[π](X))] ≤ L[π], (9) where the equality uses the fact that φ[φ[π]] = φ[π] and the inequality is due to the divergence being always non-negative. Thus φ[π] not only achieves distribution calibration but is also guaranteed not to increase any loss corresponding to a proper scoring rule. This sheds some light on why optimizing NLL is good for calibration, as evidenced, e.g., by Guo et al. 2017; Alexandari et al. 2020, in particular improving ECE. Calibration methods often fit a parametric post-processing of a predictor π, such as temperature scaling (Guo et al., 2017). They argue about calibration but optimize NLL. We formally show why this is a perfectly correct idea.
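In practice such a fit takes a few lines. Below is a minimal sketch (ours, not the authors' code) of temperature scaling fitted by NLL on held-out logits; the synthetic logits are invented to mimic an over-confident predictor:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(T, logits, labels):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)                    # stabilized log-softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
labels = rng.integers(0, 5, size=2000)
logits = 4.0 * np.eye(5)[labels] + 2.0 * rng.normal(size=(2000, 5))  # noisy, confident scores

res = minimize_scalar(nll, bounds=(0.05, 20.0), args=(logits, labels), method="bounded")
print("fitted temperature:", res.x)  # a fitted T > 1 would soften an over-confident predictor
```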
Theorem 1. Let π : X → ∆ be a predictor and Tθ : ∆ → ∆ a parametric mapping, invertible for each θ ∈ Θ. Finding an adjusted predictor πθ = Tθ ◦ π minimizing the expected miscalibration is equivalent to minimizing the loss: arg min_{θ∈Θ} E[D(πθ, φ[πθ])] = arg min_{θ∈Θ} L[πθ]. (10) Proof. First we show that φ[T ◦ π] is invariant of T for any invertible T. The events T(π(X)) = T(π(x)) and π(X) = π(x) are equal, therefore φ[T ◦ π](x)_y = P(Y = y | T(π(X)) = T(π(x))) = φ[π](x)_y. (11) It follows that D(π̄, φ[T ◦ π](X)) = D(π̄, φ[π](X)). Hence the first two terms of the decomposition stay the same for any θ, and minimizing the whole loss over θ ∈ Θ is equivalent to minimizing the reliability term alone. This allows us to overcome the general difficulty of estimating the expected miscalibration by simply using the empirical estimate of the loss! In particular, no binning of the simplex ∆ is involved. 4.2 DECOMPOSITION OF THE RISK We observe that the true risk of the model-based strategy, R*[q̂], also corresponds to a proper scoring rule and thus can be decomposed according to the theory. Proposition 1. The following scoring rule corresponds to the loss of the model-based decision: S(π, y) = l(y, arg min_d ∑_{y′} π_{y′} l(y′, d)). (12) For two probability distributions π, ρ in ∆, Bröcker (2009) defines the following scoring function s, divergence D and entropy H: s(π, ρ) = ∑_y S(π, y) ρ_y; D(π, ρ) = s(π, ρ) − s(ρ, ρ); H(ρ) = s(ρ, ρ). (13) In our case, the score s(π(x), p*(·|x)) is the conditional risk of the prediction q̂(x), and its expectation is the risk of the strategy q̂: E[S(π(X), Y)] = R*[q̂]. Proposition 2. The score s is proper (Bröcker, 2009), i.e., the “divergence” D is non-negative. Proof. By definition, s(ρ, ρ) = ∑_y S(ρ, y) ρ_y = ∑_y ρ_y l(y, arg min_d ∑_{y′} ρ_{y′} l(y′, d)) = min_d ∑_y ρ_y l(y, d). (14) Clearly it satisfies s(ρ, ρ) ≤ ∑_y ρ_y l(y, d̂) for any d̂, in particular for d̂ = arg min_d ∑_y π_y l(y, d). Corollary 1. The decomposition (8) holds for the risk R*[q̂]. The uncertainty term H(π̄) = min_d ∑_y p*(y) l(y, d) is just the lowest risk attainable without considering observations. Let us discuss the reliability term. In our case D is not a true divergence, as it may vanish even if the two distributions are different. The reliability term is therefore more permissive. This is appropriate: indeed, if e.g. the cost matrix has two identical rows, there is no need to distinguish the respective classes in the prediction and, respectively, no need to have the correct individual predictive probabilities for them. This motivates us to define the task-specific calibration accordingly: Definition 4. Given a cost matrix l and the “divergence” D_l defined by (13), a predictor π(X) is l-decision calibrated if D_l(π(X), φ[π](X)) = 0 a.s. (15) For any proper divergence, this definition would be equivalent to the distribution calibration in Definition 2. The selectivity of D_l in penalizing only those differences in the distribution which matter for the decision task is what makes it task-specific. It appears hard to estimate this miscalibration in general, as it still involves φ[π]. However, using Theorem 1, we can improve this task-specific calibration in parametric settings (e.g. temperature scaling) by simply minimizing the empirical risk of q̂. 4.3 EMPIRICAL RISK MINIMIZATION Consider a parametric predictor π(x)_y = p(y|x; θ) and let f(d, x; θ) = ∑_y p(y|x; θ) l(y, d) as before. Given a sample (x_i, y_i), i = 1, …, N, from p*, the empirical risk minimization for the model-based Bayesian strategy reads: min_θ (1/N) ∑_i l(y_i, d_i) s.t. d_i = arg min_d f(d, x_i; θ). (16) This problem is difficult because it is a so-called bi-level optimization problem with a discrete decision in the inner problem and a non-linear dependence on θ. Such formulations, where the inner problem corresponds to a general predictor based on solving a combinatorial optimization problem, have been studied. Two methods that have been applied to this kind of problem are large margin methods (Tsochantaridis et al., 2005; Taskar et al., 2005) and the direct loss / combinatorial black-box minimization (Song et al., 2016; Vlastelica et al., 2019).
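The source of the difficulty is easy to exhibit numerically: as a function of the calibration parameter, the objective of (16) is piecewise constant, so its gradient is zero almost everywhere. A small sketch (ours; synthetic data, with temperature scaling as the parametrization):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    return np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

def empirical_risk(T, logits, labels, l):
    pi = softmax(logits / T)             # p(y|x; theta) with theta = T
    d = np.argmin(pi @ l, axis=1)        # the inner arg min of Eq. (16)
    return l[labels, d].mean()

rng = np.random.default_rng(0)
K, D, N = 4, 3, 500
l = rng.uniform(0, 10, size=(K, D))
labels = rng.integers(0, K, size=N)
logits = 2.0 * np.eye(K)[labels] + rng.normal(size=(N, K))

for T in (0.5, 1.0, 1.01, 1.02, 2.0):    # small changes of T often leave the risk unchanged
    print(T, empirical_risk(T, logits, labels, l))
```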
4.3.1 DIRECT LOSS AND MARGIN RESCALING The empirical risk can be easily evaluated but cannot be differentiated because of the arg min. This arg min over the set of labels can be considered as a small combinatorial solver. We will specialize and analyze the direct loss method (Song et al., 2016; Vlastelica et al., 2019) for this case. For simplicity, let us consider a single training sample (x*, y*) (with multiple training samples, losses and gradients sum up). Let us denote the vector of class probabilities π = p(·|x*; θ). The estimate of the gradient in π according to the direct loss minimization approach is constructed as follows: d̂ = arg min_d f(d, x*); d̂_λ = arg min_d [f(d, x*) + λ l(y*, d)]; ∇̂π := (1/λ) [l(·, d̂_λ) − l(·, d̂)]. (17) Appendix A.1 gives details on how this is obtained from the general method of Vlastelica et al. (2019). The gradient in θ can then be computed by the chain rule. Here d̂ is the solution of the solver (the Bayesian decision) and d̂_λ is the decision of a perturbed problem. The strength of the perturbation is controlled by λ. Song et al. (2016) have shown that in the limit λ → 0 the gradient of the expected loss over a continuous data distribution matches E[∇π]. In this limit, stochastic descent with ∇π would directly minimize the (expectation of the non-differentiable) loss, which was termed direct loss minimization. However, these arguments are not applicable to a finite training sample. In practice, λ needs to be sufficiently large for ∇π to be non-zero for at least some data points. In this setting we are no longer minimizing the original loss. However, one can define a surrogate loss function such that (17) is its true gradient. We call it the direct loss, so the method can now be validly interpreted as direct loss minimization: L±_λ = ±(1/λ) (min_d f(d, x*) − min_d [f(d, x*) ∓ λ l(y*, d)]), (18) where ∓ is paired with ±. Vlastelica et al. (2019) advocate the use of a large λ, define a surrogate loss similar to L−_λ and show that it is a lower bound on the empirical loss for positive λ (Observation 3), where the empirical loss is L^E = l(y*, arg min_d f(d, x*)). Note that L±_{−λ} = L∓_λ holds, and therefore we can always assume λ > 0 in order to avoid redundancy. We show the following. Proposition 3. The direct loss L−_λ is a lower bound on the empirical loss L^E, and L+_λ is an upper bound. The proof is given in Appendix A.1. It follows that the expectation over the training data (resp. the true distribution p*) of L+_λ is an upper bound on the empirical risk (resp. the true risk).
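A minimal sketch (ours) of the gradient estimate (17) for a single sample; the cost matrix, probability vector and λ are placeholders chosen so that the perturbation actually flips the decision:

```python
import numpy as np

def direct_loss_grad(pi, y_star, l, lam):
    """Gradient estimate of Eq. (17) w.r.t. the probability vector pi."""
    f = pi @ l                                # model-based conditional risks f(d, x*)
    d_hat = np.argmin(f)                      # Bayesian decision
    d_lam = np.argmin(f + lam * l[y_star])    # decision of the perturbed problem
    return (l[:, d_lam] - l[:, d_hat]) / lam  # zero unless the perturbation flips the decision

l = np.array([[0.0, 1.0], [5.0, 0.0], [100.0, 0.0]])
print(direct_loss_grad(np.array([0.9, 0.08, 0.02]), y_star=0, l=l, lam=2.0))
# -> [-0.5  2.5  50.] : a descent step in pi shifts mass towards the true class y* = 0
```

With a smaller λ (e.g. 0.5 here) the perturbation does not flip the decision and the estimate is exactly zero, which is the flat-region issue discussed next.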
Relation to Margin Rescaling The problem of minimizing the empirical risk over discrete strategies of the form arg min_y f(y; θ) was also studied in structured prediction (Tsochantaridis et al., 2005; Taskar et al., 2005). One of the most common approaches is called margin rescaling (Tsochantaridis et al., 2005) and was successfully used in combination with deep networks as well (e.g. Knöbelreiter et al. 2017). Like SVM, it puts a hinge loss on the violation of the classification constraints, with the margin proportional to the respective loss. We can show (see Appendix A.2) that the margin rescaling approach leads to the following surrogate loss: L^{MR}_λ = (1/λ) (f(d*, x*) − min_d [f(d, x*) − λ l(y*, d)]), (19) where d* = arg min_d l(y*, d) is the best decision given the true class label. Written in this form, there is a striking similarity to (18). The only difference is that d* is the best decision for the loss (knowing the true label) rather than the best decision based on the model (not knowing the true label). As a result, margin rescaling is a less tight upper bound. Proposition 4. Margin rescaling L^{MR}_λ coincides with the direct loss L+_λ in the region where the classifier makes correct decisions. Furthermore, L^E ≤ L+_λ ≤ L^{MR}_λ. The proof is given in Appendix A.2. We believe this connection has not been known before. The two surrogate losses are illustrated in Fig. 1. For both approaches, if λ is small, the size of the margin is small and there is a flat region with zero gradient. As a simple remedy, we propose to smooth the minimum in (18) using the smooth minimum function min_β(x) = −(1/β) log ∑_k e^{−β x_k}, where the degree of smoothing is controlled by β. 5 EXPERIMENTS In the experiments, we compare different calibration criteria for the same choice of a parametric family. Assuming that the network outputs scores s (otherwise let s_y = log p(y|x)), we consider the following common choices to parametrize the corrected predictor π. TS: Temperature Scaling (Guo et al., 2017): π = softmax(s/T), where T is a (non-negative) scalar temperature to calibrate. BCTS: Bias-Corrected Temperature Scaling (Alexandari et al., 2020): π = softmax((s + b)/T), where additionally b is a vector of per-class biases to calibrate. VS: Vector Scaling (Alexandari et al., 2020): π = softmax(s ⊙ w + b), where w is a vector of scaling factors, b is a vector of biases, and ⊙ is the coordinate-wise product. We optimize each criterion in the above parameters using the Adam optimizer. In order to find the hyperparameters (learning rate, λ, β) we use the nested cross-validation procedure detailed in Appendix B.
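In code, the three parametrizations read as follows (a sketch; w, b and T are the parameters to calibrate):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    return np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)

def ts(s, T):      return softmax(s / T)        # Temperature Scaling
def bcts(s, T, b): return softmax((s + b) / T)  # Bias-Corrected Temperature Scaling
def vs(s, w, b):   return softmax(s * w + b)    # Vector Scaling (⊙ is elementwise *)

s = np.array([[2.0, 0.5, -1.0]])                # raw scores for one input
print(ts(s, 1.5))
print(bcts(s, 1.5, np.array([0.1, 0.0, -0.1])))
print(vs(s, np.ones(3), np.zeros(3)))           # identity parameters recover softmax(s)
```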
5.1 FUNGI EDIBILITY (DANISH FUNGI 2020) In this experiment we consider the decision problem of whether to cook (eat) a mushroom given its predicted edibility category, based on the Danish Fungi dataset (Picek et al., 2022). In order to compare calibration methods, we create 15 splits of the data that was not used during training into calibration and test parts. Full details can be found in Appendix B. The cost matrix and the obtained results are shown in Fig. 2. Calibration with the DirectLoss criterion achieved a lower average test risk in all parametrizations, notably performing well also in the VS parametrization, where the other criteria performed the worst. The improvement over the other methods can be considered statistically significant if one trusts the estimates of the mean and the variance (see below). 5.2 SKIN CANCER LESION TREATMENT (HAM10000) In this experiment, we consider the decision problem of whether to assign a treatment given the lesion classification, using the skin lesion dataset (Tschandl, 2018). The training is performed on 75% of the data for 100 epochs. From the remaining data we create 100 random splits into calibration (15%) and test (10%). Fig. 3(a) shows that the network significantly underestimates the true risk. After calibration (DirectLoss BCTS), the risk decreases, but the risk gap increases for some data splits. Indeed, with our calibration and optimization criterion being the empirical risk, there is no requirement that this gap should be made small or even decrease. Nevertheless, such an increase in the gap is unexpected of a calibration method and might indicate overfitting. Fig. 3 (a, b) shows statistics of the differences between the pairs (No calibration − DirectLoss) and (NLL − DirectLoss), confirming that calibration is helpful, but it is unable to tell whether NLL or DirectLoss is the better calibration objective. Comparisons for the TS and VS parametrizations are shown in Figs. B.2 and B.3. 5.3 RARE EXPENSIVE MISTAKES We present a failure mode of calibration on the example of the CIFAR-10 dataset with the trucks class considered as dangerous (cost of a mistake 10000), while all other mistakes cost 1. In this setting the decision boundary of q̂ shifts significantly towards classifying nearly all observations as trucks. Instances of trucks on which the model can nevertheless make a mistake become very rare. Depending on whether such an instance falls into the calibration set or into the test set, it may lead to a high cost at test time. In Fig. 4, DirectLoss is better than NLL in calibration for many splits, but in one split it makes a single expensive mistake. Only by chance was such a case not observed for NLL. Also, this was not observed in the Fungi experiment above (which also has extreme costs), presumably because deadly poisonous mushrooms are rather rare in the dataset. The empirical risk is theoretically backed up by generalization guarantees such as the Hoeffding inequality: P(|R*(q) − R*_emp(q)| > ε) < 2 exp(−2Nε²/Δl²), where N is the number of samples and Δl is the difference between the maximum and minimum cost. This means that in order to achieve the same confidence we used to have for the 0-1 cost, we need to use 10^8 times more samples. We therefore would like to warn the community against relying on basic statistical evaluation like in our Fungi experiment, and we would be happy to receive feedback on how to approach the problems associated with high costs, in particular when evaluating calibration methods.
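The 10^8 factor can be checked by solving the Hoeffding bound for N. A quick arithmetic sketch (ours; ε and δ are illustrative choices):

```python
import math

# Hoeffding: P(|R - R_emp| > eps) < 2 * exp(-2 * N * eps**2 / dl**2),
# so fixing eps and the confidence delta gives N = dl**2 * ln(2/delta) / (2 * eps**2).
def n_required(eps, delta, dl):
    return math.ceil(dl**2 * math.log(2 / delta) / (2 * eps**2))

print(n_required(0.01, 0.05, dl=1.0))   # ~1.8e4 samples suffice for 0-1 costs
print(n_required(0.01, 0.05, dl=1e4))   # ~1.8e12 samples for a cost range of 10^4
# the ratio is (1e4 / 1)**2 = 1e8, i.e. 10^8 times more samples
```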
6 CONCLUSION We have given a so-far-missing theoretical justification for post-processing recalibration methods optimizing generic criteria, in particular NLL, showing how they are related to notions of calibration. We then developed a decomposition of the risk of the model-based Bayesian decision strategy and derived the respective definition of calibration from it. This approach gives a constructive way to obtain new task-specific definitions of calibration. We also improved the understanding of the direct loss and margin rescaling methods for ERM. We believe these results generalize beyond our calibration setup. In the experiments we observed that calibration was important for improving the test risk and that the task-specific calibration, represented by DirectLoss, can be more efficient (Fungi experiment, high costs). The calibration was also helpful in the lesions experiment (moderate costs); however, the increase in the risk gap indicates overfitting with DirectLoss. Finally, we demonstrated a failure case of DirectLoss and a flaw in the comparison under high costs. ETHICS STATEMENT Please be aware that neural networks can make unpredictable mistakes and produce overconfident estimates. Calibration methods, in particular the proposed one, are not guaranteed to fix these issues. They can improve statistical performance and measures of miscalibration. However, the statistics are random quantities and have to be considered very carefully, especially in the case of high costs, as we show in Section 5.3. The experiments conducted on decision making with the fungi or lesion datasets should be considered only as a proof of concept. REPRODUCIBILITY STATEMENT Appendix A contains proofs not included in the main paper. Appendix B contains the description of the datasets and the details of the training, calibration and testing procedures. Details of the implementation can be provided to reviewers confidentially through OpenReview upon request. A PROOFS A.1 DIFFERENTIATION OF BLACKBOX COMBINATORIAL SOLVERS (DIRECT LOSS) We first detail how the general method of Vlastelica et al. (2019) is instantiated for our problem and verify that it is the gradient of the function L−_λ we define in (18). A general linear combinatorial solver is formalized in Vlastelica et al. (2019) as: Solver(w) = arg min_d w^T φ(d), (20) where φ represents the discrete choice d as a vector of the same dimension as w. The direct loss method (Vlastelica et al., 2019, Alg. 1) is given by d̂ := Solver(w); (21a) w′ := w + λ dL/dφ(d̂); (21b) d̂_λ := Solver(w′); (21c) ∇w := −(1/λ) [φ(d̂) − φ(d̂_λ)]. (21d) In our case the solver needs to be d̂ = arg min_d f(d, x*) = arg min_d ∑_y p(y|x*; θ) l(y, d). (22) Let π = p(·|x*; θ). Two choices for φ qualify: 1. Let φ(d) = one_hot(d) and w ∈ R^D with w_d = ∑_y π_y l(y, d); 2. Let φ(d)_y = l(y, d) and w ∈ R^K with w_k = π_k. Both choices lead to equivalent algorithms. We proceed with the second one for convenience, as it will define the gradient in π. Our loss is L(d) = l(y*, d), therefore dL/dφ_y(d̂) = [[y = y*]]. The direct loss method specializes as follows: d̂ := arg min_d ∑_y π_y l(y, d); (23a) π′ := π + λ one_hot(y*); (23b) d̂_λ := arg min_d ∑_y π′_y l(y, d) = arg min_d (∑_y π_y l(y, d) + λ l(y*, d)); (23c) ∇π := −(1/λ) [l(·, d̂) − l(·, d̂_λ)]. (23d) This is the form we present in (17). Finally, observe that the gradient ∇π in (23) matches the gradient of L−_λ as defined in (18). Therefore minimizing L−_λ is equivalent to the method of Vlastelica et al. (2019). Next we give a very simple proof of the upper / lower bound property of L±_λ (it is extendible to the general combinatorial solver case as well). Proposition 3. The direct loss L−_λ is a lower bound on the empirical loss L^E, and L+_λ is an upper bound. Proof. We will assume that all losses are non-negative (wlog) and will show the bound property for a given training sample (x*, y*). Let f(d) = ∑_y p(y|x*) l(y, d) and let d̂ = arg min_d f(d). Using the inequality min_d [∑_y p(y|x*) l(y, d) + λ l(y*, d)] ≤ f(d̂) + λ l(y*, d̂) (24) in L−_λ, all terms cancel except l(y*, d̂). Similarly, using the inequality −min_d [∑_y p(y|x*) l(y, d) − λ l(y*, d)] ≥ −f(d̂) + λ l(y*, d̂) (25) in L+_λ, all terms cancel except l(y*, d̂).
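The bounds of Proposition 3 are easy to verify numerically. A sanity-check sketch (ours) over random instances:

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 5, 4
for _ in range(1000):
    l = rng.uniform(0, 10, size=(K, D))      # random non-negative costs
    pi = rng.dirichlet(np.ones(K))           # random predicted probabilities
    y = rng.integers(0, K)                   # true label y*
    lam = rng.uniform(0.1, 5.0)

    f = pi @ l
    LE = l[y, np.argmin(f)]                              # empirical loss L^E
    L_plus = (f.min() - np.min(f - lam * l[y])) / lam    # L+_λ from Eq. (18)
    L_minus = -(f.min() - np.min(f + lam * l[y])) / lam  # L-_λ from Eq. (18)
    assert L_minus <= LE + 1e-9 and LE <= L_plus + 1e-9
print("L- <= L^E <= L+ held on all random instances")
```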
A.2 DERIVATION OF MARGIN RESCALING The derivation of the margin rescaling approach in Tsochantaridis et al. (2005) is somewhat obscure. A reasonable starting point is given by the SVM-like objective with slacks (but without the quadratic penalty on the weights): min_{ξ,θ} (1/λ) ∑_i ξ_i (26) s.t. (∀d) f_i(d*_i) ≤ f_i(d) − λ(l(y_i, d) − l(y_i, d*_i)) + ξ_i, where f_i(d) = ∑_y p(y|x_i; θ) l(y, d), (x_i, y_i) is the i-th training example and d*_i = arg min_d l(y_i, d) is the optimal decision for the training example i. The constraint in this formulation requires that the model loss of the best decision f_i(d*_i) must be strictly less than the loss of any other decision f_i(d), with a margin λ(l(y_i, d) − l(y_i, d*_i)) proportional to the loss excess of the respective decision. A violation of this constraint is penalized by a slack ξ_i, and the goal is to minimize the total slack. Notice that the constraint ensures that the slack is non-negative, because for d = d*_i all terms except ξ_i vanish. Solving for the optimal ξ_i in each summand, we obtain that summand i can be expressed as L^{MR}_λ = (1/λ) max_d (f_i(d*_i) − f_i(d) + λ(l(y_i, d) − l(y_i, d*_i))) (27) = (1/λ) (f_i(d*_i) − min_d (f_i(d) − λ(l(y_i, d) − l(y_i, d*_i)))). (28) Finally, under the assumption that the costs l are non-negative and that l(y_i, d*_i) = 0 (which can be made without loss of generality), we obtain the formulation (19). Proposition 4. Margin rescaling L^{MR}_λ coincides with the direct loss L+_λ in the region where the classifier makes correct decisions. Furthermore, L^E ≤ L+_λ ≤ L^{MR}_λ. Proof. The inequality L^E ≤ L+_λ was already shown in Proposition 3. The proof is simple once the two approaches are written in the respective forms that we have shown: L+_λ = (1/λ) (min_d f(d, x*) − min_d [f(d, x*) − λ l(y*, d)]), (29a) L^{MR}_λ = (1/λ) (f(d*, x*) − min_d [f(d, x*) − λ l(y*, d)]). (29b) Let us verify that L^{MR}_λ ≥ L+_λ. Since the summand −min_d [f(d, x*) − λ l(y*, d)] is common to both, the inequality follows trivially from f(d*, x*) ≥ min_d f(d, x*). (30) The remaining claim of the proposition is also trivial: if the decision made by the classifier is correct, i.e., the optimal one, then (30) holds with equality. B EXPERIMENT DETAILS B.1 CROSS-VALIDATION PROCEDURE Given a subset of data available for calibration (in the current calibration-test split), we create 10 folds for the internal cross-validation. We used stratified folds to maintain the class balance. In each fold we have 9/10 of the data for the optimization of the calibration parameters and 1/10 for the validation of hyperparameters. The hyperparameters corresponding to the best average risk over the 10 folds are selected. We perform selection of the following hyperparameters: the learning rate α for all methods; λ and β for Direct Loss with the smooth minimum. The chosen λ values are then multiplied by 1/κ, where κ is the maximum value of the loss function; this normalization makes the choice of λ invariant to the scale of the loss function. The search grids for the different methods are shown in Table B.1.
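A sketch of this selection loop using scikit-learn's stratified folds (illustrative; fit and risk are placeholder callables for calibrating the parameters and evaluating the empirical risk):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def select_hyperparams(logits, labels, grid, fit, risk):
    """Pick the grid entry with the best mean validation risk over 10 stratified folds."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    best_hp, best_risk = None, np.inf
    for hp in grid:
        fold_risks = []
        for tr, va in skf.split(logits, labels):
            theta = fit(logits[tr], labels[tr], hp)                 # calibrate on 9/10
            fold_risks.append(risk(theta, logits[va], labels[va]))  # validate on 1/10
        if np.mean(fold_risks) < best_risk:
            best_hp, best_risk = hp, np.mean(fold_risks)
    return best_hp
```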
B.2 FUNGI EXPERIMENT The trained neural network for mushroom classification (Picek et al., 2022) is adapted to our decision problem (to decide the edibility of the mushrooms) as follows. There are 1604 species in the dataset, out of which we found and annotated the edibility information (6 categories) for 203 species. After this procedure the distribution of species becomes uneven, as shown in Fig. B.1. In particular, deadly poisonous mushrooms are relatively rare. We adopted the ResNet-50 network from (Picek et al., 2022) as follows. From the probability vector over species produced by the model we compute the probability vector over edibility states by marginalization. The accuracy of the model in classifying these 6 states was 91%. We then consider a decision problem with 6 states and 2 decisions (accept or not for cooking). We designed a realistic loss function, shown in Fig. 2 top-right. The calibration-test splits were created by using 15 stratified folds of the test set and adding the validation set of the training to the calibration set. For this decision task, we are no longer interested in the accuracy of the classification but in the expected loss, i.e. the risk shown in Fig. 2 left. B.3 HAM10000 EXPERIMENT We tried to follow the setup of Zhao et al. (2021) in order to allow for an indirect comparison.¹ In particular, we used the same data split and network, and we also tried to evaluate the gap between the model-estimated (empirical) risk and the true empirical risk. We trained a resnet121 model for 100 epochs on 75% of the data. All lesions having multiple views in the dataset were used for training. The remaining 25% consisted of independent instances, each with 1 view only. The training achieved a validation accuracy of 90% (the validation set was not used for choosing hyperparameters, only to report this number). The 25% of the data not used for training we split randomly into 15% for calibration and 10% for test. All splits were stratified (preserving the class balance). This results in a test set size of 1015 data points (in each split). Fig. 3 shows the statistical analysis over 40 splits. As each split requires a calibration (with the nested cross-validation procedure), collecting more statistics is difficult. In our cost matrix we tried to closely replicate the values depicted in Zhao et al. (2021, Fig. 1) (motivated by medical domain knowledge) by matching the colors in the image and the color bar. We added a constant to each row to make all losses non-negative. This affects neither the Bayesian decision strategy nor the differences between any two risks. Pairwise comparisons for the TS and VS parametrizations, complementing Fig. 3, are shown in Fig. B.3. All kernel density estimates shown are computed with awkde² (Wang & Wang, 2007) using the default Silverman adaptive method. The calibration has a positive effect in these cases as well; however, the advantage for the VS parametrization appears to be on the side of NLL. B.4 CIFAR-10 EXPERIMENT In this experiment we used the CIFAR-10 dataset. The data splitting and calibration protocol were the same as in the fungi experiment. We trained an EfficientNet-B0 that achieved a validation accuracy of 94.7%. ¹ A direct comparison is not feasible at the moment: we evaluate only parametric calibration methods; the code and some details of their method are not available to us. ² Adaptive Width KDE with Gaussian Kernels, https://github.com/mennthor/awkde
1. What is the focus of the paper regarding task-specific calibration? 2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to prior works? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. What are some missing related works that the authors should consider? 5. What alternative metrics should the authors report to provide a more comprehensive evaluation? 6. Are there any limitations or outdated baselines in the authors' approach that need to be addressed?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The authors investigate task-specific calibration based on risk minimisation with a known cost matrix. Strengths And Weaknesses Unfortunately, the authors miss highly relevant related work. Furthermore, the practical usefulness of the proposed approach is highly dependent on the cost matrix, and it is not clear how to define it in practice. Third, the authors miss crucial metrics when comparing the results of the approach. Fourth, they only assess outdated baseline calibrators based on TS. Finally, the proposed approach is an incremental advance on the work by Vlastelica et al. Clarity, Quality, Novelty And Reproducibility Missing related work: The related work section is missing recent work [1], which introduces a unifying approach for calibration errors; section 3.1 should be updated accordingly. Theorem 1 is not novel and can be directly derived from [2]. In the recent literature, authors have optimised the Brier score rather than the NLL (e.g. in the refs below); this should be used instead of optimising the ECE, which is not a proper score and is irrelevant as an objective. Metrics: in addition to the test empirical risk, the authors should also report the Brier score, ECE and KCE (the latter from Widmann et al. 2019) as alternative metrics that quantify calibration. The baselines investigated by the authors are outdated; at least ETS, DIAG and SPLINES [3, 4] should be considered. [1] Gruber, S. and Buettner, F., 2022. Better Uncertainty Calibration via Proper Scores for Classification and Beyond, NeurIPS. [2] Zhang, J., Kailkhura, B. and Han, T.Y.J., 2020. Mix-n-Match: Ensemble and compositional methods for uncertainty calibration in deep learning, ICML. [3] Gupta, K., Rahimi, A., Ajanthan, T., Mensink, T., Sminchisescu, C. and Hartley, R., 2020. Calibration of neural networks using splines, ICLR. [4] Rahimi, A., Shaban, A., Cheng, C.A., Hartley, R. and Boots, B., 2020. Intra order-preserving functions for calibration of multi-class neural networks, NeurIPS.
ICLR
Title Calibration for Decision Making via Empirical Risk Minimization Abstract Neural networks for classification can achieve high accuracy but their probabilistic predictions may be not well-calibrated, in particular overconfident. Different general calibration measures and methods were proposed. But how exactly does the calibration affect downstream tasks? We derive a new task-specific definition of calibration for the problem of statistical decision making with a known cost matrix. We then show that so-defined calibration can be theoretically rigorously improved by minimizing the empirical risk in the adjustment parameters like temperature. For the empirical risk minimization, which is not differentiable, we propose improvements to and analysis of the direct loss minimization approach. Our experiments indicate that task-specific calibration can perform better than a generic one. But we also carefully investigate weaknesses of the proposed tool and issues in the statistical evaluation for problems with highly unbalanced decision costs. 1 INTRODUCTION The notion of calibration originates in forecasting, in particular in meteorology. The following example well explains the concept. Amongst all days when a forecast was made that the chance of rain is 33%, one would expect to find that about a third of them to be rainy and two thirds sunny. If this is the case for all forecasts, the predictor is said to be well-calibrated. We would like the forecaster to be accurate, however if this is not achievable, we would like at least to have it accurately reflect what it does not know, i.e. to be calibrated. The most common model for classification in deep learning consists of a softmax predictive distribution for class labels atop of a deep architecture processing the observation. In view of the excessive number of parameters there is a natural concern whether such models learn accurate predictive probabilities p(y|x). Indeed, neural networks are typically not well calibrated (Guo et al., 2017), in particular they may be over-confident, i.e., being incorrect much more often than their high confidence suggests. Like in the example above, such over-confidence is misleading for interpreting the results. It is also commonly understood that it can mislead any downstream processing relying on the predictive probabilities. There has been therefore a substantial effort to improve the calibration. Unfortunately, even measuring the basic confidence calibration accurately in practice remains challenging (Nixon et al., 2019). In the multi-class setting a vector of class probabilities is output and all of them can be important for the downstream processing. This has led to development of more complex definitions such as distribution-calibration (Vaicenavicius et al., 2019). In this setting, despite the development of new estimators (Vaicenavicius et al., 2019; Widmann et al., 2019), it is practically infeasible to obtain a reliable estimate — there is not enough data to reject the hypothesis that the model is well calibrated. Are we lost then in the attempt to make the predictive probabilities reliable? Not necessarily. Observe that these calibration definitions, both simple and complex ones, are not considering a specific problem downstream that can be hypothetically affected by the poor probabilistic predictions. They try to address all such problems (as well as the purpose of interpretability) at once. 
We will argue that considering a specific downstream task allows to significantly reduce the complexity of the calibration problem, making it feasible in practice. As a specific downstream task we will consider the Bayesian decision making with a trained NN. It is well-established to train NNs for classification by optimizing the cross-entropy loss. The model architecture and the training pipeline are tuned by researchers to achieve the best generalization w.r.t. classification accuracy. However, this procedure does not take into account different costs of different classification mistakes. To give an example, mistakes in miss-classifying different mushrooms may have different costs for eating: some mistakes are between equally good spices and incur no cost while other mistakes lead to risks of poisoning. Given a finite set of decisions D and a cost matrix l(y, d), one could try to adapt the learned NN model to form the Bayesian decisions strategy q(x) achieving the smallest risk: q(x) = arg min d ∑ y p(y|x)l(y, d). (1) Such adaptation is practically desirable: it would allow one to rely on the established and tuned training approach and reuse the existing models, which may be very costly to retrain from scratch. However, because the model p(y|x) is typically inaccurate in predicting all the class probabilities, this may lead to suboptimal decisions and respectively poor outcomes of such adaptation. In particular, an overconfident model will result in both: making sub-optimal decisions and underestimating their expected cost. One could reasonably hope to learn a more accurate predictor by designing a yet better architecture and using more training data. It may nevertheless stay poorly calibrated and not suitable for the above adaptation. If the strong distribution calibration was possible to achieve as a post-processing step, the adaptation would work perfectly, however this calibration is practically not feasible. On the other hand, any weaker task-unspecific definition of calibration (e.g. class-wise calibration, Vaicenavicius et al. 2019) would not work: there would exist a decision task for which (1) would perform poorly. A formalization of calibration important for (a class of) decision making problems was proposed, only recently, by Zhao et al. (2021). As our main theoretical result, we derive a notion of calibration important for adopting strategy (1) with a given cost matrix. The formalism and tools used for that allow also to better understand the relation between existing calibration methods, in particular empirically successful temperature scaling variants (Guo et al., 2017; Alexandari et al., 2020), and the distribution calibration. Specifically, we show that these methods are guaranteed to improve the expected miscalibration as measured by the corresponding divergence. Calibrating the model w.r.t. the new definition is shown to be equivalent to minimizing the (empirical) risk of strategy (1) in the calibration parameters. As a mean of empirical risk minimization we study the direct loss approach and relate it to margin-rescaling. Experimentally, we show that task-aware calibration, using the direct loss approach, can outperform the generic calibration. However we also observe that for tasks with extremely unbalanced losses, modeling a dangerous class, we lack reliable means to assess the quality of calibration. 2 RELATED WORK We consider all methods that can improve the predictive model p(y|x) when provided some additional calibration data as calibration methods. 
Typically, the calibration is achieved by a postprocessing of the scores or predictive probabilities such as temperature scaling, bias-corrected temperature scaling or vector scaling. We regard these as different choices of parametrization, i.e., choice of the degrees of freedom to calibrate. Most importantly, existing calibration methods differ in the criterion they optimize. Calibration Unaware of the Task Many calibration techniques, while motivated by the notion and a particular definition of calibration, use a generic criterion unrelated to that notion. A very practical method to calibrate a model turns out to be the likelihood maximization, i.e. relying on the same criterion that is commonly used for training. This is the approach taken in Guo et al. (2017); Alexandari et al. (2020); Kull et al. (2019). Methods optimizing variants of the expected calibration error (ECE), which is a measure of miscalibration, were compared by Nixon et al. (2019), however there is no performance criterion other than ECE itself. Further variants are piece-wise parametric (Kumar et al., 2019) and kernel-based (Kumar et al., 2018) confidence calibration methods. Calibration Aware of the Task The decision-calibration of Zhao et al. (2021) takes the decision problem into the consideration. It will be discussed in detail below. Their calibration method is designed for a given decision space, but cannot make use of a specific cost matrix was it known at the calibration time. Empirical Risk Minimization Methods optimizing the empirical risk (Song et al., 2016; Vlastelica et al., 2019; Taskar et al., 2005) were used for training with complex objectives measuring performance in retrieval, ranking, or structured prediction. They have not been considered for calibration of classification models before. They address the difficulty of non-differentiability of the loss and have a potential to exploit the full information about the cost matrix. 3 BACKGROUND Let X be the space of observations and Y be the set of labels. Assume there is an underlying true joint probability distribution on X ×Y , denoted p∗. Let (X,Y ) be a a pair of random variables with the law p∗. All expectations and probabilities will be meant with respect to (X,Y ). Let ∆ denote the simplex of probabilities over Y . Let p(y|x) be a probabilistic predictor, usually a neural network with softmax output. The predictor can be considered as a mapping π : X → ∆: x 7→ p(y|x). 3.1 GENERAL CALIBRATION First works analyzing calibration in machine learning (Guo et al., 2017) were concerned only with the confidence of the model, i.e. the model’s probability of the class actually predicted. Let ŷ(x) = arg maxk πk(x) be the label predicted by the model and c(X) = maxk πk(X) the respective predictive probability, called confidence. Definition 1. The model is confidence calibrated if P ( Y=ŷ(X) | c(X) ) a.s. = c(X). (2) It requires that amongst all data points for which the prediction has confidence c the expected occurrence of the true label to match c. The respective miscalibration can be measured e.g., by the Expected Calibration Error (ECE) (Degroot & Fienberg, 1983), which is typically estimated by discretizing the probability interval into bins. Substantial efforts were put into calibrating neural networks in this sense (e.g., Naeini et al. 2015; Guo et al. 2017; Nixon et al. 2019). However, Kumar et al. 
(2019) argue that the binning underestimates the calibration error and in fact, an accurate estimation is possible only when the predictor π outputs only a discrete set of values. In the multi-class setting, confidence calibration may be insufficient. There may be downstream tasks which require the whole vector of predicted probabilities to be accurate. In the machine learning literature this came into attention only recently (Vaicenavicius et al., 2019). The strongest notion of calibration (Bröcker 2009, reliability Eq. 1), is as follows. Definition 2. A predictor π : X → ∆, is called distribution calibrated if (∀y ∈ Y) P ( Y=y | π(X) ) a.s. = π(X)y. (3) In words: amongst all data points of the input space where the predicted vector of probabilities is π(x) = µ the true observed class labels should be distributed as µ. Respectively, the predictor φ[π](x)y = P ( Y=y | π(X)=π(x) ) (4) is the (optimal) calibration of π: it takes the prediction π(x) and turns it into the true distribution of labels under that initial prediction. This predictor φ[π] is distribution calibrated and Definition 2 can be restated as φ[π](X) a.s.= π(X), i.e. the calibration of π is π itself. Generalizing on ECE, the expected miscalibration of π w.r.t. divergence D : ∆×∆→ R is: E[D(π(X), φ[π](X))], (5) i.e. the average divergence between the predicted distribution and its calibration. It is hard to estimate in practice, because of conditioning on a real vector π(X) = π(x) in the definition of calibration φ[π]. It becomes tricky to verify whether a model is calibrated using only a finite data sample. Different methods have been proposed based on binning of ∆ (Vaicenavicius et al., 2019) or using kernel-based divergences (Widmann et al., 2019). Unfortunately, statistical tests based on unbiased estimates (Widmann et al., 2019) were unable to reject the hypothesis that any basic neural network in a real setting, such as on MNIST data, is already calibrated. No calibration methods were proposed based on this miscalibration. 3.2 CALIBRATION FOR STATISTICAL DECISION MAKING Let us consider the classical statistical decision making problem for a known model p∗. Let D be a finite decision space. Consider the cost matrix l : Y ×D −→ R+ and a decision strategy q : X → D. The risk of the strategy q and the optimal (Bayesian) decision strategy are, respectively: R∗[q] = E [ l(Y, q(X)) ] , q∗(x) = arg min d ∑ y p∗(y|x) [ l(y, d) ] . (6) In practice, we do not have access to the true distribution p∗ to make decisions, only to the model p(y|x). Let f(d, x) = ∑ y p(y|x)l(y, d) denote the model-based conditional risk for observation x. The model-based risk and model-based Bayesian decision strategy are, respectively: R̂[q] = E [ f(d, x) ] , q̂(x) = arg min d f(d, x). (7) If the model p was distribution-calibrated, these two risks would coincide: R̂[q] = R∗[q] for any strategy q and any cost matrix (Zhao et al., 2021). However, as was discussed above, distribution calibration is hard even to measure in practice. This has led to the following definition. Definition 3 (Zhao et al. 2021). For a set of cost matrices L and a set of strategies Q, the predictor π is called (L, Q)-decision calibrated if for all l ∈ L and q ∈ Q the model risk matches the true risk: R̂[q] = R∗[q]. Zhao et al. (2021) show that this definition generalizes previous notions of calibration by specifying the corresponding statistical decision problems. 
Zhao et al. (2021) show that this definition generalizes previous notions of calibration by specifying the corresponding statistical decision problems. In particular, confidence calibration corresponds to recognition with the reject option with a varied cost of rejecting. Distribution calibration can be understood as (L, Q)-decision calibration for all possible loss functions and decision strategies over all possible decision spaces D, which is clearly far too general. It follows from the definition that a decision-calibrated model must accurately estimate the true risk and that the model-based strategy q̂ is the optimum of the true risk over Q. Therefore, (L, Q)-decision calibration is sufficient for any statistical decision making task with l ∈ L and q ∈ Q.

4 METHOD

The condition of (L, Q)-decision calibration (Zhao et al., 2021) is still unnecessarily stringent if we have a specific fixed cost matrix l and are interested in the performance of only one particular decision strategy: the model-based Bayesian strategy q̂ in (7). Their calibration algorithm is derived under the assumption that L is the set of all cost matrices of bounded norm over a fixed decision space and thus cannot be chosen as, e.g., L = {l}. We will show that the minimization of the risk of the model-based strategy, R∗[q̂], can improve a precise measure of calibration under a known cost matrix while also, obviously, not compromising on the task-specific performance metric, which is the risk R∗[q̂] itself.

4.1 CALIBRATION VIA LOSS MINIMIZATION

Bröcker (2009) showed that any loss function corresponding to a proper scoring rule satisfies a decomposition into uncertainty, resolution (sharpness) and reliability (miscalibration). A scoring rule S is a function ∆ × Y → R, and the expected score, which we call loss for brevity, is L[π] = E[S(π(X), Y)]. For example, the negative log likelihood loss (NLL) corresponds to the scoring rule S(π, y) = −log π_y. The decomposition reads
$$L[\pi] = \underbrace{H(\bar\pi)}_{\text{uncertainty of } Y} \;-\; \underbrace{\mathbb{E}\big[D(\bar\pi, \varphi[\pi](X))\big]}_{\text{resolution of } \pi} \;+\; \underbrace{\mathbb{E}\big[D(\pi(X), \varphi[\pi](X))\big]}_{\text{reliability of } \pi}, \tag{8}$$
where π̄ is the a priori distribution of labels, π̄_y = p∗(y), φ[π] is the calibration of π (4), and H and D are the particular entropy and divergence functions corresponding to the score S. In the case of NLL, they are the Shannon entropy and the Kullback–Leibler divergence. Prominently, the reliability term in this decomposition is exactly the expected miscalibration (5) w.r.t. the score-specific divergence D. If we substitute φ[π] as a predictor, we find that it has zero expected miscalibration while the first two terms remain the same:
$$L[\varphi[\pi]] = H(\bar\pi) - \mathbb{E}\big[D(\bar\pi, \varphi[\pi](X))\big] \le L[\pi], \tag{9}$$
where the equality uses the fact that φ[φ[π]] = φ[π] and the inequality is due to the divergence being non-negative. Thus φ[π] not only achieves distribution calibration but is also guaranteed not to increase any loss corresponding to a proper scoring rule. This sheds some light on why optimizing NLL is good for calibration, as evidenced, e.g., by Guo et al. 2017; Alexandari et al. 2020, in particular improving ECE.

Calibration methods often fit a parametric post-processing of a predictor π, such as temperature scaling (Guo et al., 2017). They argue about calibration but optimize NLL.
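For concreteness, a minimal temperature-scaling sketch fit by NLL on held-out logits (PyTorch; the optimizer and step settings are illustrative assumptions, not the exact protocol of Guo et al. 2017):

```python
import torch

def fit_temperature(logits, labels, steps=200, lr=0.01):
    """Fit the single scalar T of temperature scaling by minimizing
    NLL on a held-out calibration set."""
    log_t = torch.zeros(1, requires_grad=True)    # parametrize T = exp(log_t) > 0
    opt = torch.optim.Adam([log_t], lr=lr)
    nll = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = nll(logits / log_t.exp(), labels)  # NLL of softmax(s / T)
        loss.backward()
        opt.step()
    return log_t.exp().item()
```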
We formally show why this is a perfectly correct idea.

Theorem 1. Let π : X → ∆ be a predictor and Tθ : ∆ → ∆ a parametric mapping, invertible for each θ ∈ Θ. Finding an adjusted predictor πθ = Tθ ◦ π minimizing the expected miscalibration is equivalent to minimizing the loss:
$$\arg\min_{\theta \in \Theta} \mathbb{E}\big[D(\pi_\theta, \varphi[\pi_\theta])\big] = \arg\min_{\theta \in \Theta} L[\pi_\theta]. \tag{10}$$

Proof. First we show that φ[T ◦ π] is invariant of T for any invertible T. The events T(π(X)) = T(π(x)) and π(X) = π(x) are equal, therefore
$$\varphi[T \circ \pi](x)_y = P\big(Y = y \mid T(\pi(X)) = T(\pi(x))\big) = \varphi[\pi](x)_y. \tag{11}$$
It follows that D(π̄, φ[T ◦ π](X)) = D(π̄, φ[π](X)), so the first two terms of the decomposition stay the same for any θ. Therefore, minimizing the whole loss over θ ∈ Θ is equivalent to minimizing the reliability term alone.

This allows us to overcome the general difficulty of estimating the expected miscalibration by simply using the empirical estimate of the loss! In particular, no binning of the simplex ∆ is involved.

4.2 DECOMPOSITION OF THE RISK

We observe that the true risk of the model-based strategy, R∗[q̂], also corresponds to a proper scoring rule and thus can be decomposed according to the theory.

Proposition 1. The following scoring rule corresponds to the loss of the model-based decision:
$$S(\pi, y) = l\Big(y,\; \arg\min_d \sum_{y'} \pi_{y'}\, l(y', d)\Big). \tag{12}$$

For two probability distributions π, ρ in ∆, Bröcker (2009) defines the following scoring function s, divergence D and entropy H:
$$s(\pi, \rho) = \sum_y S(\pi, y)\, \rho_y; \qquad D(\pi, \rho) = s(\pi, \rho) - s(\rho, \rho); \qquad H(\rho) = s(\rho, \rho). \tag{13}$$
In our case, the score s(π(x), p∗(·|x)) is the conditional risk of the prediction q̂(x), and its expectation is the risk of the strategy q̂: E[S(π(X), Y)] = R∗[q̂].

Proposition 2. The score s is proper (Bröcker, 2009), i.e., the "divergence" D is non-negative.

Proof. By definition,
$$s(\rho, \rho) = \sum_y S(\rho, y)\, \rho_y = \sum_y \rho_y\, l\Big(y, \arg\min_d \sum_{y'} \rho_{y'}\, l(y', d)\Big) = \min_d \sum_y \rho_y\, l(y, d). \tag{14}$$
Clearly it satisfies s(ρ, ρ) ≤ ∑_y ρ_y l(y, d̂) for any d̂, in particular for d̂ = arg min_d ∑_y π_y l(y, d).

Corollary 1. The decomposition (8) holds for the risk R∗[q̂].

The uncertainty term H(π̄) = min_d ∑_y p∗(y) l(y, d) is just the lowest risk attainable without considering observations. Let us discuss the reliability term. In our case D is not a true divergence, as it may vanish even if the two distributions are different. The reliability term is therefore more permissive. This is appropriate: indeed, if, e.g., the cost matrix has two identical rows, there is no need to distinguish the respective classes in the prediction and, respectively, no need to have the correct individual predictive probabilities for them. This motivates us to define the task-specific calibration accordingly:

Definition 4. Given a cost matrix l and the "divergence" D_l defined by (13), a predictor π is l-decision calibrated if
$$D_l\big(\pi(X), \varphi[\pi](X)\big) \overset{a.s.}{=} 0. \tag{15}$$
For any proper divergence, this definition would be equivalent to the distribution calibration in Definition 2. The selectivity of D_l in penalizing only those differences in the distribution which matter for the decision task is what makes it task-specific. It appears hard to estimate this miscalibration in general, as it still involves φ[π]. However, using Theorem 1 we can improve this task-specific calibration in parametric settings (e.g. temperature scaling) by simply minimizing the empirical risk of q̂.
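The task-specific "divergence" in (13) can be made concrete with a small sketch (a minimal illustration; note that it vanishes whenever the two distributions induce decisions of equal conditional cost, e.g. for cost matrices with identical rows):

```python
import numpy as np

def s_score(pi, rho, cost):
    """Scoring function s(pi, rho) from (13): expected cost under rho
    of the decision that is Bayes-optimal for pi."""
    d = int((pi @ cost).argmin())     # decision induced by pi
    return float(rho @ cost[:, d])    # its cost averaged over rho

def D_l(pi, rho, cost):
    """Task-specific divergence D_l(pi, rho) = s(pi, rho) - s(rho, rho);
    non-negative because s(rho, rho) is the minimum over decisions."""
    return s_score(pi, rho, cost) - s_score(rho, rho, cost)
```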
4.3 EMPIRICAL RISK MINIMIZATION

Consider a parametric predictor π(x)_y = p(y|x; θ) and let f(d, x; θ) = ∑_y p(y|x; θ) l(y, d) as before. Given a sample (x_i, y_i), i = 1, ..., N, from p∗, the empirical risk minimization for the model-based Bayesian strategy reads:
$$\min_\theta \frac{1}{N}\sum_i l(y_i, d_i) \quad \text{s.t.} \quad d_i = \arg\min_d f(d, x_i; \theta). \tag{16}$$
This problem is difficult because it is a so-called bi-level optimization problem, which has a discrete decision in the inner problem and a non-linear dependence on θ. Such formulations, where the inner problem corresponds to a general predictor based on solving a combinatorial optimization problem, have been studied before. Two methods that have been applied to this kind of problem are large margin methods (Tsochantaridis et al., 2005; Taskar et al., 2005) and direct loss / combinatorial black-box minimization (Song et al., 2016; Vlastelica et al., 2019).

4.3.1 DIRECT LOSS AND MARGIN RESCALING

The empirical risk can be easily evaluated but cannot be differentiated because of the arg min. This arg min over the set of labels can be considered as a small combinatorial solver. We will specialize and analyze the direct loss method (Song et al., 2016; Vlastelica et al., 2019) for this case. For simplicity, let us consider a single training sample (x∗, y∗) (with multiple training samples, losses and gradients sum up). Let us denote the vector of class probabilities π = p(·|x∗; θ). The estimate of the gradient in π according to the direct loss minimization approach is constructed as follows:
$$\hat d = \arg\min_d f(d, x^*); \qquad \hat d_\lambda = \arg\min_d \big[f(d, x^*) + \lambda\, l(y^*, d)\big]; \qquad \hat\nabla_\pi := \tfrac{1}{\lambda}\big[l(\cdot, \hat d_\lambda) - l(\cdot, \hat d)\big]. \tag{17}$$
Appendix A.1 gives details on how this is obtained from the general method of Vlastelica et al. (2019). The gradient in θ can then be computed by the chain rule. Here d̂ is the solution of the solver (the Bayesian decision) and d̂_λ is the decision of a perturbed problem. The strength of the perturbation is controlled by λ. Song et al. (2016) have shown that in the limit λ → 0 the gradient of the expected loss over a continuous data distribution matches E[∇̂_π]. In this limit, stochastic descent with ∇̂_π would directly minimize the (expectation of the non-differentiable) loss, which was termed direct loss minimization. However, these arguments are not applicable to a finite training sample. In practice, λ needs to be sufficiently large for ∇̂_π to be non-zero for at least some data points. In this setting we are no longer minimizing the original loss. However, one can define a surrogate loss function such that (17) is its true gradient. We call it the direct loss, so the method can now be validly interpreted as direct loss minimization:
$$L^\pm_\lambda = \pm\frac{1}{\lambda}\Big(\min_d f(d, x^*) - \min_d\big[f(d, x^*) \mp \lambda\, l(y^*, d)\big]\Big), \tag{18}$$
where ∓ is paired with ±. Vlastelica et al. (2019) advocate the use of a large λ, define a surrogate loss similar to L−_λ and show that it is a lower bound on the empirical loss for positive λ (Observation 3), where the empirical loss is L^E = l(y∗, arg min_d f(d, x∗)). Note that L^±_{−λ} = L^∓_λ, and therefore we can always assume λ > 0 in order to avoid redundancy. We show the following.

Proposition 3. The direct loss L−_λ is a lower bound on the empirical loss L^E and L+_λ is an upper bound.

The proof is given in Appendix A.1. It follows that the expectation over the training data (resp. the true distribution p∗) of L+_λ is an upper bound on the empirical risk (resp. the true risk).
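A minimal NumPy sketch of the perturb-and-difference gradient estimate (17) for a single sample (propagating the returned vector to θ through the softmax is left to the framework's chain rule):

```python
import numpy as np

def direct_loss_grad(pi, y_true, cost, lam):
    """Gradient estimate (17) of the direct loss w.r.t. the class
    probabilities pi, for one sample with true label y_true."""
    f = pi @ cost                                      # f(d, x*) for all d
    d_hat = int(f.argmin())                            # Bayesian decision
    d_lam = int((f + lam * cost[y_true]).argmin())     # perturbed decision
    return (cost[:, d_lam] - cost[:, d_hat]) / lam     # one entry per class
```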
Relation to Margin Rescaling The problem of minimizing the empirical risk over discrete strategies of the form arg min_y f(y; θ) was also studied in structured prediction (Tsochantaridis et al., 2005; Taskar et al., 2005). One of the most common approaches is called margin re-scaling (Tsochantaridis et al., 2005) and was successfully used in combination with deep networks as well (e.g. Knöbelreiter et al. 2017). Like an SVM, it puts a hinge loss on the violation of the classification constraints, with the margin proportional to the respective loss. We can show (see Appendix A.2) that the margin re-scaling approach leads to the following surrogate loss:
$$L^{MR}_\lambda = \frac{1}{\lambda}\Big(f(d^*, x^*) - \min_d\big[f(d, x^*) - \lambda\, l(y^*, d)\big]\Big), \tag{19}$$
where d∗ = arg min_d l(y∗, d) is the best decision given the true class label. Written in this form, the similarity to (18) is striking; the only difference is that d∗ is the best decision for the loss (knowing the true label) rather than the best decision based on the model (not knowing the true label). As a consequence, margin rescaling is a less tight upper bound.

Proposition 4. Margin re-scaling L^{MR}_λ coincides with the direct loss L+_λ in the region where the classifier makes correct decisions. Furthermore, L^E ≤ L+_λ ≤ L^{MR}_λ.

The proof is given in Appendix A.2. We believe this connection has not been known before. The two surrogate losses are illustrated in Fig. 1. For both approaches, if λ is small, the size of the margin is small and there is a flat region with zero gradient. As a simple remedy, we propose to smooth the minimum in (18) using the smooth minimum function
$$\min\nolimits^\beta(x) = -\frac{1}{\beta}\log\sum_k e^{-\beta x_k},$$
where the degree of smoothing is controlled by β.

5 EXPERIMENTS

In the experiments, we compare different calibration criteria for the same choice of a parametric family. Assuming that the network outputs scores s (otherwise, let s_y = log p(y|x)), we consider the following common choices to parametrize the corrected predictor π.

TS: Temperature Scaling (Guo et al., 2017): π = softmax(s/T), where T is a (non-negative) scalar temperature to calibrate.

BCTS: Bias-Corrected Temperature Scaling (Alexandari et al., 2020): π = softmax((s + b)/T), where additionally b is a vector of per-class biases to calibrate.

VS: Vector Scaling (Alexandari et al., 2020): π = softmax(s ⊙ w + b), where w is a vector of scaling factors, b is a vector of biases, and ⊙ is the coordinate-wise product.

We optimize each criterion in the above parameters using the Adam optimizer. In order to find the hyperparameters (learning rate, λ, β) we use the nested cross-validation procedure detailed in Appendix B.
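A sketch of the three parametrizations and of the smooth minimum used to soften (18) (PyTorch; a minimal illustration of the definitions above):

```python
import torch
import torch.nn.functional as F

def ts(s, T):                     # Temperature Scaling
    return F.softmax(s / T, dim=-1)

def bcts(s, T, b):                # Bias-Corrected Temperature Scaling
    return F.softmax((s + b) / T, dim=-1)

def vs(s, w, b):                  # Vector Scaling (coordinate-wise product)
    return F.softmax(s * w + b, dim=-1)

def smooth_min(x, beta):
    """Smooth minimum -(1/beta) * log sum_k exp(-beta * x_k);
    approaches min(x) as beta grows."""
    return -torch.logsumexp(-beta * x, dim=-1) / beta
```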
5.1 FUNGI EDIBILITY (DANISH FUNGI 2020)

In this experiment we consider the decision problem of whether to cook (eat) a mushroom given its predicted edibility category, based on the Danish fungi dataset (Picek et al., 2022). In order to compare calibration methods, we create 15 stratified splits of the data not used during training into calibration and test parts. Full details can be found in Appendix B. The cost matrix and the obtained results are shown in Fig. 2. Calibration with the DirectLoss criterion achieved a lower average test risk in all parametrizations, notably performing well also in the VS parametrization, where the other criteria performed the worst. The improvement over other methods can be considered statistically significant if one trusts the estimates of the mean and the variance (see below).

5.2 SKIN CANCER LESION TREATMENT (HAM10000)

In this experiment, we consider the decision problem of whether to assign a treatment given the lesion classification, using the skin lesion dataset (Tschandl, 2018). The training is performed on 75% of the data for 100 epochs. From the remaining data we create 100 random splits into calibration (15%) and test (10%) parts. Fig. 3(a) shows that the network significantly underestimates the true risk. After calibration (DirectLoss BCTS), the risk decreases, but the risk gap increases for some data splits. Indeed, with our calibration and optimization criterion being the empirical risk, there is no requirement that this gap should be made small or even decrease. Nevertheless, such an increase in the gap is unexpected of a calibration method and might indicate overfitting. Fig. 3 (a,b) show statistics of the differences between the pairs No calibration − DirectLoss and NLL − DirectLoss, confirming that calibration is helpful, but unable to tell whether NLL or DirectLoss is the better calibration objective. Comparisons for the TS and VS parametrizations are shown in Figs. B.2 and B.3.

5.3 RARE EXPENSIVE MISTAKES

We present a failure mode of calibration on the example of the CIFAR-10 dataset, with the trucks class considered dangerous (cost of a mistake 10000) while all other mistakes cost 1. In this setting, the decision boundary of q̂ shifts significantly towards classifying nearly all observations as trucks. Instances of trucks on which the model can nevertheless make a mistake become very rare. Depending on whether such an instance falls into the calibration set or into the test set, it may lead to a high cost at test time. In Fig. 4, for many splits DirectLoss is better than NLL in calibration, but in one split it makes a single expensive mistake. Only by chance was such a case not observed for NLL. Also, this was not observed in the Fungi experiment above (which also has extreme costs), presumably because deadly poisonous mushrooms are rather rare in the dataset. The empirical risk is theoretically backed up by generalization guarantees such as the Hoeffding inequality:
$$P\big(|R^*(q) - R^*_{\mathrm{emp}}(q)| > \varepsilon\big) < 2 e^{-2N\varepsilon^2/\Delta l^2},$$
where N is the number of samples and ∆l is the difference between the maximum and minimum cost. This means that in order to achieve the same confidence we used to have for the 0-1 cost, we need 10^8 times more samples, since ∆l grows from 1 to 10^4 and the required N scales with ∆l². We therefore would like to warn the community against relying on basic statistical evaluation like in our Fungi experiment and would be happy to receive feedback on how to approach the problems associated with high costs, in particular when evaluating calibration methods.
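As a quick numerical check of this sample-complexity argument (a minimal sketch; the tolerance ε and confidence δ are illustrative assumptions):

```python
import math

def hoeffding_n(eps, delta, cost_range):
    """Smallest N with 2 * exp(-2 * N * eps**2 / cost_range**2) < delta."""
    return math.ceil(cost_range**2 * math.log(2.0 / delta) / (2.0 * eps**2))

n_01 = hoeffding_n(eps=0.01, delta=0.05, cost_range=1.0)   # 0-1 cost
n_hi = hoeffding_n(eps=0.01, delta=0.05, cost_range=1e4)   # cost range ~10^4
print(n_hi / n_01)   # approximately 1e8, the factor quoted above
```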
6 CONCLUSION

We have given a so-far-missing theoretical justification for post-processing recalibration methods optimizing generic criteria, in particular NLL, showing how they relate to notions of calibration. We then developed a decomposition of the risk of the model-based Bayesian decision strategy and derived the respective definition of calibration from it. This approach gives a constructive way to obtain new task-specific definitions of calibration. We then improved the understanding of the direct loss and margin rescaling methods for ERM. We believe these results generalize beyond our calibration setup. In the experiments we observed that calibration was important to improve the test risk and that the task-specific calibration, represented by DirectLoss, can be more efficient (Fungi experiment, high costs). The calibration was also helpful in the lesions experiment (moderate costs); however, the increase in the risk gap indicates overfitting with DirectLoss. Finally, we demonstrated a failure case of DirectLoss and a flaw in comparisons under high costs.

ETHICS STATEMENT

Please be aware that neural networks can make unpredictable mistakes and produce overconfident estimates. Calibration methods, in particular the proposed one, are not guaranteed to fix these issues. They can improve statistical performance and measures of miscalibration. However, the statistics are random quantities and have to be considered very carefully, especially in the case of high costs, as we show in Section 5.3. The experiments conducted on decision making with the fungi or lesion datasets should be considered only as a proof of concept.

REPRODUCIBILITY STATEMENT

Appendix A contains the proofs not included in the main paper. Appendix B contains the description of the datasets and the details of the training, calibration and testing procedures. Details of the implementation can be provided to reviewers confidentially through OpenReview upon request.

A PROOFS

A.1 DIFFERENTIATION OF BLACKBOX COMBINATORIAL SOLVERS (DIRECT LOSS)

We first detail how the general method of Vlastelica et al. (2019) is instantiated for our problem and verify that it is the gradient of the function L−_λ we define in (18). A general linear combinatorial solver is formalized in Vlastelica et al. (2019) as
$$\mathrm{Solver}(w) = \arg\min_d w^T \phi(d), \tag{20}$$
where φ represents the discrete choice d as a vector of the same dimension as w. The direct loss method (Vlastelica et al., 2019, Alg. 1) is given by
$$\hat d := \mathrm{Solver}(w); \tag{21a}$$
$$w' := w + \lambda\, \frac{dL}{d\phi}(\hat d); \tag{21b}$$
$$\hat d_\lambda := \mathrm{Solver}(w'); \tag{21c}$$
$$\nabla_w := -\frac{1}{\lambda}\big[\phi(\hat d) - \phi(\hat d_\lambda)\big]. \tag{21d}$$
In our case the solver needs to be
$$\hat d = \arg\min_d f(d, x^*) = \arg\min_d \sum_y p(y|x^*; \theta)\, l(y, d). \tag{22}$$
Let π = p(·|x∗; θ). Two choices for φ qualify:
1. Let φ(d) = one_hot(d) and w ∈ R^D with w_d = ∑_y π_y l(y, d);
2. Let φ(d)_y = l(y, d) and w ∈ R^K with w_k = π_k.
Both choices lead to equivalent algorithms. We proceed with the second one for convenience, as it will define the gradient in π. Our loss is L(d) = l(y∗, d), therefore dL/dφ_y(d̂) = [[y = y∗]]. The direct loss method specializes as follows:
$$\hat d := \arg\min_d \sum_y \pi_y\, l(y, d); \tag{23a}$$
$$\pi' := \pi + \lambda\, \mathrm{one\_hot}(y^*); \tag{23b}$$
$$\hat d_\lambda := \arg\min_d \sum_y \pi'_y\, l(y, d) = \arg\min_d \Big(\sum_y \pi_y\, l(y, d) + \lambda\, l(y^*, d)\Big); \tag{23c}$$
$$\nabla_\pi := -\frac{1}{\lambda}\big[l(\cdot, \hat d) - l(\cdot, \hat d_\lambda)\big]. \tag{23d}$$
This is the form we present in (17). Finally, observe that the gradient ∇_π in (23) matches the gradient of L−_λ as defined in (18). Therefore, minimizing L−_λ is equivalent to the method of Vlastelica et al. (2019). Next we give a very simple proof of the upper / lower bound property of L±_λ (it extends to the general combinatorial solver case as well).

Proposition 3. The direct loss L−_λ is a lower bound on the empirical loss L^E and L+_λ is an upper bound.

Proof. We assume that all losses are non-negative (wlog) and show the bound property for a given training sample (x∗, y∗). Let f(d) = ∑_y p(y|x∗) l(y, d) and let d̂ = arg min_d f(d). Using the inequality
$$\min_d \Big[\sum_y p(y|x^*)\, l(y, d) + \lambda\, l(y^*, d)\Big] \le f(\hat d) + \lambda\, l(y^*, \hat d), \tag{24}$$
in L−_λ all terms cancel except l(y∗, d̂), giving L−_λ ≤ L^E. Similarly, using the inequality
$$-\min_d \Big[\sum_y p(y|x^*)\, l(y, d) - \lambda\, l(y^*, d)\Big] \ge -f(\hat d) + \lambda\, l(y^*, \hat d), \tag{25}$$
in L+_λ all terms cancel except l(y∗, d̂), giving L+_λ ≥ L^E.

A.2 DERIVATION OF MARGIN RESCALING

The derivation of the margin rescaling approach in Tsochantaridis et al. (2005) is somewhat obscure. A reasonable starting point is given by the SVM-like objective with slacks (but without the quadratic penalty on the weights):
$$\frac{1}{\lambda}\min_{\xi,\theta} \sum_i \xi_i \quad \text{s.t.} \quad (\forall d)\;\; f_i(d_i^*) \le f_i(d) - \lambda\big(l(y_i, d) - l(y_i, d_i^*)\big) + \xi_i, \tag{26}$$
where f_i(d) = ∑_y p(y|x_i; θ) l(y, d), (x_i, y_i) is the i-th training example and d_i∗ = arg min_d l(y_i, d) is the optimal decision for the training example i.
The constraint in this formulation requires that the model loss of the best decision, f_i(d_i∗), must be strictly less than the loss of any other decision f_i(d) with a margin λ(l(y_i, d) − l(y_i, d_i∗)), proportional to the loss excess of the respective decision. A violation of this constraint is penalized by a slack ξ_i, and the goal is to minimize the total slack. Notice that the constraint ensures that the slack is non-negative, because for d = d_i∗ all terms except ξ_i vanish. Solving for the optimal ξ_i in each summand, we obtain that summand i can be expressed as
$$L^{MR}_\lambda = \frac{1}{\lambda}\max_d\Big(f_i(d_i^*) - f_i(d) + \lambda\big(l(y_i, d) - l(y_i, d_i^*)\big)\Big) \tag{27}$$
$$\qquad\; = \frac{1}{\lambda}\Big(f_i(d_i^*) - \min_d\big(f_i(d) - \lambda(l(y_i, d) - l(y_i, d_i^*))\big)\Big). \tag{28}$$
Finally, under the assumption that the costs l are non-negative and that l(y_i, d_i∗) = 0 (which can be made without loss of generality), we obtain the formulation (19).

Proposition 4. Margin re-scaling L^{MR}_λ coincides with the direct loss L+_λ in the region where the classifier makes correct decisions. Furthermore, L^E ≤ L+_λ ≤ L^{MR}_λ.

Proof. The inequality L^E ≤ L+_λ is already shown in Proposition 3. The rest of the proof is simple once the two approaches are written in the respective forms that we have shown:
$$L^+_\lambda = \frac{1}{\lambda}\Big(\min_d f(d, x^*) - \min_d\big[f(d, x^*) - \lambda\, l(y^*, d)\big]\Big), \tag{29a}$$
$$L^{MR}_\lambda = \frac{1}{\lambda}\Big(f(d^*, x^*) - \min_d\big[f(d, x^*) - \lambda\, l(y^*, d)\big]\Big). \tag{29b}$$
Let us verify that L^{MR}_λ ≥ L+_λ. Since the term −min_d[f(d, x∗) − λ l(y∗, d)] is common to both, the inequality follows trivially from
$$f(d^*, x^*) \ge \min_d f(d, x^*). \tag{30}$$
The remaining claim of the proposition is also trivial: if the decision made by the classifier is correct, i.e., the optimal one, then (30) holds with equality.

B EXPERIMENT DETAILS

B.1 CROSS-VALIDATION PROCEDURE

Given a subset of data available for calibration (in the current calibration-test split), we create 10 folds for the internal cross-validation. We used stratified folds to maintain the class balance. In each fold we have 9/10 of the data for the optimization of calibration parameters and 1/10 for the validation of hyperparameters. The hyperparameters corresponding to the best average risk over the 10 folds are selected. We perform selection of the following hyperparameters: the learning rate α for all methods; λ and β for Direct Loss with smooth minimum. The candidate λ values are multiplied by 1/κ, where κ is the maximum value of the loss function; this makes the choice of λ invariant to the scale of the loss function. The search grids for the different methods are shown in Table B.1.
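A sketch of this inner cross-validation loop (scikit-learn for the stratified folds; `fit_calibrator` and `risk` are hypothetical stand-ins for the calibration routine and the empirical risk of q̂):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def select_hyperparams(logits, labels, grid, fit_calibrator, risk):
    """Pick the hyperparameter setting with the best average risk
    over 10 stratified folds, as described in B.1."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    best, best_risk = None, np.inf
    for params in grid:
        fold_risks = []
        for fit_idx, val_idx in skf.split(logits, labels):
            calib = fit_calibrator(logits[fit_idx], labels[fit_idx], **params)
            fold_risks.append(risk(calib(logits[val_idx]), labels[val_idx]))
        if np.mean(fold_risks) < best_risk:
            best, best_risk = params, np.mean(fold_risks)
    return best
```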
B.2 FUNGI EXPERIMENT

The trained neural network for mushroom classification (Picek et al., 2022) is adapted to our decision problem (deciding the edibility of a mushroom) as follows. There are 1604 species in the dataset, out of which we found and annotated the edibility information (6 categories) for 203 species. After this procedure the distribution of species becomes uneven, as shown in Fig. B.1; in particular, deadly poisonous mushrooms are relatively rare. We adopted the ResNet-50 network from Picek et al. (2022) as follows. From the probability vector over species produced by the model, we compute the probability vector over edibility states by marginalization. The accuracy of the model in classifying these 6 states was 91%. We then consider a decision problem with 6 states and 2 decisions (accept or not for cooking). We designed a realistic loss function, shown in Fig. 2 top-right. The calibration-test splits were created by taking 15 stratified folds of the test set and adding the validation set of the training phase to the calibration set. For this decision task, we are no longer interested in the accuracy of the classification, but in the expected loss, i.e. the risk shown in Fig. 2 left.

B.3 HAM10000 EXPERIMENT

We tried to follow the setup of Zhao et al. (2021) in order to allow for an indirect comparison.¹ In particular, we used the same data split and network and also tried to evaluate the gap between the model-estimated (empirical) risk and the true empirical risk. We trained a resnet121 model for 100 epochs on 75% of the data. All lesions having multiple views in the dataset were used for training. The remaining 25% consisted of independent instances, each with 1 view only. The training achieved a validation accuracy of 90% (the validation set was not used for choosing hyperparameters, only to report this number). The 25% of the data not used for training we split randomly into 15% for calibration and 10% for test. All splits were stratified (preserving class balance). This results in a test set size of 1015 data points (in each split). Fig. 3 shows the statistical analysis over 40 splits. As each split requires a calibration (with the nested cross-validation procedure), collecting more statistics is difficult. In our cost matrix we tried to closely replicate the values depicted in Zhao et al. (2021, Fig. 1) (motivated by medical domain knowledge) by matching the colors in the image and the color bar. We added a constant to each row to make all losses non-negative. This affects neither the Bayesian decision strategy nor the differences between any two risks. Pairwise comparisons for the TS and VS parametrizations, complementing Fig. 3, are shown in Fig. B.3. All kernel density estimates shown are computed with awkde² (Wang & Wang, 2007) using the default Silverman adaptive method. The calibration has a positive effect in these cases as well; however, the advantage for the VS parametrization appears to be on the side of NLL.

B.4 CIFAR-10 EXPERIMENT

In this experiment we used the CIFAR-10 dataset. The data splitting and calibration protocol were the same as in the fungi experiment. We trained an EfficientNetB0 that achieved a validation accuracy of 94.7%.

¹A direct comparison is not feasible at the moment: we evaluate only parametric calibration methods; the code and some details of their method are not available to us.
²Adaptive Width KDE with Gaussian Kernels, https://github.com/mennthor/awkde
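The species-to-edibility marginalization used in B.2 above amounts to summing the species probabilities within each edibility group; a minimal sketch (the group-assignment array is an illustrative assumption):

```python
import numpy as np

def edibility_probs(species_probs, edibility_of_species, n_states=6):
    """Marginalize (N, n_species) species probabilities into (N, n_states)
    edibility probabilities; edibility_of_species[k] gives the edibility
    category of species k."""
    out = np.zeros((species_probs.shape[0], n_states))
    for k, state in enumerate(edibility_of_species):
        out[:, state] += species_probs[:, k]
    return out
```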
1. What is the focus and contribution of the paper regarding cost-sensitive decision-making in multi-class classification?
2. What are the strengths and weaknesses of the proposed method, particularly in terms of theoretical background and practical benefits?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The paper considers the task of cost-sensitive decision-making in multi-class classification with a fixed cost matrix providing the costs of different actions (decisions) depending on the actual class of the instance. Following the terminology proposed by Zhao et al. (2021), optimal decisions can be made if the probabilistic classifier is decision-calibrated for this cost matrix when using the model-based Bayesian strategy for making decisions. Next, the paper revisits proper scoring rules to emphasize the well-known, but in this particular formulation possibly not previously explicitly proved, fact that optimizing proper losses results in calibrated probabilities (under certain assumptions). The paper then turns to minimizing the expected cost. This is a non-differentiable measure, and the idea is to use the techniques of Song et al. (2016), yielding the main method of the paper, called direct loss minimization. The experiments on 3 cost-sensitive multi-class classification tasks demonstrate that when post-hoc calibration is performed by optimizing the direct loss, the cost objective is improved compared to standard optimization of NLL (or ECE).

Yang Song, Alexander Schwing, Raquel Urtasun, et al. Training deep neural networks via direct loss minimization. In ICML: International Conference on Machine Learning, pp. 2169–2177, 2016.
Shengjia Zhao, Michael Kim, Roshni Sahoo, Tengyu Ma, and Stefano Ermon. Calibrating predictions to decisions: A novel approach to multi-class calibration. In NeurIPS: Advances in Neural Information Processing Systems, volume 34, pp. 22313–22324, 2021.

Strengths And Weaknesses
Strengths:
- The literature has mostly been covered in sufficient detail.
- The theoretical background to the proposed method has been covered well.
- Experiments show practical benefits from the method.

Weaknesses:
- While the experiments show benefits of direct loss minimization compared to NLL or ECE minimization for TS, BCTS, and VS, it is not clear whether it is also an improvement over the state of the art. For example, is it possible that some newer post-hoc calibration methods for multi-class classification would be even better? Some of the newer methods optimize NLL, and then the authors could argue that changing from NLL optimization to direct loss minimization might bring benefits there as well, and I agree. However, the decision calibration algorithm proposed by Zhao et al. (2021) does not optimize NLL, and it is not clear whether direct loss minimization of TS, BCTS, or VS would be stronger than that. Therefore, in my opinion, the paper should definitely include the decision calibration algorithm by Zhao et al. (2021) in the experiments for all considered datasets (such as presented in Figure 2 for the Fungi dataset).
- Figure 2 nicely reports test empirical risk improvements as a table for the Fungi dataset, but I would have expected similar tables for the other considered datasets (e.g. in the appendices). Currently it is not clear how the numbers compare for other datasets. From the text one can read that there was an expensive-mistake case in the experiments with the CIFAR-10 dataset. However, the numbers should still be presented (e.g. together with a note about this mistake in the caption of the respective table).
- It is great that the rare expensive mistakes have been discussed, but it also highlights a potential problem with the proposed method: the probabilities near the extremes (0 and 1) might not be sufficiently well calibrated (because the expensive class is predicted only when its probability is near 1 and the other class probabilities are near 0). It is true that much larger datasets would be needed to fully evaluate calibration near the extremes, but at least the paper could study how the calibration maps obtained from direct loss minimization differ from the calibration maps obtained from NLL minimization. Is the temperature for temperature scaling higher or lower in the experiments when changing to direct loss minimization? How are the class parameters changed in vector scaling?
- There are shortcomings in clarity. E.g., the proposed method has not been written out as a separate algorithm, and thus it is quite hard to extract from the text of the paper. The name has been given to the method near Eq. (18), but the gradients are actually given at Eq. (17). In (17) it is not at first sight obvious how it depends on the parameters Θ so that it could be used for optimizing Θ. The dependency is actually through f(d, x∗), but the definition of f was given a lot earlier and only as an inline formula. It is not sufficiently clear why Sections 4.1 and 4.2 are eventually needed for the development of the proposed method.
- The paper states that "However, using Theorem 1 we can improve this task-specific calibration in parametric settings (e.g. temperature scaling) by simply minimizing the empirical risk of q̂." In my view, it is obvious in the cost-sensitive classification task that cost minimization (e.g. via empirical risk minimization) is the primary objective and calibration is a secondary objective. Thus, it is a bit odd to justify the use of the primary objective through the secondary one, rather than the other way around.
- There seems to be an error in the definition of R̂[q]. Shouldn't it be R̂[q] = E[min_d f(d, X)]? In the current form it is otherwise not clear what the value of d would be, and also there are currently no random quantities involved.
- A relevant paper that has not been discussed: Santos-Rodríguez, R., Guerrero-Curieses, A., Alaiz-Rodríguez, R. and Cid-Sueiro, J., 2009. Cost-sensitive learning based on Bregman divergences. Machine Learning, 76(2), pp. 271-285.

Clarity, Quality, Novelty And Reproducibility
As written above, there are some issues with clarity. The overall quality of the paper is good and the results are original, but see the weaknesses listed above.
ICLR
Title
Latent Programmer: Discrete Latent Codes for Program Synthesis
Abstract
In many sequence learning tasks, such as program synthesis and document summarization, a key problem is searching over a large space of possible output sequences. We propose to learn representations of the outputs that are specifically meant for search: rich enough to specify the desired output but compact enough to make search more efficient. Discrete latent codes are appealing for this purpose, as they naturally allow sophisticated combinatorial search strategies. The latent codes are learned using a self-supervised learning principle, in which first a discrete autoencoder is trained on the output sequences, and then the resulting latent codes are used as intermediate targets for the end-to-end sequence prediction task. Based on these insights, we introduce the Latent Programmer, a program synthesis method that first predicts a discrete latent code from input/output examples, and then generates the program in the target language. We evaluate the Latent Programmer on two domains: synthesis of string transformation programs, and generation of programs from natural language descriptions. We demonstrate that the discrete latent representation significantly improves synthesis accuracy.
1 INTRODUCTION
Our focus in this paper is program synthesis, one of the longstanding grand challenges of artificial intelligence research (Manna & Waldinger, 1971; Summers, 1977). The objective of program synthesis is to automatically write a program given a specification of its intended behavior, such as a natural language description or a small set of input-output examples. Search is an especially difficult challenge within program synthesis (Alur et al., 2013; Gulwani et al., 2017), and many different methods have been explored, including top-down search (Lee et al., 2018), bottom-up search (Udupa et al., 2013), beam search (Devlin et al., 2017), and many others (see Section 2). We take a different philosophy: can we learn a representation of programs specifically to help search? A natural way of representing a program is as a sequence of source code tokens, but the synthesis task requires searching over this representation, which can be difficult for longer, more complex programs. A programmer often starts by specifying high-level components of a program as a plan, then fills in the details of each component; e.g., in string editing, a plan could be to extract the first name, then the last initial. We propose to use a sequence of latent variable tokens, called discrete latent codes, to represent such plans. Instead of having a fixed dictionary of codes, we let a model discover and learn what latent codes are useful and how to infer them from the specification. Our hypothesis is that a discrete latent code – a sequence of discrete latent variables – can be a useful representation for search (van den Oord et al., 2017; Roy et al., 2018; Kaiser et al., 2018). This is because we can employ standard methods from discrete search, such as beam search, over a compact space of high-level plans and then over programs conditioned on the plan, in a two-level procedure. We posit that the high-level search can help to organize the search over programs. In the string editing example earlier, a model could be confident that it needs to extract the last initial, but less sure about whether it needs to extract a first name.
By changing one token in the latent code, two-level search can explore alternative programs that do different things from the beginning. In traditional single-level search, by contrast, the model would need to change multi-token prefixes to reach the same alternatives, which is difficult to achieve within a limited search budget. We propose the Latent Programmer, a program synthesis method that uses learned discrete representations to guide search via a two-level synthesis. The Latent Programmer is trained by a self-supervised learning principle. First a discrete autoencoder is trained on a set of programs to learn discrete latent codes, and then an encoder is trained to map the specification of the synthesis task to these latent codes. Finally, at inference time, the Latent Programmer uses a two-level search. Given the specification, the model first produces an L-best list of latent codes from the latent predictor, and uses them to synthesize potential programs. On two different program synthesis domains, we find empirically that the Latent Programmer improves synthesis accuracy by over 10% compared to standard sequence-to-sequence baselines such as RobustFill (Devlin et al., 2017). We also find that our method improves the diversity of predictions, as well as accuracy on long programs.
2 BACKGROUND
Problem Setup The goal in program synthesis is to find a program in a given language that is consistent with a specification. Formally, we are given a domain specific language (DSL) which defines a space Y of programs. The task is described by a specification X ∈ X and is solved by some, possibly multiple, unknown program(s) Y ∈ Y. For example, each specification can be a set of input/output (I/O) examples denoted X = {(I_1, O_1), ..., (I_N, O_N)}. Then, we say that we have solved the specification X if we have found a program Y which correctly solves all the examples: Y(I_i) = O_i for all i = 1, ..., N. As another example, each specification can be a natural language description of a task, and the corresponding program implements said task. An example string transformation synthesis task with four I/O examples, together with a potential correct program in the string transformation DSL, is shown in Figure 1.
Vector Quantization Traditionally, neural program synthesis techniques process the input specification as a set of sequences and predict the output program token-by-token (Devlin et al., 2017). In this work, we present a new approach to synthesis that performs structured planning in latent space using a discrete code. We conjecture that programs have an underlying discrete structure; specifically, programs are compositional and modular, with components that get reused across different problems. Our approach leverages this structure to guide the search over large program spaces. Following works in computer vision (van den Oord et al., 2017; Roy et al., 2018), we discover such discrete structure by using a Vector Quantized Variational Autoencoder (VQ-VAE). VQ-VAEs work by feeding the intermediate representation of an autoencoder through a discretization bottleneck (van den Oord et al., 2017). For completeness, we provide background on VQ-VAEs below. In a VQ-VAE, latent codes are drawn from a discrete set of learned vectors c ∈ R^{K×D}, called the codebook. Each element in the codebook can be viewed as either a token with id k ∈ [K] or as an embedding c_k ∈ R^D. To generate the discrete codes, the continuous autoencoder output e is quantized via nearest-neighbor lookup into the codebook.
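Concretely, this nearest-neighbor lookup can be sketched as follows (NumPy; a minimal illustration of the quantization step formalized in equation 1 below):

```python
import numpy as np

def quantize(e, codebook):
    """Nearest-neighbor lookup of a continuous vector e (shape D) in a
    codebook of shape (K, D): returns the token id and its embedding."""
    dists = np.linalg.norm(codebook - e, axis=1)  # ||e - c_k||_2 for all k
    k = int(dists.argmin())                       # token id q_k(e)
    return k, codebook[k]                         # quantized embedding q_c(e)
```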
Formally, the token id q_k(e) and the quantized embedding q_c(e) are defined as
$$q_c(e) = c_{q_k(e)}, \quad \text{where} \quad q_k(e) = \arg\min_{k \in [K]} \lVert e - c_k \rVert_2. \tag{1}$$
For an input x, the training loss of a VQ-VAE consists of: a reconstruction loss for the encoder-decoder weights, a codebook loss that encourages codebook embeddings to be close to the continuous vectors which are quantized to them, and a commitment loss that encourages the encoded input ec(x) to "commit" to codes, i.e. not switch which discrete code it is quantized to. The loss is given by
$$\mathcal{L}(c, \theta, \phi) = \log p_\theta\big(x \mid q_c(ec_\phi(x))\big) + \lVert \mathrm{sg}(ec_\phi(x)) - c \rVert_2^2 + \beta \lVert \mathrm{sg}(c) - ec_\phi(x) \rVert_2^2, \tag{2}$$
where θ, φ are the parameters of the decoder and encoder, respectively, sg(·) is the stop-gradient operator that fixes the operand from being updated by gradients, and β controls the strength of the commitment loss. To stabilize training, van den Oord et al. (2017) also proposed removing the codebook loss and setting the codebook to an exponential moving average (EMA) of the encoded inputs.
3 SYNTHESIS WITH DISCRETE LATENT VARIABLES
We propose a two-level hierarchical approach to program synthesis that first performs high-level planning over an intermediate sequence, which is then used for fine-grained generation of the program. In our approach, a top-level module first infers a latent code, which gets used by a low-level module to generate the final program.
3.1 HIERARCHY OF TWO TRANSFORMERS
Our proposed Latent Programmer (LP) architecture consists of two Transformers in a two-level structure. The architecture comprises two modules: a latent predictor, which produces a latent code that can be interpreted as a coarse sketch of the program, and a latent program decoder, which generates a program conditioned on the code. The latent code consists of discrete latent variables as tokens, which we arbitrarily denote TOK_1, ..., TOK_K, whose meanings are assigned during training. Both components use a Transformer architecture due to its impressive performance on natural language tasks (Vaswani et al., 2017). To help the model assign useful meanings to the latents, we also leverage a program encoder, which is only used during training.
The program encoder ec(Y) encodes the true program Y = [y_1, y_2, ..., y_T] into a shorter sequence of discrete latent variables Z = [z_1, z_2, ..., z_S], represented as codebook entries; that is, each z_i ∈ R^D is one of K entries in a codebook c. The latent sequence serves as the ground-truth high-level plan for the task. The function ec(Y) is a Transformer encoder, followed by a stack of convolutions of stride 2, each halving the length of the sequence. We apply the convolution ℓ times, which reduces a T-length program to a latent sequence of length ⌈T/2^ℓ⌉. This provides temporal abstraction, since the high-level planning actions are made only every 2^ℓ steps. In summary, the program encoder is given by
$$ec(Y) \leftarrow h_\ell; \qquad h_m \leftarrow \mathrm{Conv}(h_{m-1}) \;\text{for}\; m \in 1 \ldots \ell; \qquad h_0 \leftarrow \mathrm{TransformerEncoder}(Y). \tag{3}$$
Here TransformerEncoder(·) applies a stack of self-attention and feed-forward units on input embeddings via a residual path, described in detail by Vaswani et al. (2017). This will be used, along with the latent program decoder, as an autoencoder during training (see Section 3.2).
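A sketch of the program encoder in (3) (PyTorch; the layer sizes follow the experimental configuration reported below, but this is an illustrative approximation, not the authors' exact implementation):

```python
import torch
import torch.nn as nn

class ProgramEncoder(nn.Module):
    """Transformer encoder followed by l stride-2 convolutions, reducing a
    T-length program to a latent sequence of length ceil(T / 2**l)."""
    def __init__(self, vocab, d_model=128, n_layers=3, n_heads=4, l=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.convs = nn.ModuleList(
            nn.Conv1d(d_model, d_model, kernel_size=3, stride=2, padding=1)
            for _ in range(l))

    def forward(self, y):                 # y: (batch, T) token ids
        h = self.encoder(self.embed(y))   # h0, shape (batch, T, d_model)
        h = h.transpose(1, 2)             # (batch, d_model, T) for Conv1d
        for conv in self.convs:           # each convolution halves the length
            h = conv(h)
        return h.transpose(1, 2)          # (batch, ceil(T/2**l), d_model)
```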
The latent predictor lp(X) autoregressively predicts a coarse latent code lp(X) ∈ R^{S×K}, conditioned on the program specification X. The latent predictor outputs a sequence of probabilities, which can be decoded using search algorithms such as beam search to generate a predicted latent code Z′. This is different from the program encoder, which outputs a single sequence Z, because we use the latent predictor to organize the search over latent codes; at test time, we will obtain an L-best list of latent token sequences from lp(X). The latent predictor is given by a stack of Transformer blocks with the specification X as inputs.
Similarly, the latent program decoder d(Z, X) defines an autoregressive distribution over program tokens given the specification X and the coarse plan Z ∈ R^{S×K}, represented as codebook entries. The decoder is a Transformer that jointly attends to the latent sequence and the program specification. This is performed via two separate attention modules, whose outputs are concatenated into the hidden unit. Formally, given a partially generated program Y′ = [y′_1, y′_2, ..., y′_{t−1}] and the encoded specification E = TransformerEncoder(X), the latent program decoder computes
$$h_t = \mathrm{Concat}\big(\mathrm{TransformerDecoder}(Y', E)_{t-1},\; \mathrm{TransformerDecoder}(Y', Z)_{t-1}\big), \tag{4}$$
where TransformerDecoder(y, x) denotes a Transformer decoder applied to outputs y while attending to the encoding x, and the subscript indexes an entry in the resulting output sequence. Finally, the distribution over output tokens at step t is given by d_t(Z, X) = Softmax(W(h_t)), where W is a learned parameter matrix. The latent program decoder thus defines a distribution over programs autoregressively as p(Y|Z, X) = ∏_t p(y_t | y_{<t}, Z, X), where p(y_t | y_{<t}, Z, X) = d_t(Z, X). When X consists of multiple I/O examples, each example is encoded as E_i = TransformerDecoder(O_i, I_i), i.e. the output string is processed while attending to the encoded input string. Then a separate hidden state per I/O example is computed following equation 4, followed by a late max-pool to get the final hidden state. Note that the program encoder and latent program decoder make up a VQ-VAE model of programs, with additional conditioning on the specification. The complete LP architecture is summarized in Figure 2, and an end-to-end example run of our architecture is shown in Figure 4.
3.2 TRAINING
Our LP model performs program synthesis using a two-level search, first over latent sequences and then over programs. Given a program specification, we want to train our latent predictor to produce an informative latent sequence from which our latent program decoder can accurately predict the true program. Our training loss for the LP model consists of three supervised objectives.
The autoencoder loss ensures that the latent codes contain information about the program. It is a summation of the reconstruction loss between the autoencoder output d(q_c(Y), X) and the true program Y, as well as a commitment loss that trains the encoder output ec(Y) to be close to the codebook c. As in Roy et al. (2018), the codebook is not trained but set to the EMA of the encoder outputs. This loss is similar to the loss function of a VQ-VAE as in equation 2, but also depends on the specification X. This objective trains the latent tokens in the codebook so that they correspond to informative high-level actions, and makes sure our latent program decoder can accurately recover the true program given the specification and a plan comprising such actions.
The latent prediction loss ensures that latent codes can be predicted from specifications. It is a reconstruction loss between the distribution over latents predicted from the specification, lp(X), and the autoencoded latents q_k(ec(Y)) from the ground-truth program.
This is a self-supervised approach that treats the autoencoded latent sequence as the ground-truth high-level plan and trains the latent predictor to generate the plan using just the program specification X. Note that the program encoder is only used in training, as at test time ec(Y) is unknown, so the LP model uses lp(X) instead.
Finally, the end-to-end loss ensures that programs can be predicted from specifications. This is especially important because in the reconstruction loss the latent program decoder receives as input latent codes from the autoencoded latent sequences ec(Y), whereas at test time the decoder receives a latent code from the latent predictor lp(X). This can result in mistakes in the generated program, since the decoder has never been exposed to noisy results from the latent predictor. The end-to-end loss alleviates this issue. It is the probability of the correct program Y when predicted from a soft-quantized latent code, given by lp(X)^T c. This has the added benefit of allowing the gradient to flow through the latent predictor, training it in an end-to-end way. In summary, the full loss for a training instance is
$$\mathcal{L}(c, \theta, \phi, \psi) = \underbrace{\log p_\theta\big(Y \mid q_c(ec_\phi(Y)), X\big) + \beta \lVert \mathrm{sg}(c) - ec_\phi(Y) \rVert_2^2}_{\text{autoencoder}} + \underbrace{\log p\big(q_k(ec_\phi(Y)) \mid lp_\psi(X)\big)}_{\text{latent prediction}} + \underbrace{\log p_\theta\big(Y \mid lp_\psi(X)^T c, X\big)}_{\text{end-to-end}} \tag{5}$$
where θ, φ, and ψ represent the parameters of the latent program decoder, the program encoder, and the latent predictor, respectively. Furthermore, for the first 10K steps of training, we give embeddings of the ground-truth program Y, averaged over every 2^ℓ tokens, as the latent sequence instead of ec(Y). This pre-training ensures that, initially, the latent code carries some information about the program, so that the attention to the code has reasonable gradients that can then be propagated to the program encoder after pre-training. Doing this was empirically shown to prevent the bypassing phenomenon, where the latent code is ignored during decoding (Bahuleyan et al., 2017).
3.3 INFERENCE
During inference, we use a multi-level variant of beam search to decode the output probabilities of our LP model. Standard beam search with beam B will generate the top-B most likely programs according to the model and find the first one (if any) that is consistent with the specification (Parisotto et al., 2017; Devlin et al., 2017). In our case, we first perform beam search for L latent beams, and then for ⌊B/L⌋ programs per latent sequence. Note that during inference the latent predictor continues to generate latent tokens until an end-of-sequence token is produced. This means that the generated latent sequence does not necessarily have length ⌈T/2^ℓ⌉ as during training; however, we found the latent sequence lengths during training and evaluation to be close in practice. Setting L = B allows for maximum exploration of the latent space, while setting L = 1 reduces our method to standard beam search, i.e., exploitation of the most likely latent decoding. We choose L = √B in our experiments, but explore the effect of various choices of L in Section 5.2.
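The two-level procedure can be sketched as follows (Python pseudocode; `beam_search` over each model and the I/O `consistent` check are hypothetical helper functions standing in for the standard routines):

```python
import math

def two_level_search(spec, latent_predictor, program_decoder,
                     beam_search, consistent, B):
    """First search for L = sqrt(B) latent codes, then for B // L
    programs per code; return the first program matching the spec."""
    L = max(1, int(math.isqrt(B)))
    latent_codes = beam_search(latent_predictor, spec, beam=L)
    for z in latent_codes:                          # top level: plans
        programs = beam_search(program_decoder, (spec, z), beam=B // L)
        for y in programs:                          # low level: programs
            if consistent(y, spec):
                return y
    return None                                     # no consistent program found
```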
4 RELATED WORK
Program Synthesis Our work deals with program synthesis, which involves combinatorial search for programs that match a specification. Many different search methods have been explored within program synthesis, including search within a version-space algebra (Gulwani, 2011), bottom-up enumerative search (Udupa et al., 2013), stochastic search (Schkufza et al., 2013), genetic programming (Koza, 1994), and reducing the synthesis problem to logical satisfiability (Solar-Lezama et al., 2006). Neural program synthesis involves learning neural networks either to predict distributions over functions that guide a synthesizer (Balog et al., 2017) or to predict the program autoregressively in an end-to-end fashion (Parisotto et al., 2017; Devlin et al., 2017). SketchAdapt (Nye et al., 2019) combined these approaches by first generating a program sketch with holes, and then filling the holes using a conventional synthesizer. Related to our work, DreamCoder (Ellis et al., 2020) iteratively builds sketches using progressively more complicated primitives through a wake-sleep algorithm. Our work is closely related in spirit but fundamentally differs in two ways: (1) our sketches are comprised of a general latent vocabulary that is learned in a simple, self-supervised fashion, and (2) our method avoids enumerative search, which is prohibitively expensive for large program spaces. There is also a line of work that deals with learning to process partial programs in addition to the specification. In execution-guided program synthesis, the model guides iterative extensions of partial programs until a matching one is found (Zohar & Wolf, 2018; Chen et al., 2019; Ellis et al., 2019). More recently, Balog et al. (2020) proposed a differentiable fixer that is trained to iteratively edit incorrect programs. We regard these works as complementary; they can be combined with ours to refine predictions.
Discrete Latent Bottlenecks Variational autoencoders (VAE) were first introduced using continuous latent representations (Kingma & Welling, 2014; Rezende et al., 2014). Several promising approaches were proposed to use discrete bottlenecks instead, such as continuous relaxations of categorical distributions, e.g. the Gumbel-Softmax reparametrization trick (Jang et al., 2017; Maddison et al., 2017). Recently, VQ-VAEs using nearest-neighbor search on a learned codebook (see Section 2 for more details) achieved impressive results, almost matching continuous VAEs (van den Oord et al., 2017; Roy et al., 2018). Discrete bottlenecks have also been used for sentence compression (Miao & Blunsom, 2016) and text generation (Puduppully et al., 2019), but these works do not learn the semantics of the latent codes, like ours does. Within the domain of synthesis of chemical molecules, Gómez-Bombarelli et al. (2018) applied Bayesian optimization within a continuous latent space to guide this structured prediction problem. Learning to search has also been considered in the structured prediction literature (Daumé et al., 2009; Chang et al., 2015; Ross et al., 2011), but to our knowledge these works do not consider the problem of learning a discrete representation for search. Notably, VQ-VAE methods have been successfully used to encode natural language into discrete codes for faster decoding in machine translation (Kaiser et al., 2018). Our work similarly uses a VQ-VAE to learn a discrete code, but we use the learned code in a two-level search that improves accuracy. To do so, we propose a model that is autoregressive in both the latent and program space, and perform two-level beam search on latent codes and programs.
The key novelty behind our work is the realization that first searching over a learned discrete latent space can assist search over the complex program space; using a VQ-VAE as Kaiser et al. (2018) did enables us to do so.
5 EXPERIMENTS
We now present the results of evaluating our Latent Programmer model in two test domains: synthesis of string transformation programs from examples, and code generation from natural language descriptions. We compare our LP model against several strong baselines.
RobustFill [LSTM] is a seq-to-seq LSTM with attention on the input specification, trained to autoregressively predict the true program. The architecture is comparable to the RobustFill model designed originally for the string transformation tasks in our first domain (Devlin et al., 2017), but easily generalizes to all program synthesis domains. We detail the architecture in Appendix A.
RobustFill [Transformer] alternatively uses a Transformer architecture, equivalent in architecture to the latent planner in our LP model, also trained to autoregressively predict the program. Transformers were found to perform much better than LSTMs in language tasks because they process the entire input as a whole and run no risk of forgetting past dependencies (Vaswani et al., 2017). This baseline can also be considered an ablation of our LP model without any latent codes.
The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels. We introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE. In both cases the latent space is continuous, and well-known combinatorial search algorithms such as beam search cannot search over the space.
Latent RobustFill [AE] replaces the VQ-VAE component of our LP model with a generic autoencoder. This makes the latent code a sequence of continuous embeddings. The latent prediction loss in equation 5 is simply replaced by a squared error between the output of the autoencoder and the latent predictor. Performing beam search over the continuous latent space is intractable, so during inference we generate only one latent sequence per task; this is equivalent to the two-level beam search described earlier with L = 1. In addition, because we cannot define an end-of-sequence token in the latent space, this baseline must be given knowledge of the true program length even during inference, and always generates a latent sequence of length ⌈T/2^ℓ⌉.
Latent RobustFill [VAE] substitutes the VQ-VAE component with a VAE (Kingma & Welling, 2014). This again produces a continuous latent space, but regularized to be distributed approximately as a standard Gaussian. Performing beam search is still intractable, but we can sample L latent sequences from the Gaussians determined by the VAE, and perform beam search on the programs afterwards. Again, we assume that the true program length is known during inference.
5.1 STRING TRANSFORMATION
The first test domain is a string transformation DSL frequently studied in the program synthesis literature (Parisotto et al., 2017; Devlin et al., 2017; Balog et al., 2020). Tasks in this domain involve finding a program which maps a set of input strings to a corresponding set of outputs. Programs in the DSL are a concatenation of expressions that perform regex-based string transformations (see Appendix A for the full DSL).
We perform experiments on a synthetic dataset generated by sampling programs from the DSL and then the corresponding I/O examples, using a heuristic similar to the one used in NSPS (Parisotto et al., 2017) and RobustFill (Devlin et al., 2017) to ensure a nonempty output for each input. We consider programs comprising a concatenation of up to 10 expressions and limit the lengths of strings in the I/O examples to at most 100 characters. All models have an embedding size of 128 and hidden size of 512, and the attention layers consist of 3 stacked layers with 4 heads each. For the LP model, we used a latent compression factor $\ell = 2$ and vocabulary size K = 40. The models are trained on roughly 25M tasks and evaluated on 1K held-out ones. In Table 1, we report the accuracy of our method against the baselines, i.e., the number of times a program conforming to the I/O examples was found. Across all beam sizes, our LP model performed 5-7 percentage points better (over 10% relative to baseline accuracy) than the next best model. From our ablative study, we see that two-level search using discrete latent codes was important, as the baselines over continuous latent spaces performed comparably to the traditional RobustFill model. Recently, SketchAdapt also proposed a two-level search (Nye et al., 2019), but at the top level it performs beam search over the program space augmented with a HOLE token. In contrast, our method searches over a learned, general latent space. During the low-level search, SketchAdapt enumerates partial programs to fill in the HOLE tokens using a learned synthesizer similar to DeepCoder (Balog et al., 2017), whereas we again perform beam search. To compare the two, we evaluate our LP model on samples generated according to Nye et al. (2019), which slightly modifies the DSL to increase the performance of synthesizers, and report results in Table 2. Since enumeration can be done more quickly than beam search, we let SketchAdapt synthesize 3,000 programs using B top-level beams, whereas our LP model can only generate B programs. Our LP model outperforms SketchAdapt even in the modified DSL.

5.2 ANALYSIS

We conduct extensive analysis to better understand our LP model in terms of learning, the ability to generate long programs, and the diversity of the beams. All results are reported with beam size B = 10.

Model Size Our LP model uses an additional latent code for decoding, which introduces more parameters than the baseline RobustFill model. To make a fair comparison, we vary the embedding and hidden dimensions of all evaluated methods and compare the effect of the number of trainable parameters on accuracy. Figure 3(a) shows that all methods respond well to an increase in model size. Nevertheless, we see that even when normalized for size, our LP model outperforms the baselines by a significant margin.

Program Length Prior work has shown that program length is a reasonable proxy for problem difficulty. We hypothesize that using latent codes is most beneficial when generating long programs. Figure 3(b) shows how the ground-truth program length affects the accuracy of our LP model compared to RobustFill, which lacks latent codes. As expected, accuracy decreases with problem complexity. Perhaps surprisingly, though, we see a large improvement in our LP model's ability to handle more complex problems. In Figure 4, we also show an illustrative example in the domain where our LP model found a valid program whereas the RobustFill model did not.
In this example, the ground-truth program was long but had a repetitive underlying structure. Our LP model correctly detected this structure, as evidenced by the predicted latent sequence; the program found by LP in Figure 4 is GetAll_NUMBER | Const(:) | GetToken_ALL_CAPS_1 | Const(.) | GetToken_ALL_CAPS_2 | Const(.) | GetToken_ALL_CAPS_-1 | Const(.). We show additional examples in Figure 9 of Appendix B. It is important to note that our method allows tokens in the discrete latent code to take on arbitrary meaning, yielding rich and expressive latent representations. The trade-off is that, because the latent codes are not grounded, they are difficult to interpret objectively. Grounding the latent space to induce interpretability is an avenue for future work.

Latent Beam Size In multi-level beam search with beam size B, first L latent beams are decoded, then $\lfloor B/L \rfloor$ programs per latent sequence. The latent beam size L controls how much search is performed over the latent space. We theorize that a higher L produces more diverse beams; however, too high an L can be harmful, missing programs with high joint log-probability. We show the effect of the latent beam size on both the beam-10 accuracy and a proxy measure of diversity. Following prior work, we measure diversity by counting the number of distinct n-grams in the beams, normalized by the total number of tokens to bias against long programs (Vijayakumar et al., 2018). We report the results varying L for B = 10 in Figure 5(a). As expected, increasing the latent beam size L improves the diversity of output programs, but an excessively large L harms final accuracy. An important observation is that the L = 1 case, where one latent code is used to decode all programs, performs similarly to the baseline RobustFill. In this extreme, no search is performed over the latent space, and our proposed two-level search reduces to searching only over programs; this is further evidence that explicit two-level search is critical to the LP model's improved performance.

Latent Length and Vocabulary Size Since the discretization bottleneck is a critical component in generating latent codes in our LP model, we also investigate its performance under different hyperparameter settings. Two important variables for the VQ are the latent length compression factor $\ell$ and the size of the latent vocabulary K. If $\ell$ is too small, the latent space becomes too large to search; on the other hand, too large an $\ell$ can mean individual latent tokens cannot encode the information needed to reconstruct the program. Similarly, we expect that too small a vocabulary K limits the expressiveness of the latent space, while too large a K makes predicting the correct latent code too difficult. We confirm this in our evaluations in Figure 5(b) and Figure 5(c).

5.3 PYTHON CODE GENERATION

Our next test domain is a Python code generation (CG) task, which involves generating code for a function that implements a natural-language specification. The dataset consists of 111K Python examples, each consisting of a docstring and its corresponding code snippet, collected from GitHub (Wan et al., 2018). An example docstring and program from the dataset is shown in Figure 6. We trained a language-independent sub-word tokenizer jointly on the data (Kudo & Richardson, 2018) and processed the dataset into a vocabulary of 35K sub-word tokens. Furthermore, following Wei et al. (2019), we set the maximum length of the programs to 150 tokens, resulting in 85K examples; a rough sketch of this preprocessing follows.
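For readers who wish to reproduce the preprocessing, a sketch using the SentencePiece library is given below; the file names and exact settings are our assumptions rather than the authors' script.

import sentencepiece as spm

# Train a language-independent sub-word tokenizer jointly on the data.
spm.SentencePieceTrainer.train(
    input="docstrings_and_code.txt",   # hypothetical dump of the 111K examples
    model_prefix="cg_tokenizer",
    vocab_size=35000)                  # 35K sub-word tokens, as in the text

sp = spm.SentencePieceProcessor(model_file="cg_tokenizer.model")

# Filter out examples whose program exceeds 150 sub-word tokens.
def keep_example(program_source: str) -> bool:
    return len(sp.encode(program_source)) <= 150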
Across all models, we set the embedding size to 256 and the hidden size to 512, and the attention layers consist of 6 stacked layers with 16 heads each, similar to models used in neural machine translation (Vaswani et al., 2017). For the LP model, we used a latent compression factor $\ell = 2$ and vocabulary size K = 400 after a hyperparameter search. The models are evaluated on 1K held-out examples. We initially found it difficult for the program encoder to detect latent sequence structure in the ground-truth programs as-is, due to noise in variable names. To remedy this, we used an abstract syntax tree (AST) parser on the ground-truth programs to replace the i-th function argument and the i-th variable appearing in a program with the tokens ARG_i and VAR_i, respectively. This was used only when training the program encoder and did not impact evaluation.

Method                      B = 1   B = 10   B = 100
Base (Wei et al., 2019)     10.4    -        -
Dual (Wei et al., 2019)     12.1    -        -
RobustFill [LSTM]           11.4    14.8     16.0
RobustFill [Transformer]    12.1    15.5     17.2
Latent Programmer           14.0    18.6     21.3

Table 3: BLEU scores on the code generation task.

We evaluate performance by computing the best BLEU score among the output beams (Papineni et al., 2002), where BLEU is computed as the geometric mean of n-gram matching precision scores up to n = 4. Table 3 shows that our LP model outperforms the baselines. The results suggest that this is a difficult task, which may be due to the ambiguity of specifying code from a short docstring description. As evidence, we additionally include results from a recent work that proposed seq-to-seq CG models on the same data and performed similarly to our baselines (Wei et al., 2019). These results show that the improvements due to the LP model persist even in difficult CG domains. For example docstrings and code generated by the LP model, refer to Figure 9 in Appendix B.

6 CONCLUSION

In this work we proposed the Latent Programmer (LP), a novel neural program synthesis technique that leverages structured latent sequences to guide search. The LP model consists of a latent predictor, which maps the input specification to a sequence of discrete latent variables, and a latent program decoder, which generates a program token-by-token while attending to the latent sequence. The latent predictor was trained via a self-supervised method in which a discrete autoencoder of programs was learned using a discrete bottleneck, specifically a VQ-VAE (van den Oord et al., 2017), and the latent predictor learns to predict the autoencoded sequence as if it were the ground truth. During inference, the LP model first searches in latent space for discrete codes, then conditions on those codes to search over programs. Empirically, we showed that the Latent Programmer outperforms state-of-the-art baselines such as RobustFill (Devlin et al., 2017), which ignore latent structure. Exciting future avenues of investigation include achieving better performance by grounding the latent vocabulary and generalizing our method to other tasks in natural language and structured prediction.

A EXTENDED DESCRIPTION OF DSL AND ROBUSTFILL MODEL

The DSL for string transformations we use is the same as that used in RobustFill (Devlin et al., 2017), and is shown in Figure 7. The top-level operator for programs in the DSL is a Concat operator that concatenates a random number (up to 10) of expressions $e_i$. Each expression e can be a substring expression f, a nesting expression n, or a constant string c; a schematic encoding of this structure is sketched below.
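As a schematic, the grammar just described might be encoded as follows; the class and field names are our illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass
from typing import List, Union

@dataclass
class SubstringExpr:        # f: substring by indices or by regex occurrences
    k1: int                 #    (the regex-based variant is described below)
    k2: int

@dataclass
class NestingExpr:          # n: e.g. extract the i-th occurrence of a regex;
    regex: str              #    composable with other f or n expressions
    i: int

@dataclass
class ConstStr:             # c: a constant string
    value: str

Expr = Union[SubstringExpr, NestingExpr, ConstStr]

@dataclass
class Program:              # top-level Concat of up to 10 expressions
    exprs: List[Expr]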
A substring expression can either return the substring between left index $k_1$ and right index $k_2$, or between the $i_1$-th occurrence of regex $r_1$ and the $i_2$-th occurrence of regex $r_2$. The nesting expressions also return substrings of the input, such as extracting the i-th occurrence of a regex, but can additionally be composed with existing substring or nesting expressions for more complex string transformations.

RobustFill Model RobustFill (Devlin et al., 2017) is a seq-to-seq neural network with an encoder-decoder architecture, where the encoder computes a representation e(X) of the input, and the decoder autoregressively generates the output given the source representation, i.e., the conditional likelihood of $Y = [y_1, \ldots, y_T]$ decomposes as $p(Y \mid X) = \prod_{t=1}^{T} p(y_t \mid y_{<t}, X)$. In RobustFill, the probability of decoding each token $y_t$ is given by $p(y_t \mid y_{<t}, X) = \mathrm{Softmax}(W(h_t))$, with W being the projection onto logits, or unnormalized log-probabilities. The hidden representation $h_t$ is an LSTM hidden unit given by

$$E_t = \mathrm{Attention}(h_{t-1}, e(X)), \qquad h_t = \mathrm{LSTM}(h_{t-1}, E_t).$$

Here e(X) is the sequence of hidden states after processing the specification with an LSTM encoder, and $\mathrm{Attention}(Q, V)$ denotes scaled dot-product attention with query Q and key-value sequence V (Bahdanau et al., 2016). In the case of X being multiple I/O examples, the RobustFill model of Devlin et al. (2017) uses double attention:

$$s^I_{t,i} = \mathrm{Attention}(h_{t-1}, e(I_i)),$$
$$s^O_{t,i} = \mathrm{Attention}(\mathrm{Concat}(h_{t-1}, s^I_{t,i}), e(O_i)),$$
$$h_{t,i} = \mathrm{LSTM}(h_{t-1}, \mathrm{Concat}(s^I_{t,i}, s^O_{t,i})) \quad \forall\, 1 \le i \le N,$$

and hidden states are pooled across examples before being fed into the final softmax layer:

$$h_t = \mathrm{maxpool}_{1 \le i \le N}\, \tanh(V(h_{t,i})),$$

where V is another projection.

B EXAMPLES OF GENERATED PROGRAMS AND LATENT CODES
1. What is the novel approach introduced by the Latent Programmer system in program synthesis?
2. What are the strengths of the proposed method, particularly in its simplicity and representational scheme?
3. What are the weaknesses of the paper regarding the comparison with baselines and ablations?
4. How can the performance difference between the Latent Programmer and transformer RobustFill baseline be disentangled?
5. What additional experiments or variations of the model could help strengthen the paper and provide valuable insights to the neural program synthesis community?
Review
Edit: I have increased my score to 7.

This paper introduces a novel program synthesis system called the Latent Programmer, which uses discrete latent codes as a representational scheme to solve program synthesis problems in two domains: string transformations from examples and code generation from language descriptions.

Strengths:
- The paper is relatively clear.
- The approach is novel.
- The results seem to support the claim that this model outperforms baselines (although it would help to report the results of multiple runs with standard error).
- The relative simplicity of the approach is a plus; it doesn't seem that it would be terribly difficult for a researcher to adopt this technique for a new problem.

Weaknesses:
I think the baselines/ablations could be more complete. For example, it seems that the gains over the RobustFill baselines could be due to any of 3 factors: 1) the use of discrete representations, 2) the use of an autoencoding loss, or 3) the ability to search through latent representations at test time. Unless I'm mistaken, compared to LP, the transformer RobustFill baseline differs in terms of both (1) and (2): RobustFill does not use discrete latent codes, and it does not use the autoencoding or latent prediction losses. As written, the paper seems to assume that (1) is the primary reason for the performance difference ("[the transformer RobustFill model] can also be considered an ablation of our LP model without latent codes"). However, I think these two factors need to be better disentangled, in order to determine which contributes most to the performance. Can a RobustFill model be trained with an additional auto-encoding loss, so that its loss function is more analogous to LP's? Similarly, how might a continuous latent variable model, such as a VAE, perform on the string editing tasks? Likewise, it seems there is evidence that (3) is an important factor: in Figure 5, when doing a beam search of size 10 but only searching in the decoder space and keeping the latents fixed (L=1), the performance seems identical to the transformer RobustFill baseline. LP seems to beat baselines with B=1. What are the results for B=100 and L=1? I think that disentangling these factors would really strengthen the paper, and could also be of large value to the neural program synthesis community.

Summary:
I think this is an interesting line of work with promising results. However, I do think that a baseline which uses an autoencoding loss but does not use discrete latent codes is an important ablation to perform. I therefore recommend a weak accept, and I'd be willing to raise my score if my concerns about baselines were addressed.
ICLR
1. What is the focus and contribution of the paper on program synthesis?
2. What are the strengths of the proposed approach, particularly in its novelty and significance?
3. What are the weaknesses of the paper regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, originality, and reproducibility of the paper's content?
5. Are there any concerns or questions regarding the interpretation and effectiveness of the proposed method?
Review
Summary: This paper proposes a two-level hierarchical program synthesizer, Latent Programmer, which first predicts a sequence of latent codes from given input-output examples, and then decodes the latent codes into a program. The sequence of latent codes can be viewed as a high-level synthesis plan that guides the subsequent low-level synthesis. Latent Programmer significantly outperforms RobustFill on string manipulation tasks and achieves state-of-the-art results on Python code generation tasks.

Quality: The paper presents a novel program synthesis idea, and the evaluation is promising and convincing.

Clarity: The writing provides enough background and explains the main idea in a very clear manner.

Originality: The application of a Vector Quantized Variational Autoencoder to two-level hierarchical synthesis is quite novel.

Significance: This work shows a promising hierarchical learning approach for program synthesis. Its effectiveness motivates many future explorations in this direction.

Questions:
Q1: Why do the Python code generation tasks use BLEU as the metric, rather than functional correctness?
Q2: Latent codes are motivated as a "high-level plan". Do you observe any interpretability in the latent codes?
Q3: Since the lengths of synthesized programs differ across tasks, it might be good to have a task-specific length of latent codes. The authors do show that varying the length of latent codes can affect performance. Is the length of the latent codes (always) proportional to the length of the synthesized programs?
ICLR
Title Latent Programmer: Discrete Latent Codes for Program Synthesis Abstract In many sequence learning tasks, such as program synthesis and document summarization, a key problem is searching over a large space of possible output sequences. We propose to learn representations of the outputs that are specifically meant for search: rich enough to specify the desired output but compact enough to make search more efficient. Discrete latent codes are appealing for this purpose, as they naturally allow sophisticated combinatorial search strategies. The latent codes are learned using a self-supervised learning principle, in which first a discrete autoencoder is trained on the output sequences, and then the resulting latent codes are used as intermediate targets for the end-to-end sequence prediction task. Based on these insights, we introduce the Latent Programmer, a program synthesis method that first predicts a discrete latent code from input/output examples, and then generates the program in the target language. We evaluate the Latent Programmer on two domains: synthesis of string transformation programs, and generation of programs from natural language descriptions. We demonstrate that the discrete latent representation significantly improves synthesis accuracy. 1 INTRODUCTION Our focus in this paper is program synthesis, one of the longstanding grand challenges of artificial intelligence research (Manna & Waldinger, 1971; Summers, 1977). The objective of program synthesis is to automatically write a program given a specification of its intended behavior, such as a natural language description or a small set of input-output examples. Search is an especially difficult challenge within program synthesis (Alur et al., 2013; Gulwani et al., 2017), and many different methods have been explored, including top-down search (Lee et al., 2018), bottom up search (Udupa et al., 2013), beam search (Devlin et al., 2017), and many others (see Section 2). We take a different philosophy: Can we learn a representation of programs specifically to help search? A natural way of representing a program is as a sequence of source code tokens, but the synthesis task requires searching over this representation, which can be difficult for longer, more complex programs. A programmer often starts by specifying high-level components of a program as a plan, then fills in the details of each component i.e. in string editing, a plan could be to extract the first name, then the last initial. We propose to use a sequence of latent variable tokens, called discrete latent codes, to represent such plans. Instead of having a fixed dictionary of codes, we let a model discover and learn what latent codes are useful and how to infer them from specification. Our hypothesis is that a discrete latent code – a sequence of discrete latent variables – can be a useful representation for search (van den Oord et al., 2017; Roy et al., 2018; Kaiser et al., 2018). This is because we can employ standard methods from discrete search, such as beam search, over a compact space of high-level plans and then over programs conditioned on the plan, in a two-level procedure. We posit that the high-level search can help to organize the search over programs. In the string editing example earlier, a model could be confident that it needs to extract the last initial, but is less sure about whether it needs to extract a first name. 
By changing one token in the latent code, two-level search can explore alternative programs that do different things in the beginning. In traditional single-level search, by contrast, the model would need to change multi-token prefixes of the alternatives, which is difficult to achieve with a limited search budget. We propose the Latent Programmer, a program synthesis method that uses learned discrete representations to guide search via a two-level synthesis. The Latent Programmer is trained by a self-supervised learning principle. First a discrete autoencoder is trained on a set of programs to learn discrete latent codes, and then an encoder is trained to map the specification of the synthesis task to these latent codes. Finally, at inference time, the Latent Programmer uses a two-level search. Given the specification, the model first produces an L-best list of latent codes from the latent predictor, and uses them to synthesize potential programs. On two different program synthesis domains, we find empirically that the Latent Programmer improves synthesis accuracy by over 10% compared to standard sequence-to-sequence baselines such as RobustFill (Devlin et al., 2017). We also find that our method improves the diversity of predictions, as well as accuracy on long programs. 2 BACKGROUND Problem Setup The goal in program synthesis is to find a program in a given language that is consistent with a specification. Formally, we are given a domain specific language (DSL) which defines a space $\mathcal{Y}$ of programs. The task is described by a specification $X \in \mathcal{X}$ and is solved by some, possibly multiple, unknown program(s) $Y \in \mathcal{Y}$. For example, each specification can be a set of input/output (I/O) examples denoted $X = \{(I_1, O_1), \ldots, (I_N, O_N)\}$. Then, we say that we have solved specification $X$ if we have found a program $Y$ which correctly solves all the examples: $Y(I_i) = O_i$ for all $i = 1, \ldots, N$. As another example, each specification can be a natural language description of a task, and the corresponding program implements said task. An example string transformation synthesis task with four I/O examples, together with a potential correct program in the string transformation DSL, is shown in Figure 1. Vector Quantization Traditionally, neural program synthesis techniques process the input specification as a set of sequences and predict the output program token-by-token (Devlin et al., 2017). In this work, we present a new approach for synthesis that performs structured planning in latent space using a discrete code. We conjecture that programs have an underlying discrete structure; specifically, programs are compositional and modular, with components that get reused across different problems. Our approach leverages this structure to guide the search over large program spaces. Following work in computer vision (van den Oord et al., 2017; Roy et al., 2018), we discover such discrete structure by using a Vector Quantized Variational Autoencoder (VQ-VAE). VQ-VAEs work by feeding the intermediate representation of an autoencoder through a discretization bottleneck (van den Oord et al., 2017). For completeness, we provide background on VQ-VAEs below. In a VQ-VAE, latent codes are drawn from a discrete set of learned vectors $c \in \mathbb{R}^{K \times D}$, the codebook. Each element in the codebook can be viewed as either a token with id $k \in [K]$ or as an embedding $c_k \in \mathbb{R}^D$. To generate the discrete codes, the continuous autoencoder output $e$ is quantized via nearest-neighbor lookup into the codebook.
Formally, the token id $q_k(e)$ and the quantized embedding $q_c(e)$ are defined as
$$q_c(e) = c_{q_k(e)}, \quad \text{where } q_k(e) = \arg\min_{k \in [K]} \|e - c_k\|_2. \quad (1)$$
For input $x$, the training loss for a VQ-VAE consists of: a reconstruction loss for the encoder-decoder weights, a codebook loss that encourages codebook embeddings to be close to the continuous vectors which are quantized to them, and a commitment loss that encourages the encoded input $ec(x)$ to "commit" to codes, i.e., not to switch which discrete code it is quantized to. The loss is given by
$$L(c, \theta, \phi) = \log p_\theta\left(x \mid q_c(ec_\phi(x))\right) + \|\mathrm{sg}(ec_\phi(x)) - c\|_2^2 + \beta\,\|\mathrm{sg}(c) - ec_\phi(x)\|_2^2, \quad (2)$$
where $\theta, \phi$ are the parameters of the decoder and encoder, respectively, $\mathrm{sg}(\cdot)$ is the stop-gradient operator that fixes the operand from being updated by gradients, and $\beta$ controls the strength of the commitment loss. To stabilize training, van den Oord et al. (2017) also proposed removing the codebook loss and setting the codebook to an exponential moving average (EMA) of the encoded inputs.
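To make the discretization bottleneck concrete, the following is a minimal PyTorch sketch of the nearest-neighbor quantization of Eq. (1) together with the codebook and commitment terms of Eq. (2). It is our illustration under assumed names and shapes (vector_quantize, codebook), not the authors' implementation, and it omits the EMA codebook update mentioned above.

```python
import torch
import torch.nn.functional as F

def vector_quantize(e, codebook, beta=0.25):
    """e: (batch, seq, D) continuous encoder outputs; codebook: (K, D)."""
    # Nearest-neighbor lookup, Eq. (1): q_k(e) = argmin_k ||e - c_k||_2.
    dists = torch.cdist(e.reshape(-1, e.shape[-1]), codebook)   # (B*S, K)
    ids = dists.argmin(dim=-1)                                  # token ids
    quantized = codebook[ids].view_as(e)                        # q_c(e)
    # Codebook and commitment terms of Eq. (2); sg(.) is .detach() here.
    codebook_loss = F.mse_loss(quantized, e.detach())
    commit_loss = beta * F.mse_loss(e, quantized.detach())
    # Straight-through estimator: gradients skip the non-differentiable
    # argmin by being copied from the quantized vectors to the encoder.
    quantized = e + (quantized - e).detach()
    return quantized, ids.view(e.shape[:-1]), codebook_loss + commit_loss
```

The straight-through trick is what lets the reconstruction loss train the encoder despite the argmin; with the EMA variant, codebook_loss would be dropped in favor of a moving-average codebook update.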
3 SYNTHESIS WITH DISCRETE LATENT VARIABLES We propose a two-level hierarchical approach to program synthesis that first performs high-level planning over an intermediate sequence, which is then used for fine-grained generation of the program. In our approach, a top-level module first infers a latent code, which gets used by a low-level module to generate the final program. 3.1 HIERARCHY OF TWO TRANSFORMERS Our proposed Latent Programmer (LP) architecture consists of two Transformers in a two-level structure. The architecture comprises two modules: a latent predictor, which produces a latent code that can be interpreted as a coarse sketch of the program, and a latent program decoder, which generates a program conditioned on the code. The latent code consists of discrete latent variables as tokens, which we arbitrarily denote TOK_1, ..., TOK_K, whose meanings are assigned during training. Both components use a Transformer architecture due to its impressive performance on natural language tasks (Vaswani et al., 2017). To help the model assign useful meanings to the latents, we also leverage a program encoder, which is only used during training. The program encoder $ec(Y)$ encodes the true program $Y = [y_1, y_2, \ldots, y_T]$ into a shorter sequence of discrete latent variables $Z = [z_1, z_2, \ldots, z_S]$, represented as codebook entries; that is, each $z_i \in \mathbb{R}^D$ is one of $K$ entries in a codebook $c$. The latent sequence serves as the ground-truth high-level plan for the task. The function $ec(Y)$ is a Transformer encoder, followed by a stack of convolutions of stride 2, each halving the size of the sequence. We apply the convolution $\ell$ times, which reduces a $T$-length program to a latent sequence of length $\lceil T/2^\ell \rceil$. This provides temporal abstraction, since the high-level planning actions are made only every $2^\ell$ steps. In summary, the program encoder is given by
$$ec(Y) \leftarrow h_\ell; \quad h_m \leftarrow \mathrm{Conv}(h_{m-1}) \text{ for } m \in 1 \ldots \ell; \quad h_0 \leftarrow \mathrm{TransformerEncoder}(Y). \quad (3)$$
Here $\mathrm{TransformerEncoder}(\cdot)$ applies a stack of self-attention and feed-forward units on input embeddings via a residual path, described in detail by Vaswani et al. (2017). This will be used, along with the latent program decoder, as an autoencoder during training (see Section 3.2). The latent predictor $lp(X)$ autoregressively predicts a coarse latent code $lp(X) \in \mathbb{R}^{S \times K}$, conditioned on the program specification $X$. The latent predictor outputs a sequence of probabilities, which can be decoded using search algorithms such as beam search to generate a predicted latent code $Z'$. This is different from the program encoder, which outputs a single sequence $Z$, because we use the latent predictor to organize search over latent codes; at test time, we will obtain an L-best list of latent token sequences from $lp(X)$. The latent predictor is given by a stack of Transformer blocks with the specification $X$ as inputs. Similarly, the latent program decoder $d(Z, X)$ defines an autoregressive distribution over program tokens given the specification $X$ and the coarse plan $Z \in \mathbb{R}^{S \times K}$, represented as codebook entries. The decoder is a Transformer that jointly attends to the latent sequence and the program specification. This is performed via two separate attention modules, whose outputs are concatenated into the hidden unit. Formally, given a partially generated program $Y' = [y'_1, y'_2, \ldots, y'_{t-1}]$ and the encoded specification $E = \mathrm{TransformerEncoder}(X)$, the latent program decoder performs
$$h_t = \mathrm{Concat}\left(\mathrm{TransformerDecoder}(Y', E)_{t-1}, \mathrm{TransformerDecoder}(Y', Z)_{t-1}\right), \quad (4)$$
where $\mathrm{TransformerDecoder}(x, y)$ denotes a Transformer decoder applied to outputs $y$ while attending to inputs encoding $x$, and the subscript indexes an entry in the resulting output sequence. The distribution over output token $k$ is given by $d_t(Z, X) = \mathrm{Softmax}(W(h_t))$, where $W$ is a learned parameter matrix. Finally, the latent program decoder defines a distribution over programs autoregressively as $p(Y \mid Z, X) = \prod_t p(y_t \mid y_{<t}, Z, X)$, where $p(y_t \mid y_{<t}, Z, X) = d_t(Z, X)$. When $X$ consists of multiple I/O examples, each example is encoded as $E_i = \mathrm{TransformerDecoder}(I_i, O_i)$. Then, a separate hidden state per I/O example is computed following equation 4, followed by a late max-pool to get the final hidden state. Note that the program encoder and latent program decoder make up a VQ-VAE model of programs, with additional conditioning on the specification. The complete LP architecture is summarized in Figure 2, and an end-to-end example run of our architecture is shown in Figure 4.
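The dual-attention state of Eq. (4) can be sketched in PyTorch as follows. This is a hedged rendering under our own names (DualAttentionDecoder) and simplifications: inputs are assumed to be already embedded, and the causal mask and positional encodings are omitted for brevity.

```python
import torch
import torch.nn as nn

class DualAttentionDecoder(nn.Module):
    """Sketch of Eq. (4): attend separately to spec encoding E and plan Z."""
    def __init__(self, d_model=512, nhead=4, layers=3, vocab=1000):
        super().__init__()
        block = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        # nn.TransformerDecoder deep-copies the layer, so the two decoders
        # below do not share weights despite being built from one block.
        self.spec_dec = nn.TransformerDecoder(block, layers)    # attends to E
        self.latent_dec = nn.TransformerDecoder(block, layers)  # attends to Z
        self.W = nn.Linear(2 * d_model, vocab)                  # W in the paper

    def forward(self, y_prefix, spec_enc, latent_plan):
        # Both decoders read the same embedded prefix Y' but attend to
        # different memories; their states are then concatenated.
        h_spec = self.spec_dec(y_prefix, spec_enc)
        h_lat = self.latent_dec(y_prefix, latent_plan)
        h = torch.cat([h_spec, h_lat], dim=-1)
        return torch.log_softmax(self.W(h), dim=-1)  # next-token distribution
```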
3.2 TRAINING Our LP performs program synthesis using a two-level search, first over latent sequences and then over programs. Given a program specification, we want to train our latent predictor to produce an informative latent sequence from which our latent program decoder can accurately predict the true program. Our training loss for the LP model consists of three supervised objectives. The autoencoder loss ensures that the latent codes contain information about the program. It is a summation of the reconstruction loss between the autoencoder output $d(q_c(Y), X)$ and the true program $Y$, and a commitment loss that trains the encoder output $ec(Y)$ to be close to the codebook $c$. As in Roy et al. (2018), the codebook is not trained but set to the EMA of encoder outputs. This loss is similar to the loss function of a VQ-VAE as in equation 2, but also depends on the specification $X$. This objective trains the latent tokens in the codebook so that they correspond to informative high-level actions, and makes sure our latent program decoder can accurately recover the true program given the specification and a plan comprising such actions. The latent prediction loss ensures that latent codes can be predicted from specifications. It is a reconstruction loss between the distribution over latents predicted from the specification, $lp(X)$, and the autoencoded latents $q_k(ec(Y))$ from the ground-truth program. This is a self-supervised approach that treats the autoencoded latent sequence as the ground-truth high-level plan, and trains the latent predictor to generate the plan using just the program specification $X$. Note that the program encoder is only used in training; since $ec(Y)$ is unknown at test time, the LP model uses $lp(X)$ instead. Finally, the end-to-end loss ensures that programs can be predicted from specifications. This is especially important because in the reconstruction loss, the latent program decoder receives as input latent codes from the autoencoded latent sequences $ec(Y)$, whereas at test time, the decoder receives a latent code from the latent predictor $lp(X)$. This can result in mistakes in the generated program, since the decoder has never been exposed to noisy results from the latent predictor. The end-to-end loss alleviates this issue. The end-to-end loss is the probability of the correct program $Y$ when predicted from a soft-quantized latent code, given by $lp(X)^T c$. This has the added benefit of allowing gradients to flow through the latent predictor, training it in an end-to-end way. In summary, the full loss for a training instance is
$$L(c, \theta, \phi, \psi) = \underbrace{\log p_\theta\left(Y \mid q_c(ec_\phi(Y)), X\right) + \beta\,\|\mathrm{sg}(c) - ec_\phi(Y)\|_2^2}_{\text{autoencoder}} + \underbrace{\log p\left(q_k(ec_\phi(Y)) \mid lp_\psi(X)\right)}_{\text{latent prediction}} + \underbrace{\log p_\theta\left(Y \mid lp_\psi(X)^T c, X\right)}_{\text{end-to-end}} \quad (5)$$
where we explicitly list $\theta$, $\phi$, and $\psi$, the parameters of the latent program decoder, program encoder, and latent predictor respectively. Furthermore, for the first 10K steps of training, we give embeddings of the ground-truth program $Y$, averaged over every $2^\ell$ tokens, as the latent sequence instead of $ec(Y)$. This pre-training ensures that initially the latent code carries some information about the program, so that the attention to the code has reasonable gradients that can then be propagated to the program encoder after pre-training. Doing this was empirically shown to prevent the bypassing phenomenon, where the latent code is ignored during decoding (Bahuleyan et al., 2017). 3.3 INFERENCE During inference, we use a multi-level variant of beam search to decode the output probabilities of our LP model. Standard beam search with beam size $B$ will generate the top-$B$ most likely programs according to the model, and find the first one (if any) that is consistent with the specification (Parisotto et al., 2017; Devlin et al., 2017). In our case, we first perform beam search for $L$ latent beams, then for $\lfloor B/L \rfloor$ programs per latent sequence. Note that during inference, the latent predictor will continue to generate latent tokens until an end-of-sequence token is produced. This means that the generated latent sequence does not necessarily have length $\lceil T/2^\ell \rceil$ as during training; however, we found the latent sequence lengths during training and evaluation to be close in practice. Setting $L = B$ allows for the maximum exploration of the latent space, while setting $L = 1$ reduces our method to standard beam search, i.e., exploitation of the most likely latent decoding. We choose $L = \sqrt{B}$ in our experiments, but explore the effect of various choices of $L$ in Section 5.2.
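The two-level procedure can be outlined as below. This is an illustrative Python sketch, not the authors' code: the step functions wrapping the latent predictor and the latent program decoder are assumed interfaces supplied by the caller, and candidate programs would still be checked against the I/O examples downstream.

```python
import math

def beam_search(step_fn, context, beam, max_len=50, eos=0):
    """Generic beam search. step_fn(context, prefix) returns a list of
    (token, log_prob) continuations for the given prefix."""
    beams = [([], 0.0)]
    for _ in range(max_len):
        expanded = []
        for prefix, score in beams:
            if prefix and prefix[-1] == eos:        # finished hypothesis
                expanded.append((prefix, score))
                continue
            for tok, lp in step_fn(context, prefix):
                expanded.append((prefix + [tok], score + lp))
        beams = sorted(expanded, key=lambda b: -b[1])[:beam]
    return beams

def two_level_search(spec, latent_step, program_step, B=10):
    """Section 3.3: search L latent plans, then B // L programs per plan."""
    L = max(1, int(math.sqrt(B)))                   # L = sqrt(B) heuristic
    results = []
    for z, z_logp in beam_search(latent_step, spec, beam=L):
        for y, y_logp in beam_search(program_step, (spec, z), beam=B // L):
            results.append((y, z_logp + y_logp))
    return sorted(results, key=lambda r: -r[1])     # rank by joint log-prob
```

A mistake in one high-level plan then costs only B/L program candidates, which is why two-level search explores more diverse prefixes than single-level beam search under the same budget.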
4 RELATED WORK Program Synthesis Our work deals with program synthesis, which involves combinatorial search for programs that match a specification. Many different search methods have been explored within program synthesis, including search within a version-space algebra (Gulwani, 2011), bottom-up enumerative search (Udupa et al., 2013), stochastic search (Schkufza et al., 2013), genetic programming (Koza, 1994), and reducing the synthesis problem to logical satisfiability (Solar-Lezama et al., 2006). Neural program synthesis involves learning neural networks to predict function distributions that guide a synthesizer (Balog et al., 2017), or to predict the program autoregressively in an end-to-end fashion (Parisotto et al., 2017; Devlin et al., 2017). SketchAdapt (Nye et al., 2019) combined these approaches by first generating a program sketch with holes, and then filling the holes using a conventional synthesizer. Related to our work, DreamCoder (Ellis et al., 2020) iteratively builds sketches using progressively more complicated primitives through a wake-sleep algorithm. Our work is closely related in spirit but differs fundamentally in two ways: (1) our sketches are composed from a general latent vocabulary that is learned in a simple, self-supervised fashion, and (2) our method avoids enumerative search, which is prohibitively expensive for large program spaces. There is also a line of work that deals with learning to process partial programs in addition to the specification. In execution-guided program synthesis, the model guides iterative extensions of partial programs until a matching one is found (Zohar & Wolf, 2018; Chen et al., 2019; Ellis et al., 2019). Balog et al. (2020) recently proposed a differentiable fixer that is trained to iteratively edit incorrect programs. We treat these works as complementary; they can be combined with ours to refine predictions. Discrete Latent Bottlenecks Variational autoencoders (VAEs) were first introduced using continuous latent representations (Kingma & Welling, 2014; Rezende et al., 2014). Several promising approaches were proposed to use discrete bottlenecks instead, such as continuous relaxations of categorical distributions, e.g., the Gumbel-Softmax reparametrization trick (Jang et al., 2017; Maddison et al., 2017). Recently, VQ-VAEs using nearest-neighbor search on a learned codebook (see Section 2 for more details) achieved impressive results, almost matching continuous VAEs (van den Oord et al., 2017; Roy et al., 2018). Discrete bottlenecks have also been used for sentence compression (Miao & Blunsom, 2016) and text generation (Puduppully et al., 2019), but these works do not learn the semantics of the latent codes as ours does. Within the domain of synthesis of chemical molecules, Gómez-Bombarelli et al. (2018) applied Bayesian optimization within a continuous latent space to guide this structured prediction problem. Learning to search has also been considered in the structured prediction literature (Daumé et al., 2009; Chang et al., 2015; Ross et al., 2011), but to our knowledge, these works do not consider the problem of learning a discrete representation for search. Notably, VQ-VAE methods have been successfully used to encode natural language into discrete codes for faster decoding in machine translation (Kaiser et al., 2018). Our work similarly uses a VQ-VAE to learn a discrete code, but we use the learned code in a two-level search that improves accuracy. To do so, we propose a model that is autoregressive in both the latent and the program space, and perform two-level beam search over latent codes and programs.
The key novelty behind our work is that first searching over a learned discrete latent space can assist search over the complex program space; using a VQ-VAE as Kaiser et al. (2018) did enables us to do so. 5 EXPERIMENTS We now present the results of evaluating our Latent Programmer model in two test domains: synthesis of string transformation programs from examples, and generation of code from natural language descriptions. We compare our LP model against several strong baselines. RobustFill [LSTM] is a seq-to-seq LSTM with attention on the input specification, trained to autoregressively predict the true program. The architecture is comparable to the RobustFill model designed originally for the string transformation tasks in our first domain (Devlin et al., 2017), but easily generalizes to all program synthesis domains. We detail the architecture in Appendix A. RobustFill [Transformer] alternatively uses a Transformer architecture, equivalent in architecture to the latent predictor in our LP model, also trained to autoregressively predict the program. Transformers were found to perform much better than LSTMs on language tasks because they process the entire input as a whole and have no risk of forgetting past dependencies (Vaswani et al., 2017). This baseline can also be considered an ablation of our LP model without any latent codes. The central novelty of our work is in realizing that, by learning a discrete representation, we can perform structured search on two levels. We introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE. In both cases the latent space is continuous, and well-known combinatorial search algorithms such as beam search cannot search over the space. Latent RobustFill [AE] replaces the VQ-VAE component of our LP model with a generic autoencoder. This makes the latent code a sequence of continuous embeddings. The latent prediction loss in equation 5 is simply replaced by a squared error between the output of the autoencoder and the latent predictor. Performing beam search over the continuous latent space is intractable, so during inference we generate only one latent sequence per task; this is equivalent to the two-level beam search described earlier with $L = 1$. In addition, because we cannot define an end-of-sequence token in the latent space, this baseline must be given knowledge of the true program length even during inference, and always generates a latent sequence of length $\lceil T/2^\ell \rceil$. Latent RobustFill [VAE] substitutes the VQ-VAE component with a VAE (Kingma & Welling, 2014). This again produces a continuous latent space, but one regularized to be distributed approximately as a standard Gaussian. Performing beam search is still intractable, but we can sample $L$ latent sequences from the Gaussians determined by the VAE, and perform beam search on the programs afterwards. Again, we assume that the true program length is known during inference. 5.1 STRING TRANSFORMATION The first test domain is a string transformation DSL frequently studied in the program synthesis literature (Parisotto et al., 2017; Devlin et al., 2017; Balog et al., 2020). Tasks in this domain involve finding a program which maps a set of input strings to a corresponding set of outputs. Programs in the DSL are a concatenation of expressions that perform regex-based string transformations (see Appendix A for the full DSL).
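To give a flavor of such programs, here is a toy, hedged Python rendering of a few DSL constructs that appear in the paper's figures (Concat, Const, GetToken_ALL_CAPS); it is our illustration, not the paper's DSL implementation, whose full grammar is given in Appendix A.

```python
import re

def Const(s):
    # Constant-string expression: ignores the input.
    return lambda inp: s

def GetToken_ALL_CAPS(i):
    # i-th run of capital letters (1-indexed; negative i counts from the end).
    def f(inp):
        toks = re.findall(r"[A-Z]+", inp)
        return toks[i - 1] if i > 0 else toks[i]
    return f

def Concat(*exprs):
    # Top-level operator: concatenate the results of all sub-expressions.
    return lambda inp: "".join(e(inp) for e in exprs)

prog = Concat(GetToken_ALL_CAPS(1), Const("."), GetToken_ALL_CAPS(-1), Const("."))
print(prog("JOHN F KENNEDY"))  # -> "JOHN.KENNEDY."
```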
We perform experiments on a synthetic dataset generated by sampling programs from the DSL, then the corresponding I/O examples, using a heuristic similar to the one used in NSPS (Parisotto et al., 2017) and RobustFill (Devlin et al., 2017) to ensure nonempty output for each input. We consider programs comprising a concatenation of up to 10 expressions and limit the lengths of strings in the I/O to be at most 100 characters. All models have an embedding size of 128 and a hidden size of 512, and the attention layers consist of 3 stacked layers with 4 heads each. For the LP model, we used a latent compression factor $\ell = 2$ and vocabulary size $K = 40$. The models are trained on roughly 25M tasks, and evaluated on 1K held-out ones. In Table 1, we report the accuracy, i.e., the number of times a program conforming to the I/O examples was found, of our method against the baselines. Across all beam sizes, our LP model performed 5-7 percentage points better (over 10% of baseline accuracy) than the next best model. From our ablative study, we see that having two-level search using discrete latent codes was important, as the baselines over continuous latent spaces performed comparably to the traditional RobustFill model. Recently, SketchAdapt also proposed a two-level search (Nye et al., 2019), but at the top level it performs beam search over the program space augmented with a HOLE token. In contrast, our method searches over a learned, general latent space. During low-level search, SketchAdapt enumerates partial programs to fill the HOLE tokens using a learned synthesizer similar to DeepCoder (Balog et al., 2017), whereas we again perform beam search. To compare the two, we evaluate our LP model on samples generated according to Nye et al. (2019), which slightly modifies the DSL to increase the performance of synthesizers, and report results in Table 2. Since enumeration can be done more quickly than beam search, we let SketchAdapt synthesize 3,000 programs using $B$ top-level beams, whereas our LP model can only generate $B$ programs. Our LP model is able to outperform SketchAdapt even in the modified DSL. 5.2 ANALYSIS We conduct extensive analysis to better understand our LP model in terms of learning, the ability to generate long programs, and diversity in the beams. All results are reported with beam size $B = 10$. Model Size Our LP model uses an additional latent code for decoding, which introduces more parameters into the model than in the baseline RobustFill model. To make a fair comparison, we vary the embedding and hidden dimensions of all of our evaluated methods, and compare the effect of the number of trainable parameters on the accuracy. Figure 3(a) shows that all methods respond well to an increase in model size. Nevertheless, we see that even when normalized for size, our LP model outperforms baselines by a significant margin. Program Length Prior work has shown that program length is a reasonable proxy measure of problem difficulty. We hypothesize that using latent codes is most beneficial when generating long programs. Figure 3(b) shows how ground-truth program length affects the accuracy of our LP model compared to RobustFill, which lacks latent codes. As expected, accuracy decreases with problem complexity. Perhaps surprisingly, though, we see a large improvement in our LP model's ability to handle more complex problems. In Figure 4, we also show an illustrative example in the domain where our LP model found a valid program whereas the RobustFill model did not.
In this example, the ground-truth program was long but had a repetitive underlying structure. Our LP model correctly detected this structure, as evidenced by the predicted latent sequence. [Figure 4: the LP-predicted program GetAll_NUMBER | Const(:) | GetToken_ALL_CAPS_1 | Const(.) | GetToken_ALL_CAPS_2 | Const(.) | GetToken_ALL_CAPS_-1 | Const(.)] We show additional examples in Figure 9 of Appendix B. It is important to note that our method allows tokens in the discrete latent code to have arbitrary meaning, yielding rich and expressive latent representations. However, the trade-off is that, because the latent codes were not grounded, it is difficult to objectively interpret them. Grounding the latent space to induce interpretability is an avenue for future work. Latent Beam Size In multi-level beam search with beam size $B$, first $L$ latent beams are decoded, then $\lfloor B/L \rfloor$ programs per latent sequence. The latent beam size $L$ controls how much search is performed over the latent space. We theorize that a higher $L$ will produce more diverse beams; however, too high an $L$ can be harmful, missing programs with high joint log-probability. We show the effect of latent beam size on both the beam-10 accuracy and a proxy measure for diversity. Following prior work, we measure diversity by counting the number of distinct n-grams in the beams, normalized by the total number of tokens to bias against long programs (Vijayakumar et al., 2018). We report the results varying $L$ for $B = 10$ in Figure 5(a). As expected, increasing the latent beam size $L$ improves the diversity of output programs, but an excessively large $L$ harms the final accuracy. An important observation is that the $L = 1$ case, where one latent code is used to decode all programs, performs similarly to baseline RobustFill. In this extreme, no search is performed over the latent space, and our proposed two-level search reduces to only searching over programs; this is further evidence that explicitly having two-level search is critical to the LP model's improved performance. Latent Length and Vocabulary Size Since the discretization bottleneck is a critical component in generating latent codes in our LP model, we also investigate its performance under different settings of hyperparameters. Two important variables for the VQ are the latent length compression factor $\ell$ and the size of the latent vocabulary $K$. If $\ell$ is too small, the latent space becomes too large to search; on the other hand, too large an $\ell$ can mean individual latent tokens cannot encode the information needed to reconstruct the program. Similarly, we expect that too small a vocabulary $K$ can limit the expressiveness of the latent space, but too large a $K$ can make predicting the correct latent code too difficult. We confirm this in our evaluations in Figure 5(b) and Figure 5(c).
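The diversity proxy used in Section 5.2 above can be made concrete in a few lines of Python; the function name and the exact normalization are our assumptions based on the description of Vijayakumar et al. (2018).

```python
def distinct_ngrams(beams, n=2):
    """Distinct n-grams across all beams, normalized by total token count."""
    grams, total = set(), 0
    for tokens in beams:                      # each beam is a token list
        total += len(tokens)
        grams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(grams) / max(total, 1)

# Identical beams score lower than distinct ones:
print(distinct_ngrams([["a", "b", "c"], ["a", "b", "c"]]))  # ~0.33
print(distinct_ngrams([["a", "b", "c"], ["d", "e", "f"]]))  # ~0.67
```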
5.3 PYTHON CODE GENERATION Our next test domain is a Python code generation (CG) task, which involves generating code for a function that implements a natural-language specification. The dataset used consists of 111K Python examples, each consisting of a docstring and corresponding code snippet, collected from GitHub (Wan et al., 2018). An example docstring and program from the dataset is shown in Figure 6. We used a language-independent tokenizer trained jointly on the data (Kudo & Richardson, 2018), and processed the dataset into a vocabulary of 35K sub-word tokens. Furthermore, following Wei et al. (2019), we set the maximum length of the programs to be 150 tokens, resulting in 85K examples. Across all models, we set the embedding size to 256 and the hidden size to 512, and the attention layers consist of 6 stacked layers with 16 heads each, similar to neural machine translation (Vaswani et al., 2017). For the LP model, we used a latent compression factor $\ell = 2$ and vocabulary size $K = 400$ after a hyperparameter search. The models are evaluated on 1K held-out examples. We initially found that it was difficult for the program encoder to detect latent sequence structure in the ground-truth programs as-is, due to the noise in variable names. To remedy this, we used an abstract syntax tree (AST) parser on the ground-truth programs to replace the i-th function argument and variable appearing in the program with the tokens ARG_i and VAR_i, respectively. This was only used in training the program encoder and did not impact evaluation.

Table 3: BLEU score on the code generation task.
Method                   | B = 1 | B = 10 | B = 100
Base (Wei et al., 2019)  | 10.4  | -      | -
Dual (Wei et al., 2019)  | 12.1  | -      | -
RobustFill [LSTM]        | 11.4  | 14.8   | 16.0
RobustFill [Transformer] | 12.1  | 15.5   | 17.2
Latent Programmer        | 14.0  | 18.6   | 21.3

We evaluate performance by computing the best BLEU score among the output beams (Papineni et al., 2002). We computed BLEU as the geometric mean of n-gram matching precision scores up to $n = 4$. Table 3 shows that our LP model outperforms the baselines. From the results, it can be seen that this is a difficult task, which may be due to the ambiguity in specifying code from a short docstring description. As evidence, we additionally include results from a recent work that proposed seq-to-seq CG models on the same data, which performed similarly to our baselines (Wei et al., 2019). These results show that the improvements due to the LP model persist even in difficult CG domains. For example docstrings and code generated by the LP model, refer to Figure 9 in Appendix B. 6 CONCLUSION In this work we proposed the Latent Programmer (LP), a novel neural program synthesis technique that leverages structured latent sequences to guide search. The LP model consists of a latent predictor, which maps the input specification to a sequence of discrete latent variables, and a latent program decoder, which generates a program token-by-token while attending to the latent sequence. The latent predictor was trained via a self-supervised method in which a discrete autoencoder of programs was learned using a discrete bottleneck, specifically a VQ-VAE (van den Oord et al., 2017), and the latent predictor tries to predict the autoencoded sequence as if it were the ground truth. During inference, the LP model first searches in latent space for discrete codes, then conditions on those codes to search over programs. Empirically, we showed that the Latent Programmer outperforms state-of-the-art baselines such as RobustFill (Devlin et al., 2017), which ignore latent structure. Exciting future avenues of investigation include achieving better performance by grounding the latent vocabulary and generalizing our method to other tasks in natural language and structured prediction. A EXTENDED DESCRIPTION OF DSL AND ROBUSTFILL MODEL The DSL for string transformations we use is the same as in RobustFill (Devlin et al., 2017), and is shown in Figure 7. The top-level operator for programs in the DSL is a Concat operator that concatenates a random number (up to 10) of expressions $e_i$. Each expression $e$ can be a substring expression $f$, a nesting expression $n$, or a constant string $c$.
A substring expression can either return the substring between left and right indices $k_1$ and $k_2$, or between the $i_1$-th occurrence of regex $r_1$ and the $i_2$-th occurrence of regex $r_2$. The nesting expressions also return substrings of the input, such as extracting the i-th occurrence of a regex, but can also be composed with existing substring or nesting expressions for more complex string transformations. RobustFill Model RobustFill (Devlin et al., 2017) is a seq-to-seq neural network that uses an encoder-decoder architecture, where the encoder computes a representation of the input, $e(X)$, and the decoder autoregressively generates the output given the source representation; i.e., the conditional likelihood of $Y = [y_1, \ldots, y_T]$ decomposes as $p(Y \mid X) = \prod_{t=1}^{T} p(y_t \mid y_{<t}, X)$. In RobustFill, the probability of decoding each token $y_t$ is given by $p(y_t \mid y_{<t}, X) = \mathrm{Softmax}(W(h_t))$, with $W$ being the projection onto logits, or unnormalized log probabilities. The hidden representation $h_t$ is an LSTM hidden unit given by
$$E_t = \mathrm{Attention}(h_{t-1}, e(X)), \quad h_t = \mathrm{LSTM}(h_{t-1}, E_t).$$
Here $e(X)$ is the sequence of hidden states after processing the specification with an LSTM encoder, and $\mathrm{Attention}(Q, V)$ denotes scaled dot-product attention with query $Q$ and key-value sequence $V$ (Bahdanau et al., 2016). In the case of $X$ being multiple I/O examples, the RobustFill model of Devlin et al. (2017) uses double attention:
$$s^I_{t,i} = \mathrm{Attention}(h_{t-1}, e(I_i)), \quad s^O_{t,i} = \mathrm{Attention}\left(\mathrm{Concat}(h_{t-1}, s^I_{t,i}), e(O_i)\right), \quad h_{t,i} = \mathrm{LSTM}\left(h_{t-1}, \mathrm{Concat}(s^I_{t,i}, s^O_{t,i})\right) \quad \forall\, 1 \le i \le N,$$
and hidden states are pooled across examples before being fed into the final softmax layer: $h_t = \mathrm{maxpool}_{1 \le i \le N} \tanh(V(h_{t,i}))$, where $V$ is another projection. B EXAMPLES OF GENERATED PROGRAMS AND LATENT CODES
1. What is the main contribution of the paper regarding program synthesis? 2. What are the strengths and weaknesses of the proposed VQ-VAE approach? 3. How does the reviewer assess the novelty and significance of the paper's contributions? 4. Are there any concerns regarding the nature of the VQ-VAE model and its relationship with previous works? 5. How does the reviewer evaluate the effectiveness of the experimental results and comparisons with baseline systems? 6. Are there any questions or suggestions for improving the paper's content, such as providing more quantitative analysis on the learned code or including more diverse experiments?
Review
Review This paper proposes a VQ-VAE approach for program synthesis (generating a program from specifications, either input-output pairs or a natural language description). Generally speaking, it learns an autoregressive discrete latent code with a VQ-VAE in addition to traditional Seq2Seq learning, and performs beam search on both the latent code and the output programs. Experimental results show that the model outperforms three baseline systems on two tasks. I have major concerns about the nature of the VQ-VAE model. First of all, this paper heavily relies on Kaiser et al. (ICML'2018), except that the decoder of this paper is autoregressive and that this paper proposes beam search on the latent sequential discrete codes. However, this paper cites Kaiser et al. (2018) only lightly. The authors should be more honest about previous work and make a direct comparison of the differences. What is taken from previous work? What is an adaptation? What is an extension? The current writing suggests that this paper has heavily developed the model, when in fact it is mostly taken from previous work. The authors claim that the VQ-VAE serves as a discrete bottleneck. However, I strongly disagree with this. The decoder in this paper is well aware of the input through "TransformerDecoder(Y', E)" in Eq 4, where E = TransformerEncoder(X). So, the decoder can just learn from the input X and disregard the VQ-VAE space, despite a few semantic losses imposed on the latent code (latent prediction and end-to-end in Eq 5). Since the VQ-VAE latent space is in addition to traditional Seq2Seq training, it cannot serve as a bottleneck/regularization. This is known as the "bypassing phenomenon" in previous work: Bahuleyan et al., Variational attention for sequence-to-sequence models, 2018. The authors may want to explain why their VQ-VAE would not suffer from the bypassing phenomenon. Note: the bypassing phenomenon is actually different in Kaiser et al. (2018). Their decoder is non-autoregressive, so their sequential discrete latent space can provide autoregressive information. But in this work, the decoder is autoregressive and can simply learn from X directly. What is the real benefit of modeling the discrete latent codes with a VQ-VAE? A natural way of obtaining a real discrete bottleneck is reinforcement learning. Comparison and discussion are lacking. Note: we all know RL systems are difficult to train, but the auxiliary losses in Eq 5 can all be applied to RL, too. You may also do pre-training or relaxations for RL. The latent codes are generated in an autoregressive fashion. During training, the number of latent codes is ceiling(T/(2^l)). But how do you know the number of codes at test time? Did you include an EOS token for such autoregressive generation? If yes, how easy is it to learn the precise semantics of EOS without a direct supervision signal? There is no quantitative analysis of the learned code. While there is an example, it is inadequate. We have no measure of how typical the shown example is. I also have major concerns about the experiments. The evaluation metrics are peculiar. For example, distinct n-grams are used to evaluate diversity. We understand diversity is important for natural language generation, but why do we need diversity for program generation? BLEU is computed as the best BLEU score among the output beams, but increasing the beam size may not improve top-beam performance. Baseline models are inadequate. For the example-to-program generation, the authors only compared with Seq2Seq with an LSTM or Transformer.
There have been other efforts on search-based program synthesis, for example, Balog et al. (2017, 2020). I'd like to see a comparison, and what is the further improvement when they are combined (as claimed by this paper)? In code generation from descriptions, the authors only compared with two models from Wei et al., 2019 and two variants of Seq2Seq. But there are more benchmark datasets, like Hearthstone, Spider, and other semantic parsing datasets from the old days. Minor: the exponential moving average is not proposed in Roy et al. (2018). It is proposed in Appendix A of the original VQ-VAE paper. In short, beam search over a discrete latent space appears to be an interesting idea, but I have concerns about the soundness of this paper. It is also noted that no code or outputs are available.
ICLR
Title Attention Based Joint Learning for Supervised Electrocardiogram Arrhythmia Differentiation with Unsupervised Abnormal Beat Segmentation Abstract Deep learning has shown great promise in arrhythmia classification in the electrocardiogram (ECG). Existing works, when classifying an ECG segment with multiple beats, do not identify the locations of the anomalies, which reduces clinical interpretability. On the other hand, segmenting abnormal beats by deep learning usually requires annotation for a large number of regular and irregular beats, which can be laborious, sometimes even challenging, with strong inter-observer variability between experts. In this work, we propose a method capable of not only differentiating arrhythmia but also segmenting the associated abnormal beats in the ECG segment. The only annotation used in training is the type of abnormal beats; no segmentation labels are needed. Imitating human perception of an ECG signal, the framework consists of a segmenter and a classifier. The segmenter outputs an attention map, which aims to highlight the abnormal sections in the ECG by element-wise modulation. Afterwards, the signals are sent to a classifier for arrhythmia differentiation. Though the training data is only labeled to supervise the classifier, the segmenter and the classifier are trained in an end-to-end manner, so that optimizing classification performance also adjusts how the abnormal beats are segmented. Validation of our method is conducted on two datasets. We observe that involving the unsupervised segmentation in fact boosts the classification performance. Meanwhile, a grade study performed by experts suggests that the segmenter also achieves satisfactory quality in identifying abnormal beats, which significantly enhances the interpretability of the classification results. 1 INTRODUCTION Arrhythmia in the electrocardiogram (ECG) is a reflection of heart conduction abnormality and occurs randomly among normal beats. Deep learning based methods have demonstrated strong power in classifying different types of arrhythmia. There are plenty of works on classifying a single beat, involving convolutional neural networks (CNN) (Acharya et al., 2017b; Zubair et al., 2016), long short-term memory (LSTM) (Yildirim, 2018), and generative adversarial networks (GAN) (Golany & Radinsky, 2019). For these methods to work in a clinical setting, however, a good segmenter is needed to accurately extract a single beat from an ECG segment, which may be hard when abnormal beats are present. Alternatively, other works (Acharya et al., 2017a; Hannun et al., 2019) try to directly identify the genres of arrhythmia present in an ECG segment. The limitation of these works is that they work as a black box and fail to provide cardiologists with any clue on how the prediction is made, such as the location of the associated abnormal beats. In terms of ECG segmentation, there are different tasks, such as segmenting ECG records into beats or into the P wave, QRS complex, and T wave. On one hand, some existing works take advantage of signal processing techniques to locate fiducial points of the PQRST complex so that the ECG signals can be divided. For example, the Pan-Tompkins algorithm (Pan & Tompkins, 1985) uses a combination of filters, squaring, and moving window integration to detect the QRS complex. The shortcoming of these methods is that handcrafted selection of filter parameters and thresholds is needed. More importantly, they are unable to distinguish abnormal heartbeats from normal ones.
To address these issues, Moskalenko et al. (2019); Oh et al. (2019) deploy CNNs for automatic beat segmentation. However, the quality of these methods highly depends on the labels for fiducial points of ECG signals, the annotation process for which can be laborious and sometimes very hard. Besides, due to the high morphological variation of arrhythmia, strong variations exist even between annotations from experienced cardiologists. As such, unsupervised learning based approaches might be a better choice. Inspired by human perception of ECG signals, our proposed framework first locates the abnormal beats in an ECG segment in the form of an attention map and then classifies the abnormality by focusing on these abnormal beats. Thus, the framework not only differentiates arrhythmia types but also identifies the location of the associated abnormal beats for better interpretability of the result. It is worth noting that, in our workflow, we only make use of annotation for the type of abnormality in each ECG segment, without abnormal beat localization information during training, given the difficulty and tedious effort in obtaining the latter. We validate our methods on two datasets from different sources. The first one contains 508 12-lead ECG records of Premature Ventricular Contraction (PVC) patients, which are categorized into different classes by the origin of premature contraction (e.g., left ventricle (LV) or right ventricle (RV)). For the other dataset, we process signals in the MIT-BIH Arrhythmia dataset into segments of standard length. This dataset includes various types of abnormal beats, and we select 2627 segments with PVC present and 356 segments with Atrial Premature Beat (APB) present. Experiments on both datasets show quantitative evidence that introducing the segmentation of abnormal beats through an attention map, although unsupervised, can in fact benefit the arrhythmia classification performance as measured by accuracy, sensitivity, specificity, and the area under the Receiver Operating Characteristic (ROC) curve. At the same time, a grade study by experts qualitatively demonstrates our method's promising capability to segment abnormal beats among normal ones, which can provide useful insight into the classification result. Our code and dataset, which is the first for the challenging PVC differentiation problem, will be released to the public. 2 RELATED WORKS Multitask learning There are many works devoted to training one deep learning model for multiple tasks rather than one specific task, like simultaneous segmentation and classification. (Yang et al., 2017) solves skin lesion segmentation and classification at the same time by utilizing similarities and differences across tasks. In the area of ECG signals, (Oh et al., 2019) modifies UNet to output the localization of R peaks and the arrhythmia prediction simultaneously. What those two works have in common is that different tasks share certain layers in feature extraction. In contrast, our segmenter and classifier are independent models and there is no layer sharing between them. As can be seen in Figure 1, we use attention maps as a bridge connecting the two models. (Mehta et al., 2018) segments different types of tissues in breast biopsy images with a UNet and applies a discriminative map, generated by a subbranch of the UNet, to the segmentation result as input to an MLP for diagnosis. However, their segmentation and classification tasks are not trained end-to-end.
(Zhou et al., 2019) proposes a method for collaborative learning of disease grading and lesion segmentation. They first perform a traditional semantic segmentation task with a small portion of annotated labels, and then they jointly train the segmenter and classifier for fine-tuning with an attention mechanism, which is applied to the latent features in the classification model, different from our method. Another difference is that for most existing multitask learning works, labels for each task are necessary, i.e., all tasks are supervised. Our method, on the other hand, only requires the labels of one task (classification), leading to a joint supervised/unsupervised scheme. Attention mechanism After being first proposed for machine translation (Bahdanau et al., 2014), the attention model became a prevalent concept in deep learning and has led to improved performance in various tasks in natural language processing and computer vision. (Vaswani et al., 2017) exploits self-attention in an encoder-decoder architecture to draw dependencies between input and output sentences. (Wang et al., 2017) builds a very deep network with attention modules which generate attention-aware features for image classification, and (Oktay et al., 2018) integrates attention gates into U-Net (Ronneberger et al., 2015) to highlight latent channels informative for the segmentation task. When it comes to ECG, (Hong et al., 2019) proposes a multilevel knowledge guided attention network to discriminate atrial fibrillation (AF) patients, making the learned models explainable at the beat level, rhythm level, and frequency level, which is highly related to our work. Our method and theirs, however, are quite different in the way attention weights are derived and applied, as well as in the output of the attention network. First, in that work, the attention weights are obtained from the outputs of hidden layers, while ours come directly from the input. Second, domain knowledge about AF is needed to help the attention extraction, so the process is weakly supervised, while ours does not use any external information and is fully unsupervised. Third, their attention weights are applied to latent features, while ours are applied to the input for better interpretability. Finally, in that work, the input ECG segment is divided into equal-length segments in advance and the attention network output only indicates which segment contains the target arrhythmia. The quality highly depends on how the segment is divided, and it does not provide the exact locations of abnormal beats. On the other hand, our method directly locates the abnormal beats on the entire input ECG, offering potentially better interpretability and robustness. 3 METHOD 3.1 OVERVIEW OF THE FRAMEWORK Here we briefly introduce the workflow of our joint learning framework for supervised classification and unsupervised segmentation. First, in this work, we choose to model the input signal as a one-dimensional signal $D \in \mathbb{R}^{M \times N}$, where $M$ is the number of leads and $N$ is the length of the input ECG segment (number of samples over time). We then use a one-dimensional (1D) fully convolutional network called the segmenter $S$ to output a feature map $L = S(D) \in \mathbb{R}^{M \times N}$. After that, we apply a pooling layer to generate window-style element-wise attention $A \in \mathbb{R}^{M \times N}$, containing weights directly for every sample in the input ECG.
The after-attention signal $X = A \odot D \in \mathbb{R}^{M \times N}$, where $\odot$ represents element-wise multiplication, is then fed into a multi-layer CNN called the classifier $C$, in which the outermost fully connected layer gives the prediction of the arrhythmia types. After training, the abnormal areas are highlighted in $X$, thus achieving the goal of segmenting abnormal beats from normal ones. Moreover, $X$, which indicates those beats that are highly associated with the differentiation task, also serves as an explanation for $C$'s decision. The architecture of our framework is illustrated in Figure 1. 3.2 SEGMENTER AND CLASSIFIER In most existing works, the attention map is fused with the deep features in a neural network. However, for our specific purposes of enhancing the interpretability of the classification results as well as unsupervised segmentation, the best result is obtained by directly applying it to the input signal $D$. In order to generate attention weights of the same length as $D$, we choose to utilize UNet (Ronneberger et al., 2015), a fully convolutional network highlighted by skip connections at different stages. The encoding path extracts features recursively and the decoding path reconstructs the data as instructed by the loss function. Note that the output of $S$ has only 1 channel, and we expand it channel-wise so that it matches the channel dimension of the ECG signal and, at the same time, each channel gets the same attention. The reason is that the 12 leads are measured synchronously and the abnormal beats occur at the same time across all the leads. Both recurrent neural networks (RNN) and CNNs are candidate architectures for many arrhythmia classification works. An RNN takes an ECG signal as sequential data and is good at dealing with the temporal relationship. A CNN focuses on the recognition of shapes and patterns in the ECG, and is thus less sensitive to the relative position of abnormal beats with respect to normal ones. Because abnormal beats may occur randomly among normal beats, we decide to use a CNN as the backbone of our classifier. The detailed implementation of $C$ is shown in Figure 1(b).
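The overall forward pass can be sketched in PyTorch as follows; this is our own minimal rendering, with the segmenter, classifier, and pooling left as placeholders for the architectures in Figure 1.

```python
import torch.nn as nn

class JointModel(nn.Module):
    """Sketch: segmenter -> window-style attention -> classifier."""
    def __init__(self, segmenter, classifier, pool):
        super().__init__()
        self.segmenter = segmenter    # 1-D U-Net: D -> L, same (M, N) shape
        self.pool = pool              # large-kernel pooling: L -> A
        self.classifier = classifier  # 1-D CNN + MLP: X -> class logits

    def forward(self, d):
        l = self.segmenter(d)         # feature map L = S(D)
        a = self.pool(l)              # window-style attention map A
        x = a * d                     # after-attention signal X (element-wise)
        return self.classifier(x), x  # prediction plus segmentation readout
```

Because the classification loss backpropagates through x and a into the segmenter, training the classifier alone still updates how the signal is segmented.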
3.3 POOLING FOR WINDOW-STYLE ATTENTION We do not use the output of the segmenter $L$ as the attention map directly, but instead perform pooling with a large kernel size first. This is out of consideration for both interpretability and performance. Regarding interpretability, it is desirable that each abnormal beat is uniformly highlighted, i.e., the attention weights should be almost constant and smooth for all the samples within each abnormal beat. Regarding performance, it is desirable that the attention map $A$ does not distort the shape of abnormal beats after it is applied to the input. A pooling layer is the easiest way to achieve this goal, functioning as a sliding window over multiple samples in an ECG signal for global information extraction. Max pooling outputs the same value around a local maximum, and average pooling reduces fluctuation by averaging over multiple samples. The kernel size cannot be too large, as this may fuse sharp changes from neighboring areas and lead to the loss of local information. Therefore, deciding the proper pooling kernel size is essentially finding a balance between local and global information preservation. Through experiments to be shown in Section 5.3, we find that setting the kernel size to nearly half the length of a normal beat yields the best balance between performance and interpretability. Zeros are padded on both sides of the segmenter output $L$ to keep the length of the resulting attention map $A$ after pooling the same as that of the input. Meanwhile, the polarization of the QRS complex is a critical feature of an ECG signal, while the commonly used pooling layers, like max pooling and average pooling, fail to control the sign of the output, leading to differentiation performance degradation. The rectified linear unit (ReLU), $\sigma(l_{c,m}) = \max(0, l_{c,m})$, where $c$ and $m$ denote the channel number and spatial position in $L$ respectively, is usually used as an activation function to add non-linearity to a neural network for stronger representation ability. In this work, we can apply ReLU on $L$ before pooling so that all the weights in $A$ generated by the following max pooling are positive. Alternatively, we replace average pooling with L2 norm pooling, which takes the L2 norm of the input over the pooling window. In that case, ReLU is not needed. The two pooling implementations can be expressed as
$$a^{\max}_{c,m} = P_{\mathrm{MAX}}(L)_{c,m} = \max\left\{\sigma(l_{c,m}), \sigma(l_{c,m+1}), \ldots, \sigma(l_{c,m+k})\right\} \quad (1)$$
$$a^{L2}_{c,m} = P_{L2}(L)_{c,m} = \sqrt{\sum_{i=m}^{m+k} l_{c,i}^2} \quad (2)$$
where $a_{c,m}$ is the $m$-th data point in channel $c$ of $A$ and $k$ is the kernel size for the pooling.
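As an illustration, the two pooling variants can be written with standard PyTorch operators; this is our sketch (the function names and the length-preserving padding are our choices), where F.lp_pool1d with norm_type=2 computes exactly the windowed root-sum-of-squares of Eq. (2). Inputs l are (batch, channels, length) tensors.

```python
import torch.nn.functional as F

def relu_max_pool(l, k=200):
    # Eq. (1): ReLU then max pooling, so all attention weights are positive.
    x = F.pad(F.relu(l), ((k - 1) // 2, k // 2))   # keep output length N
    return F.max_pool1d(x, kernel_size=k, stride=1)

def l2_norm_pool(l, k=200):
    # Eq. (2): root of summed squares over the window; no ReLU is needed
    # because squaring preserves the magnitude of negative samples.
    x = F.pad(l, ((k - 1) // 2, k // 2))
    return F.lp_pool1d(x, norm_type=2, kernel_size=k, stride=1)
```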
3.4 JOINT LEARNING Compared to a traditional segmentation network, our segmenter $S$ does not give a prediction for every point in the input ECG; instead, we generate an attention map $A$ and distinguish heartbeats by the amplitude of the weights in $A$. The segmentation result is reflected in the after-attention signal $X$. During training, with only annotation for the arrhythmia differentiation task, the segmentation task is actually unsupervised. Unlike clustering or mutual learning, the popular unsupervised segmentation methods, we train the segmenter and classifier in an end-to-end manner, and the gradient of the classification loss is backpropagated to $S$ to update how the signal is segmented. 4 EXPERIMENTS 4.1 DATASET Our experiments are conducted on two datasets from different sources. The first dataset is collected by a MAC5500 machine at a sample rate of 500 Hz. The records are all from patients diagnosed with PVC. A further catheter ablation test is performed to confirm the origins of PVC, so all the labels are accurate. For every patient, experienced cardiologists examine the long ECG record and extract an ECG segment that contains the PVC arrhythmia, with a fixed length of 5000 samples. In the dataset, there are 508 segments in total, including 135 cases of left ventricle (LV) and 373 cases of right ventricle (RV) origin. Moreover, among left ventricle patients, 91 are of left ventricular outflow tract (LVOT) origin, and among right ventricle patients, 332 are of right ventricular outflow tract (RVOT) origin. It is of clinical interest to classify between LV and RV, and between LVOT and RVOT, so we explore both problems in our experiments. We will release this dataset to the public; it will be the first for the challenging problem of PVC differentiation. The other dataset is derived from the public MIT-BIH Arrhythmia dataset (Moody & Mark, 2001), which includes 48 recordings of 47 patients, all sampled at 360 Hz. There are two leads for every record. All heartbeats in those recordings are annotated by expert cardiologists, and the arrhythmia types include PVC, atrial premature beat (APB), left/right bundle branch block beat, etc. We preprocess those ECG signals into segments with a standard length of 2000 samples. More specifically, we accumulate beats of interest into a segment until adding the next beat would make its length exceed 2000. Then we pad zeros for that segment to reach the target size. Among those segments, we focus on 356 segments with only normal beats and APB, and 2627 segments with only normal and PVC beats. As for preprocessing, we adopt a series of filters to remove high-frequency noise and baseline drift. Besides, we apply normalization to each lead independently so that the voltage ranges are the same for all 12 leads. 4.2 EXPERIMENT SETTING All our code is based on the open-source machine learning library PyTorch. Architecture details of our segmenter and classifier are shown in Figure 1. As for the classifier, inside each block are two serialized Conv + BN + ReLU combinations. The dimensions of the weights in the two-layer perceptron are 128 × 128 and 128 × 2, respectively. We choose the Adam algorithm (Kingma & Ba, 2014) as our optimizer, with the initial learning rate set to 0.00001. The number of training epochs is set to 120, as we observe that the lowest validation loss and highest accuracy are achieved by then. The loss function for classification is the negative log likelihood loss, and we add weights for the different classes due to the imbalanced distribution of the dataset. We apply five-fold cross-validation with the different classes evenly distributed between folds, and the average performance is reported. We implement our method with L2 norm pooling as well as with the combination of ReLU and max pooling at the output of the segmenter, as discussed in Section 3.3. All the hyperparameters remain the same for both variants.
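The training configuration above can be condensed into a short, hedged PyTorch sketch; model and train_loader are assumed to exist (e.g., the JointModel sketch earlier), and the particular class weights are our illustrative choice, since the paper does not specify them.

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
# Up-weight the rarer class; inverse-frequency weights for (LV, RV) are
# one plausible choice for the 135-vs-373 imbalance (an assumption here).
weights = torch.tensor([1 / 135, 1 / 373])
criterion = torch.nn.NLLLoss(weight=weights / weights.sum())

for epoch in range(120):                 # 120 epochs, as in Section 4.2
    for d, y in train_loader:
        logits, _ = model(d)             # after-attention signal unused here
        loss = criterion(torch.log_softmax(logits, dim=-1), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```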
5 RESULTS

5.1 COMPARISON OF PVC DIFFERENTIATION PERFORMANCE

The metrics we select for comparison are overall accuracy, specificity, sensitivity, and the AUC (area under the curve) of the ROC (receiver operating characteristic) curve. We evaluate the performance of all methods on two tasks: differentiating PVC originating in the LV and RV (the (RV, LV) task) and PVC originating in the LVOT and RVOT (the (RVOT, LVOT) task). For the (RV, LV) task, specificity and sensitivity are calculated with regard to RV; for the (RVOT, LVOT) task, with regard to RVOT; and for the (PVC, APB) task, with regard to PVC. For all tasks, we calculate the AUC of the ROC curve for each class and report the average value.

Table 1 lists the results for the three baseline methods and for our method with the two pooling variants. A few meaningful observations can be made from the table. Firstly, our methods in general score higher than the baselines on almost all benchmarks, including accuracy, specificity, sensitivity, and AUC. This shows that our attention mechanism indeed improves the classifier's capability for PVC origin classification. Secondly, L2 norm pooling performs better than the combination of ReLU and max pooling, which suggests a limitation of ReLU: it discards the information carried by negative values, whereas L2 norm pooling preserves it. Finally, the cascaded segmenter-and-classifier method, which has the same number of layers as our method, performs poorly. This confirms that the better classification performance of our method actually comes from the attention mechanism rather than from a deeper architecture. Indeed, Figure 2 shows a large gap between the training loss and the validation loss of the cascaded method after several epochs, suggesting apparent overfitting. Moreover, the benefit of adding a segmenter on the MIT-BIH dataset appears limited, but this is because distinguishing between PVC and APB is much less challenging than differentiating the various origins of PVC. Also, the baseline already achieves good performance there.

5.2 EVALUATION OF SEGMENTATION AND INTERPRETABILITY

Three visual examples of the segmentation results are shown in Figure 3. The figure shows that after applying the attention map to the original ECG signal, the locations of the abnormal beats can be easily identified. We design an independent and blind grade study, conducted by an experienced cardiologist, to qualitatively evaluate our segmenter's ability to detect abnormal beats for the (RVOT, LVOT) task. In general, qualitative evaluation is widely used in attention-related work due to its simplicity and ease of visualization (Hu, 2019), and it is the most suitable way to judge the interpretability of the results. Focusing on the after-attention signal X, we categorize the segmentation results into three classes by the contrast between abnormal and normal beats, as shown in Figure 4. Class I: all normal beats are eliminated and all abnormal beats are kept; the best interpretability is attained. Class II: some normal beats remain, but all abnormal beats are kept; interpretability is reduced, but all abnormal beats can still be identified in X. Class III: there is no significant difference between abnormal and normal beats; there is little interpretability. We randomly select the after-attention signals of 100 ECG segments, and the blind grade study shows that the ratio of cases in the three classes is 50:27:23 (Class I : Class II : Class III), which indicates strong performance of our segmenter. In practice, these segmentation results give cardiologists a quick understanding of why a prediction is made.

5.3 INFLUENCE OF KERNEL SIZE

Figure 5 compares the classification performance and segmentation results of our framework with different kernel sizes in the L2 norm pooling layer at the output of the segmenter for the (RVOT, LVOT) task. We can see an apparent degradation of accuracy, specificity, and AUC when the kernel size is increased to 300, while the performance difference between kernel sizes 100 and 200 is minor. Regarding the segmentation results, additional grade studies are conducted for kernel sizes 100 and 300. The ratio of cases in the three classes for kernel size 200 is close to that for kernel size 300, showing comparable ability in abnormal beat detection. There is a much higher number of Class III cases for kernel size 100, suggesting that too small a kernel may lead to poor segmentation and low interpretability, in accordance with the analysis in Section 3.3. After weighing interpretability against performance, we choose a kernel size of 200.

6 CONCLUSION AND DISCUSSION

In this paper, we propose a novel framework combining unsupervised abnormal beat segmentation and supervised arrhythmia differentiation. The key to the multitask learning is applying an attention map generated by a segmenter directly to the input data before the classification task. In addition, we apply a large-kernel pooling layer to constrain the shape of the attention map for better performance and easier interpretability. We use premature ventricular contraction differentiation, one of the most challenging problems in arrhythmia classification, as a case study to evaluate the effectiveness of our method. On one hand, the experimental results demonstrate better accuracy with the help of the attention map. On the other hand, we observe an obvious discrimination between abnormal beats and normal beats in the after-attention signal, indicating enhanced interpretability in clinical practice.
In the future, we expect to extend our method to higher-dimensional data such as images and videos. In our opinion, the difficulties in applying our framework to 2D/3D data are the more complicated background information and the need for a fine-grained constraint on the attention map shape. In arrhythmia classification, the “background” of an ECG signal consists simply of normal beats. In image classification, the “background” can be far more complex, e.g., birds flying among flowers or cars driving through streets. Without labels, learning the difference between target objects and their environment may be harder for the segmenter. Moreover, the target objects can vary more in size, shape, and texture, even within the same class, which requires a more elaborate design of the constraints on the attention map shape.
1. What is the main contribution of the paper regarding neural architecture for arrhythmia classification? 2. What are the strengths and weaknesses of the proposed approach compared to prior works? 3. How does the reviewer assess the empirical improvement and the choice of hyperparameters? 4. Are there any concerns about the generalization error and the significance of the results?
Review
Review This manuscript contributes a neural architecture to classify arrhythmia type from ECG data. The signal is treated as 1D, and the architecture performs joint segmentation-classification, detecting the abnormal beats and then classifying them as a function of their origin. It uses a U-Net for segmentation and, for classification, a CNN plus one fully-connected layer. The U-Net segmentation generates weights that are treated as an attention map and multiplied with the original time series after pooling over a window (which amounts to smoothing). Compared to the prior art, the central contribution put forward is the addition of the segmentation component to the architecture. The work is light on theory, and the contribution mostly resides in the empirical improvement. However, the evidence for this improvement is not rock solid, as it is shown on a single dataset with a rather small sample size. Also, I fear that the hyper-parameters are not set fully independently of the final error measure. How are hyper-parameters (such as the learning rate or architecture parameters) chosen? Given the procedure exposed in Section 5.2, it seems to me that some of the architecture parameters (kernel size) were not chosen independently of the test set. Such a choice will incur a positive bias with regard to the actual expected generalization error. With n = 500 and an accuracy of 90%, the p = .05 confidence interval of a binomial model is about 5% wide. Hence, the improvements observed by adding the segmentation on top of the classifier do not seem really significant.
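For reference, the quoted interval follows from a normal approximation to the binomial; a quick, illustrative check:

```python
import math

n, acc = 500, 0.90  # sample size and accuracy quoted above
half_width = 1.96 * math.sqrt(acc * (1 - acc) / n)  # 95% normal-approx. binomial CI
print(f"95% CI: {acc:.2f} +/- {half_width:.3f}")  # ~ +/- 0.026, i.e. about 5% wide
```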
ICLR
1. What is the focus and contribution of the paper on arrhythmia classification in ECG data? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its ad-hoc nature and lack of comparison to other state-of-the-art methods? 3. Do you have any questions or concerns about the paper's methodology, such as the segmentation and classification process, output, and attention map? 4. How clear and concise is the writing in the paper, and are there any parts that need further clarification or explanation? 5. What are the reviewer's overall assessments of the paper's quality, novelty, and reproducibility?
Review
Review The paper proposes a framework for the classification of arrhythmias in electrocardiogram (ECG) data. The proposed approach performs segmentation and classification of the ECG signal. The segmenter performs a segmentation of the signal (also called an attention map), even though the term segmentation is not quite correct. This attention-modulated signal is then classified to identify the origin of Premature Ventricular Contraction (PVC). The proposed approach is evaluated on a dataset from a single machine consisting of 508 segments (I am not sure what “segments” means in this context). The results seem OK, but it is not clear to me what level of performance is required to match that of an expert. My main concern is that the proposed approach seems rather ad hoc: the combination of segmentation (or attention) and classification in a joint fashion is hardly new, and while the results obtained are good, there is no systematic evaluation of how the method compares to other state-of-the-art ECG classification methods. Another problem is that the writing in the paper is not always clear, and it is often unclear what exactly the authors are doing. As a result, it is quite difficult to assess exactly what the authors have done or what they mean. Detailed comments: • What is the output of the classifier? Is this a binary label? Or a multi-class label? • The authors write “… the output of S has only 1 channel and we expand it channel-wise so that it matches the channel dimension of the ECG signal …” – What exactly is meant here? In Fig. 1 it seems that the segmentation output naturally has 12 channels. Should the segmentation be identical for all channels? • “We do not use the output of the segmenter L as the attention map directly but instead perform a pooling with large kernel size first” – Why is this done? What does “large kernel” mean? • Where is the attention map in Fig. 1? • How are the Premature Ventricular Contraction (PVC) origin labels defined? Is that a single time point (per channel or common to all channels) or a time window?
ICLR
Title Attention Based Joint Learning for Supervised Electrocardiogram Arrhythmia Differentiation with Unsupervised Abnormal Beat Segmentation Abstract Deep learning has shown great promise in arrhythmia classification in electrocardiogram (ECG). Existing works, when classifying an ECG segment with multiple beats, do not identify the locations of the anomalies, which reduces clinical interpretability. On the other hand, segmenting abnormal beats by deep learning usually requires annotation for a large number of regular and irregular beats, which can be laborious, sometimes even challenging, with strong inter-observer variability between experts. In this work, we propose a method capable of not only differentiating arrhythmia but also segmenting the associated abnormal beats in the ECG segment. The only annotation used in the training is the type of abnormal beats and no segmentation labels are needed. Imitating human’s perception of an ECG signal, the framework consists of a segmenter and classifier. The segmenter outputs an attention map, which aims to highlight the abnormal sections in the ECG by element-wise modulation. Afterwards, the signals are sent to a classifier for arrhythmia differentiation. Though the training data is only labeled to supervise the classifier, the segmenter and the classifier are trained in an end-to-end manner so that optimizing classification performance also adjusts how the abnormal beats are segmented. Validation of our method is conducted on two dataset. We observe that involving the unsupervised segmentation in fact boosts the classification performance. Meanwhile, a grade study performed by experts suggests that the segmenter also achieves satisfactory quality in identifying abnormal beats, which significantly enhances the interpretability of the classification results. 1 INTRODUCTION Arrhythmia in electrocardiogram (ECG) is a reflection of heart conduction abnormality and occurs randomly among normal beats. Deep learning based methods have demonstrated strong power in classifying different types of arrhythmia. There are plenty of works on classifying a single beat, involving convolutional neural networks (CNN) (Acharya et al., 2017b; Zubair et al., 2016), long short-term memory (LSTM) (Yildirim, 2018), and generative adversarial networks (GAN) (Golany & Radinsky, 2019). For these methods to work in clinical setting, however, a good segmenter is needed to accurately extract a single beat from an ECG segment, which may be hard when abnormal beats are present. Alternatively, other works (Acharya et al., 2017a; Hannun et al., 2019) try to directly identify the genres of arrhythmia present in an ECG segment. The limitation of these works is that they work as a black-box and fail to provide cardiologists with any clue on how the prediction is made such as the location of the associated abnormal beats. In terms of ECG segmentation, there are different tasks such as segmenting ECG records into beats or into P wave, QRS complexity, and T wave. On one hand, some existing works take advantage of signal processing techniques to locate some fiducial points of PQRST complex so that the ECG signals can be divided. For example, Pan-Tompkins algorithm (Pan & Tompkins, 1985) uses a combination of filters, squaring, and moving window integration to detect QRS complexity. The shortcomings of these methods are that handcraft selection of filter parameters and threshold is needed. More importantly, they are unable to distinguish abnormal heartbeats from normal ones. 
To address these issues, Moskalenko et al. (2019); Oh et al. (2019) deploy CNNs for automatic beat segmentation. However, the quality of these methods highly depends on the labels for fiducial points of ECG signals, the annotation process of which can be laborious and sometimes very hard. Besides, due to the high morphological variation of arrhythmia, strong variations exist even between annotations from experienced cardiologists. As such, unsupervised learning based approaches might be a better choice. Inspired by human’s perception of ECG signals, our proposed framework firstly locates the abnormal beats in an ECG segment in the form of attention map and then does abnormal beats classification by focusing on these abnormal beats. Thus, the framework not only differentiates arrhythmia types but also identifies the location of the associated abnormal beats for better interpretability of the result. It is worth noting that, in our workflow, we only make use of annotation for the type of abnormality in each ECG segment without abnormal beat localization information during training, given the difficulty and tedious effort in obtaining the latter. We validate our methods on two datasets from different sources. The first one contains 508 12-lead ECG records of Premature Ventricular Contraction patients, which are categorized into different classes by the origin of premature contraction (e.g., left ventricle (LV) or right ventricle (RV)). For the other dataset, we process signals in the MIT-BIH Arrhythmia dataset into segments of standard length. This dataset includes various types of abnormal beats, and we select 2627 segments with PVC present and 356 segemnts with Atrial Premature Beat (APB) present. Experiments on both two dataset show quantitative evidence that introducing the segmentation of abnormal beats through an attention map, although unsupervised, can in fact benefit the arrhythmia classification performance as measured by accuracy, sensitivity, specificity, and area under Receiver Operating Characteristic (ROC) curve. At the same time, a grade study by experts qualitatively demonstrates our method’s promising capability to segment abnormal beats among normal ones, which can provide useful insight into the classification result. Our code and dataset, which is the first for the challenging PVC differentiation problem, will be released to the public. 2 RELATED WORKS Multitask learning There are many works devoted to training one deep learning models for multitasks rather than one specific task, like simultaneous segmentation and classification. (Yang et al., 2017) solves skin lesion segmentation and classification at the same time by utilizing similarities and differences across tasks. In the area of ECG signals, (Oh et al., 2019) modifies UNet to output the localization of r peaks and arrhythmia prediction simultaneously. What those two works have in common is that different tasks share certain layers in feature extraction. In contrast, our segmenter and classifier are independent models and there is no layer sharing between them. As can be seen in Figure 1, we use attention maps as a bridge connecting the two models. (Mehta et al., 2018) segments different types of issues in breast biopsy images with a UNet and apply a discriminative map generated by a subbranch of the UNet to the segmentation result as input to a MLP for diagnosis. However, their segmentation and classification tasks are not trained end-to-end. 
(Zhou et al., 2019) proposes a method for collaborative learning of disease grading and lesion segmentation. They first perform a traditional semantic segmentation task with a small portion of annotated labels, and then they jointly train the segmenter and classifier for fine-tuning with an attention mechanism, which is applied on the latent features in the classification model, different from our method. Another difference is that for most existing multitask learning works, labels for each task are necessary, i.e., all tasks are supervised. Our method, on the other hand, only requires the labels of one task (classification), leading to a joint supervised/unsupervised scheme. Attention mechanism After firstly proposed for machine translation (Bahdanau et al., 2014), attention model became a prevalent concept in deep learning and leads to improved performance in various tasks in natural language processing and computer visions. (Vaswani et al., 2017) exploits self-attention in their encoder-decoder architecture to draw dependency between input and output sentences. (Wang et al., 2017) builds a very deep network with attention modules which generates attention-aware features for image classification and (Oktay et al., 2018) integrates attention gates into U-Net (Ronneberger et al., 2015) to highlight latent channels informative for segmentation task. When it comes to ECG, (Hong et al., 2019) proposes a multilevel knowledge guided attention network to discriminate Atrial fibrillation (AF) patients, making the learned models explainable at beat level, rhythm level, and frequency level, which is highly related to our work. Our method and theirs however are quite different in the way attention weights are derived and applied, as well as the output of attention network. First, in that work, the attention weights are obtained from the outputs of hidden layers, while ours are directly from the input. Second, domain knowledge about AF is needed to help the attention extraction, so the process is weakly supervised, while ours do not use any external information and is fully unsupervised. Third, their attention weights are applied to latent features in that work while ours are applied to the input for better interpretability. Finally, in that work, the input ECG segment is divided into equal-length segments in advance and the attention network output only indicates which segment contains the target arrhythmia. The quality highly depends on how the segment is divided, and it does not provide the exact locations of abnormal beats. On the other hand, our method directly locates the abnormal beats on the entire input ECG, offering potentially better interpretability and robustness. 3 METHOD 3.1 OVERVIEW OF THE FRAMEWORK Here we briefly introduce the workflow of our joint learning frameworks for supervised classification and unsupervised segmentation. Firstly, in this work, we choose to model the input signal as a one-dimensional signal D ∈ RM×N , where M is the number of leads and N is the length of the input ECG segment (number of samples over time). We then use a one-dimensional (1D) fully convolutional network called segmenter S to output a feature map L = S(D) ∈ RM×N , After that, we apply a pooling layer to generate window-style element-wise attention A ∈ RM×N , containing weights directly for every sample in the input ECG. 
The after-attention signalX = A D ∈ RM×N , where represents element-wise production, is then fed into a multi-layer CNN called classifier C, in which the outermost fully connected layer gives the prediction of the arrhythmia types. After training, the abnormal areas are highlighted in X , thus achieving the goal of segmenting abnormal beats from normal ones. Moreover, x, which indicates those beats that are highly associated with the differentiation task, also serves as an explanation for C’s decision. The architecture of our framework is illustrated in Fig 1. 3.2 SEGMENTER AND CLASSIFIER In most existing works, the attention map is fused with the deep features in a neural network. However, for our specific purposes of enhancing intepretability of the classification results as well as unsupervised segmentation, the best result would be obtained by directly applying it to the input signal D. In order to generate attention weights of the same length as D, we choose to utilize UNet (Ronneberger et al., 2015), a fully convolutional network highlighted by the skip connection on different stages. Encoding path extract features recursively and decoding path reconstruct the data as instructed by loss function. Note that the output of S has only 1 channel and we expand it channel-wise so that it matches the channel dimension of the ECG signal and at the same time each channel gets the same attention. The reason is that the 12 leads are measured synchronously and the abnormal beats occur at the same time across all the leads. Both recurrent neural networks (RNN) and CNN are candidate architectures for many arrhythmia classification works. RNN takes an ECG signal as sequential data and is good at dealing with the temporal relationship. CNN focuses on recognition of shapes and patterns in ECG, thus is less sensitive to the relative position of abnormal beats with respect to normal ones. Because abnormal beats may occur randomly among normal beats, we decide to use CNN as the backbone of our classifier. The detailed implementation of C is shown in 1(b). 3.3 POOLING FOR WINDOW-STYLE ATTENTION We do not use the output of the segmenter L as the attention map directly but instead perform a pooling with large kernel size first. This is out of considerations for both interpretability and performance. Regarding interpretability, it is desirable that each abnormal beat is uniformly highlighted, i.e., the attention weights should be almost constant and smooth for all the samples within each abnormal beat. Regarding performance, it is desirable that the attention map A does not distort the shape of abnormal beats after it is applied to the input X . Pooling layer is the easiest way to achieve this goal, functioning as a sliding window over multiple samples in an ECG signal for global information extraction. Max pooling outputs the same value around a local maximum, and average pooling reduces fluctuation by averaging over multiple samples. The kernel size cannot be too large, which may fuse sharp changes from neighboring areas and lead to the loss of local information. Therefore, deciding the proper pooling kernel size is essentially finding a balance between local information and global information preservation. Through experiments to be shown in Section 5.3, we find that setting the kernel size as nearly half the length of a normal beat yields the best balance between performance and interpretability. 
Padding of zeros on both sides of the segmenter output L is implemented to keep the length of the resulting attention map A after pooling to be the same as the input X . Meanwhile, the polarization of QRS complex is a critical feature for ECG signal, while the commonly used pooling layers, like max pooling and average pooling, fail to control the sign of output, leading to differentiation performance degradation. Rectified linear unit (ReLu) σ(lc,m) = max(0, lc,m), where c and m denote channel number and spatial position in L respectively, is usually performed as an activation function to add non-linearity to neural network for stronger representation ability. In this work, we can apply ReLU on L before pooling so that the all the weights in A generated by the following max pooling are positive. Alternatively, we replace average pooling with L2 norm pooling that takes square root of the L2 norm of input. In that case, ReLU is not needed. The two pooling implementations can be expressed as: ac,m max = PMAX(L)c,m = max {σ(lc,m), σ(lc,m+1), ..., σ(lc,m+k)} (1) ac,m L2 = PL2(L)c,m = √√√√m+k∑ i=m l2c,i (2) where, ac,m is the mth data point in channel c of A and k is the kernel size for the pooling. 3.4 JOINT LEARNING Compared to traditional segmentation network, our segmenter S does not give a prediction on every point in the input ECG, instead we generate an attention map A and distinguish heartbeats by the amplitude of weights in A. The segmentation result is reflected in the after-attention signal X . During training, with only annotation for the arrhythmia differentiation task, the segmentation task is actually unsupervised. Unlike clustering or mutual learning, the popular unsupervised segmentation methods, we train the segmenter and classifier in an end-to-end manner and the gradient of classification loss is backpropagated to S for updating how the signal is segmented. 4 EXPERIMENTS 4.1 DATASET Our experiments are conducted on two datasets from different sources. The first dataset is collected by a MAC5500 machine at a sample rate of 500Hz. They are all from patients diagnosed with PVC. Further catheter ablation test is performed to confirm the origins of PVC, so all the labels are accurate. For every patient, experienced cardiologists exam the long ECG record and grabs a ECG segment that contains the PVC arrhythmia with fixed length of 5000 samples. In the dataset, there are totally 508 segments, including 135 cases of left ventricle (LV) and 373 cases of right ventricle (RV). Moreover, within left ventricle patients, 91 are left ventricle outflow origins (LVOT) and among right ventricle patients 332 are right ventricle outflow origins (RVOT). It is of clinical interest to classify between LV and RV, and between LVOT and RVOT, so we will explore both problems in our experiments. We will release this dataset to the public, which will be the first for the challenging problem of PVC differentiation. The other dataset is derived from the public MIT-BIH Arrhythmia dataset (Moody & Mark, 2001), which includes 48 recordings of 47 patients, all sampled at 360 Hz. There are two leads for every records. All heart beats in those recordings are annotated by expert cardiologists and arrhythmia types include PVC, atrial premature beat (APB), left/right bundle branch block beat, etc. We preprocess those ECG signals into segment with standard 2000 samples. More specifically, we accumulatively add beat of interest to a segment until its length will exceeds 2000 if the next beat is added. 
Then we pad zeros for that segment to the target size. Among those segments, we focus on 356 segments with only normal beats and APB and 2627 segments with only normal and PVC beats. As for preprocessing, we adopt a series of filters and remove the high frequency noise and baseline drift. Besides, we apply normalization to each lead independently so that the voltage ranges are all the same for the 12 leads. 4.2 EXPERIMENT SETTING All our codes are based on the open source machine learning library PyTorch. Architecture details of our segmenter and classifier are shown in Figure 1. As for the classifier, inside each block are two serialized Conv + BN + ReLU combinations. The dimensions of weights in the two-layer perceptron are 128 × 128 and 128 × 2 respectively. We choose Adam algorithm (Kingma & Ba, 2014) as our optimizer with initial learning rate set to 0.00001. The training epoch is set to 120 as we observe lowest validation loss and highest accuracy can be achieved by then. The loss function for the classification is negative log likelihood loss and we add weights for different classes due to the imbalanced distribution of the dataset. We apply five-fold cross-validation with different classes evenly distributed between folds, and the average performance is reported. We implement our method with L2 norm pooling as well as a combination of ReLu and max pooling at the output of the segmenter, as discussed in Section 3.3. All the hyperparameters remain the same 5 RESULTS 5.1 COMPARISON OF PVC DIFFERENTIATION PERFORMANCE The metrics we select for comparison include overall accuracy, specificity, sensitivity, and AUC (area under curve) of ROC (Receiver Operating Characteristic) curve. We evaluate the performance of all the methods on two tasks: differentiating PVC originating in LV and RV ((RV,LV) task), as well as PVC originating in LVOT and RVOT ((RVOT, LVOT) task). For (RV, LV) task, the specificity and sensitivity are calculated with regards to RV, for (RVOT, LVOT), it is with regards to RVOT, and it is PVC for (PVC, APB) task. For all tasks, we calculate the AUC of ROC curve for each class and record the average value. Table 1 lists the results for three baseline methods and our methods with two different pooling. A few meaningful observations can be made out of the table. Firstly, in general our methods show higher score in almost all benchmarks including accuracy, specificity, sensitivity and AUC, than the baseline methods. This proves that our attention mechanism indeed improves the classifier’s capability of PVC origin classification. Secondly, using L2 norm pooling has better performance than using the combination of ReLU and max pooling, which implies the limitation of ReLU which may lose information in negative values. In contrast, L2 norm pooling preserves the negative information. Finally, the cascaded segmenter and classifier method, which has the same number of layers as our methods, has poor performance. This confirms that the better classification performance of our methods actually comes from the attention mechanism instead of deeper architecture. Actually, from Figure 2 we can see that there is a large gap between the training loss and the validation loss for the cascaded segmenter and classifier method after several epochs, suggesting apparent overfitting. Moreover, the benefits of adding a segmenter to the MIT-BIH dataset seems not large, but it is due to that distinguishing between PVC and APB is much less challenging than differentiating various origins of PVC. 
Also, the baseline already achieves good performance. 5.2 EVALUATION OF SEGMENTATION AND INTERPRETABILITY Three visual examples of the segmentation results are shown in Figure 3. From the figure we can see that after applying the attention map to the original ECG signal, the location of the abnormal beats can be easily identified. We design an independent and blind grade study by an experienced cardiologist to qualitatively evaluate our segmenter’s ability to detect abnormal beats for the (RVOT, LVOT) task. In general, qualitative evaluation is widely used in attention mechanism related works due to simplicity and visualization (Hu, 2019). It is also most suitable to judge the intepretability perspective of the results. Focusing on the after-attention signal X , we categorize the segmentation result into three classes by the contrast between abnormal beats and normal beats, as shown in Figure 4. Class I: all normal beats are eliminated, and all abnormal beats are kept. Best interpretability is attained. Class II: some normal beats still remain, and all abnormal beats are kept. In this case, interpretability is reduced but all abnormal beats can still be identified in X . Class III: there is no significant difference between abnormal beats and normal beats. There is little interpretability. We randomly select the after-attention signals of 100 ECG segments and the blind grade study result shows that the number of cases in the three classes are 50:27:23 (Class I: Class II: Class III), which implies superior performance of our segmenter. In practice, these segmentation results provide cardiologists quick understanding of why the prediction is made. 5.3 INFLUENCE OF KERNEL SIZE Fig 5 shows the comparison of classification performance and segmentation result of our framework with different kernel sizes in the L2 norm pooling layer at the output of the segmenter regarding the (RVOT, LVOT) task. We can see apparent degradation of accuracy, specificity, and AUC when the kernel size is increased to 300 while the performance difference between kernel size 100 and 200 is minor. Regarding the segmentation result, additional grade studies are conducted on kernel size 100 and kernel size 300. The ratios between the number of cases in the three classes when the kernel size is 200 is close to those when kernel size is 300, showing comparable ability of abnormal beats detection. There is a much higher number of class III cases for kernel size 100, suggesting that too small kernels may suffer from poor segmentation and low intrepretability in accordance with the analysis in Section 3.3. After weighing interpretbility and performance, we choose the kernel size of 200. 6 CONCLUSION AND DISCUSSION In this paper, we propose a novel framework combining unsupervised abnormal beats segmentation and supervised arrhythmia differentiation. The key to the multitask learning is applying an attention map generated by a segmenter directly to input data before the classification task. In addition, we perform a large-kernel pooling layer to constrain the shape of attention map for better performance and easier interpretability. We use premature ventricular contraction differentiating, one of the most challenging problems in arrhythmia classification as a case study to evaluate effectiveness of our method. On one hand, experiment result demonstrates better accuracy with the help of attention map. On the other hand, we observe obvious discrimination between abnormal beats and normal beats from after-attention signal. 
This indicates enhanced interpretability in clinical practice. In the future, we expect to extend our method to high-dimensional data such as images and videos. In our opinion, the difficulties in applying our framework to 2-D/3-D data are the more complicated background information and the need for fine-grained constraints on the attention map shape. In arrhythmia classification, the "background" in an ECG signal is just normal beats. For image classification, the "background" can be far more complex, such as birds flying among flowers or cars driving through streets. Learning the difference between target objects and their environment may be harder for the segmenter without labels. On the other hand, the target objects can vary more in size, shape, and texture, even within the same class, which requires a more elaborate design of constraints on the attention map shape.
1. What is the main contribution of the paper regarding ECG signal processing? 2. How does the proposed approach differ from prior works in terms of task specific changes? 3. How might the approach generalize to other tasks beyond PVC detection? 4. Why did the authors choose not to perform experiments on public datasets like PhysioNet? 5. How might transfer learning results enhance the paper's findings? 6. How might the presentation of classification metrics on their own improve Table 1? 7. Are there any concerns regarding individual variation in physiological signals? 8. Can the authors provide more information about the train/val/test splits used in their experiments?
Review
Review This paper presents a method for segmentation and classification of ECG data, applied to the task of segmenting and detecting Premature Ventricular Contractions (PVC). The task is semi-supervised, in the sense that segmentation labels are not required but labels for the PVC events (classification) are used. The authors motivate this application quite well, and detecting abnormalities in ECG signals is an important task of clinical relevance. I can understand why segmentation labels may be very laborious to collect and why unsupervised methods would be desirable. The proposed approach builds upon U-Net and introduces some task-specific changes. However, I would argue that this is primarily an application paper. I don't mean that as a criticism necessarily; I think that strong and well-motivated applications of machine learning are important and informative. However, it would be helpful if the authors could discuss more about how their approach might generalize to other tasks, both the detection of other types of arrhythmias and other temporal segmentation and classification tasks. My main comments regarding the paper are around the experimental evaluation. The authors highlight that there are some published baselines for this task, or at least similar related works (e.g., Moskalenko et al. (2019); Oh et al. (2019)), and/or the authors could have applied classification on top of features extracted using Pan-Tompkins - but that would be a cruder baseline. While I recognize that these approaches might not enable unsupervised segmentation, and so direct comparisons on that with the full approach they propose might be hard, it might be possible to present a comparison of classification metrics on their own. Perhaps I am misunderstanding, but it doesn't seem as though Table 1 includes such a comparison; rather, the baselines are different from the previously published methods - is that correct? I would almost describe Table 1 as ablation results rather than a comparison with other published baselines. I'd like to know the authors' response to that, and if Table 1 does show these results, perhaps linking the rows to the previous approaches might be helpful? Or justifying why it isn't appropriate to show these comparisons. I don't say this just because the authors should show better numbers, but rather to ground the chosen baselines in the context of previous work in this space. Building from the previous point, I think this paper would be an excellent case for showing transfer learning results; it seems to me that PhysioNet provides a large amount of available data for ECG classification. A couple of questions I'd like to hear the authors' responses to: Why did they not do any experiments on these public datasets? Is there a reason they are not appropriate? Do they not have the right labels, are they not large enough, or do you need full 12-lead recordings (I am not sure if those are available in the PhysioNet datasets - but I imagine so)? Even if training your method on your own dataset is preferable, it would seem natural to test it on a set from PhysioNet, perhaps even with a different type of arrhythmia, to see how much performance degrades. This I think would be most informative, showing both segmentation and classification results. Fig. 3 is a nice illustration, but it is quite difficult to read. I might suggest reorganizing it. I am not sure showing multiple leads is necessary, and maybe limiting it to two columns might help.
I'd encourage the authors to leverage supplementary material to show more examples, as I do think these help. Finally, physiological signals are notorious for having large individual variation. I'd be interested to have the authors discuss this more. I couldn't find information about how the train/val/test splits were organized and whether they were person-independent, etc. The following sentence in Section 4.2, "We apply five-fold cross-validation with different classes evenly distributed between folds, and the average performance is reported," doesn't seem to mention that. Knowing more about the splits would be very helpful. This is perhaps another reason that performing experiments on at least one PhysioNet dataset would be helpful, as the train, val, and test splits could be released. But I acknowledge that the authors say they will release their data, which is good.
ICLR
Title Attention Based Joint Learning for Supervised Electrocardiogram Arrhythmia Differentiation with Unsupervised Abnormal Beat Segmentation Abstract Deep learning has shown great promise in arrhythmia classification in electrocardiogram (ECG). Existing works, when classifying an ECG segment with multiple beats, do not identify the locations of the anomalies, which reduces clinical interpretability. On the other hand, segmenting abnormal beats by deep learning usually requires annotations for a large number of regular and irregular beats, which can be laborious, and sometimes even challenging, with strong inter-observer variability between experts. In this work, we propose a method capable of not only differentiating arrhythmia but also segmenting the associated abnormal beats in the ECG segment. The only annotation used in training is the type of abnormal beats; no segmentation labels are needed. Imitating a human's perception of an ECG signal, the framework consists of a segmenter and a classifier. The segmenter outputs an attention map, which aims to highlight the abnormal sections in the ECG by element-wise modulation. Afterwards, the signals are sent to a classifier for arrhythmia differentiation. Though the training data is only labeled to supervise the classifier, the segmenter and the classifier are trained in an end-to-end manner so that optimizing classification performance also adjusts how the abnormal beats are segmented. Validation of our method is conducted on two datasets. We observe that involving the unsupervised segmentation in fact boosts the classification performance. Meanwhile, a grade study performed by experts suggests that the segmenter also achieves satisfactory quality in identifying abnormal beats, which significantly enhances the interpretability of the classification results. 1 INTRODUCTION Arrhythmia in electrocardiogram (ECG) is a reflection of heart conduction abnormality and occurs randomly among normal beats. Deep learning based methods have demonstrated strong power in classifying different types of arrhythmia. There are plenty of works on classifying a single beat, involving convolutional neural networks (CNN) (Acharya et al., 2017b; Zubair et al., 2016), long short-term memory (LSTM) (Yildirim, 2018), and generative adversarial networks (GAN) (Golany & Radinsky, 2019). For these methods to work in a clinical setting, however, a good segmenter is needed to accurately extract a single beat from an ECG segment, which may be hard when abnormal beats are present. Alternatively, other works (Acharya et al., 2017a; Hannun et al., 2019) try to directly identify the types of arrhythmia present in an ECG segment. The limitation of these works is that they operate as a black box and fail to provide cardiologists with any clue on how the prediction is made, such as the locations of the associated abnormal beats. In terms of ECG segmentation, there are different tasks, such as segmenting ECG records into beats or into the P wave, QRS complex, and T wave. On one hand, some existing works take advantage of signal processing techniques to locate fiducial points of the PQRST complex so that the ECG signals can be divided. For example, the Pan-Tompkins algorithm (Pan & Tompkins, 1985) uses a combination of filters, squaring, and moving-window integration to detect the QRS complex. The shortcoming of these methods is that handcrafted selection of filter parameters and thresholds is needed. More importantly, they are unable to distinguish abnormal heartbeats from normal ones.
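As a point of reference, the Pan-Tompkins pipeline mentioned above can be sketched as below; the filter order, pass band, window length, and threshold rule are illustrative assumptions rather than the exact values of the original algorithm.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pan_tompkins_qrs(ecg, fs=360):
    """Rough Pan-Tompkins-style QRS detection on a single-lead signal."""
    # 1. Band-pass filter (roughly 5-15 Hz) to emphasize QRS energy.
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # 2. Differentiate to highlight the steep QRS slopes.
    diff = np.diff(filtered)
    # 3. Square point-wise to rectify and amplify large slopes.
    squared = diff ** 2
    # 4. Moving-window integration (~150 ms window).
    win = int(0.15 * fs)
    integrated = np.convolve(squared, np.ones(win) / win, mode="same")
    # 5. A simple fixed threshold stands in for the adaptive thresholding
    #    of the full algorithm.
    return np.where(integrated > 0.5 * integrated.max())[0]
```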
To address these issues, Moskalenko et al. (2019) and Oh et al. (2019) deploy CNNs for automatic beat segmentation. However, the quality of these methods highly depends on the labels for the fiducial points of the ECG signals, the annotation process of which can be laborious and sometimes very hard. Besides, due to the high morphological variation of arrhythmia, strong variations exist even between annotations from experienced cardiologists. As such, unsupervised learning based approaches might be a better choice. Inspired by a human's perception of ECG signals, our proposed framework first locates the abnormal beats in an ECG segment in the form of an attention map and then classifies the abnormality by focusing on these abnormal beats. Thus, the framework not only differentiates arrhythmia types but also identifies the locations of the associated abnormal beats for better interpretability of the result. It is worth noting that, in our workflow, we only make use of annotations for the type of abnormality in each ECG segment, without abnormal beat localization information during training, given the difficulty and tedious effort in obtaining the latter. We validate our method on two datasets from different sources. The first one contains 508 12-lead ECG records of Premature Ventricular Contraction patients, which are categorized into different classes by the origin of the premature contraction (e.g., left ventricle (LV) or right ventricle (RV)). For the other dataset, we process signals in the MIT-BIH Arrhythmia dataset into segments of standard length. This dataset includes various types of abnormal beats, and we select 2627 segments with PVC present and 356 segments with Atrial Premature Beat (APB) present. Experiments on both datasets show quantitative evidence that introducing the segmentation of abnormal beats through an attention map, although unsupervised, can in fact benefit arrhythmia classification performance as measured by accuracy, sensitivity, specificity, and the area under the Receiver Operating Characteristic (ROC) curve. At the same time, a grade study by experts qualitatively demonstrates our method's promising capability to segment abnormal beats among normal ones, which can provide useful insight into the classification result. Our code and dataset, which is the first for the challenging PVC differentiation problem, will be released to the public. 2 RELATED WORKS Multitask learning There are many works devoted to training one deep learning model for multiple tasks rather than one specific task, such as simultaneous segmentation and classification. Yang et al. (2017) solve skin lesion segmentation and classification at the same time by utilizing similarities and differences across tasks. In the area of ECG signals, Oh et al. (2019) modify UNet to output the localization of R peaks and the arrhythmia prediction simultaneously. What those two works have in common is that the different tasks share certain layers for feature extraction. In contrast, our segmenter and classifier are independent models and there is no layer sharing between them. As can be seen in Figure 1, we use attention maps as a bridge connecting the two models. Mehta et al. (2018) segment different types of tissue in breast biopsy images with a UNet and apply a discriminative map generated by a subbranch of the UNet to the segmentation result as input to an MLP for diagnosis. However, their segmentation and classification tasks are not trained end-to-end.
Zhou et al. (2019) propose a method for collaborative learning of disease grading and lesion segmentation. They first perform a traditional semantic segmentation task with a small portion of annotated labels, and then they jointly train the segmenter and classifier for fine-tuning with an attention mechanism, which is applied to the latent features in the classification model, different from our method. Another difference is that for most existing multitask learning works, labels for each task are necessary, i.e., all tasks are supervised. Our method, on the other hand, only requires the labels of one task (classification), leading to a joint supervised/unsupervised scheme. Attention mechanism After first being proposed for machine translation (Bahdanau et al., 2014), the attention model became a prevalent concept in deep learning and has led to improved performance on various tasks in natural language processing and computer vision. Vaswani et al. (2017) exploit self-attention in their encoder-decoder architecture to draw dependencies between input and output sentences. Wang et al. (2017) build a very deep network with attention modules that generate attention-aware features for image classification, and Oktay et al. (2018) integrate attention gates into U-Net (Ronneberger et al., 2015) to highlight latent channels informative for the segmentation task. When it comes to ECG, Hong et al. (2019) propose a multilevel knowledge guided attention network to discriminate Atrial fibrillation (AF) patients, making the learned models explainable at the beat level, rhythm level, and frequency level, which is highly related to our work. Our method and theirs, however, are quite different in the way attention weights are derived and applied, as well as in the output of the attention network. First, in that work the attention weights are obtained from the outputs of hidden layers, while ours are derived directly from the input. Second, domain knowledge about AF is needed to help the attention extraction, so the process is weakly supervised, while ours does not use any external information and is fully unsupervised. Third, their attention weights are applied to latent features, while ours are applied to the input for better interpretability. Finally, in that work the input ECG segment is divided into equal-length segments in advance and the attention network output only indicates which segment contains the target arrhythmia; the quality highly depends on how the segment is divided, and it does not provide the exact locations of abnormal beats. Our method, on the other hand, directly locates the abnormal beats on the entire input ECG, offering potentially better interpretability and robustness. 3 METHOD 3.1 OVERVIEW OF THE FRAMEWORK Here we briefly introduce the workflow of our joint learning framework for supervised classification and unsupervised segmentation. First, we model the input signal as a one-dimensional signal $D \in \mathbb{R}^{M \times N}$, where $M$ is the number of leads and $N$ is the length of the input ECG segment (number of samples over time). We then use a one-dimensional (1D) fully convolutional network called the segmenter $S$ to output a feature map $L = S(D) \in \mathbb{R}^{M \times N}$. After that, we apply a pooling layer to generate a window-style element-wise attention map $A \in \mathbb{R}^{M \times N}$, containing weights directly for every sample in the input ECG.
The after-attention signal $X = A \odot D \in \mathbb{R}^{M \times N}$, where $\odot$ denotes element-wise multiplication, is then fed into a multi-layer CNN called the classifier $C$, in which the outermost fully connected layer gives the prediction of the arrhythmia type. After training, the abnormal areas are highlighted in $X$, thus achieving the goal of segmenting abnormal beats from normal ones. Moreover, $X$, which indicates those beats that are highly associated with the differentiation task, also serves as an explanation for $C$'s decision. The architecture of our framework is illustrated in Figure 1. 3.2 SEGMENTER AND CLASSIFIER In most existing works, the attention map is fused with the deep features in a neural network. However, for our specific purposes of enhancing the interpretability of the classification results as well as unsupervised segmentation, the best result is obtained by directly applying it to the input signal $D$. In order to generate attention weights of the same length as $D$, we choose to utilize UNet (Ronneberger et al., 2015), a fully convolutional network characterized by skip connections between its stages. The encoding path extracts features recursively and the decoding path reconstructs the data as instructed by the loss function. Note that the output of $S$ has only 1 channel, and we expand it channel-wise so that it matches the channel dimension of the ECG signal, with each channel getting the same attention. The reason is that the 12 leads are measured synchronously and the abnormal beats occur at the same time across all the leads. Both recurrent neural networks (RNNs) and CNNs are candidate architectures for many arrhythmia classification works. An RNN treats an ECG signal as sequential data and is good at dealing with temporal relationships. A CNN focuses on recognizing shapes and patterns in the ECG, and is thus less sensitive to the relative position of abnormal beats with respect to normal ones. Because abnormal beats may occur randomly among normal beats, we decide to use a CNN as the backbone of our classifier. The detailed implementation of $C$ is shown in Figure 1(b). 3.3 POOLING FOR WINDOW-STYLE ATTENTION We do not use the output of the segmenter $L$ as the attention map directly, but instead first apply a pooling with a large kernel size. This is out of consideration for both interpretability and performance. Regarding interpretability, it is desirable that each abnormal beat is uniformly highlighted, i.e., the attention weights should be almost constant and smooth for all the samples within each abnormal beat. Regarding performance, it is desirable that the attention map $A$ does not distort the shape of abnormal beats after it is applied to the input to produce $X$. A pooling layer is the easiest way to achieve this goal, functioning as a sliding window over multiple samples in an ECG signal for global information extraction. Max pooling outputs the same value around a local maximum, and average pooling reduces fluctuation by averaging over multiple samples. The kernel size cannot be too large, however, as that may fuse sharp changes from neighboring areas and lead to the loss of local information. Therefore, deciding the proper pooling kernel size is essentially finding a balance between local and global information preservation. Through experiments to be shown in Section 5.3, we find that setting the kernel size to nearly half the length of a normal beat yields the best balance between performance and interpretability.
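A minimal sketch of this forward pass is given below; `segmenter` and `classifier` stand for the UNet and CNN of Figure 1 (assumed defined elsewhere), and `pool_fn` is one of the two window-style poolings of Section 3.3, sketched after the equations below.

```python
import torch

def forward(segmenter, classifier, pool_fn, d):
    # d: input ECG, shape (batch, M leads, N samples)
    l = segmenter(d)                   # feature map L, shape (batch, 1, N)
    a = pool_fn(l)                     # window-style attention map A, (batch, 1, N)
    a = a.expand(-1, d.shape[1], -1)   # same attention for all leads
    x = a * d                          # after-attention signal X = A ⊙ D
    return classifier(x), x            # arrhythmia prediction and segmentation
```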
Padding with zeros on both sides of the segmenter output $L$ is implemented to keep the length of the resulting attention map $A$ after pooling the same as that of the input. Meanwhile, the polarization of the QRS complex is a critical feature of an ECG signal, while the commonly used pooling layers, like max pooling and average pooling, fail to control the sign of the output, leading to differentiation performance degradation. The rectified linear unit (ReLU), $\sigma(l_{c,m}) = \max(0, l_{c,m})$, where $c$ and $m$ denote the channel number and spatial position in $L$ respectively, is usually used as an activation function to add non-linearity to a neural network for stronger representation ability. In this work, we can apply ReLU on $L$ before pooling so that all the weights in $A$ generated by the following max pooling are positive. Alternatively, we replace average pooling with L2 norm pooling, which computes the L2 norm of the input within each pooling window. In that case, ReLU is not needed. The two pooling implementations can be expressed as: $a^{\max}_{c,m} = P_{\mathrm{MAX}}(L)_{c,m} = \max\{\sigma(l_{c,m}), \sigma(l_{c,m+1}), \ldots, \sigma(l_{c,m+k})\}$ (1) and $a^{L2}_{c,m} = P_{L2}(L)_{c,m} = \sqrt{\sum_{i=m}^{m+k} l_{c,i}^2}$ (2), where $a_{c,m}$ is the $m$-th data point in channel $c$ of $A$ and $k$ is the kernel size of the pooling. 3.4 JOINT LEARNING Compared to a traditional segmentation network, our segmenter $S$ does not give a prediction for every point in the input ECG; instead, we generate an attention map $A$ and distinguish heartbeats by the amplitude of the weights in $A$. The segmentation result is reflected in the after-attention signal $X$. During training, with only annotations for the arrhythmia differentiation task, the segmentation task is actually unsupervised. Unlike clustering or mutual learning, the popular unsupervised segmentation methods, we train the segmenter and classifier in an end-to-end manner, and the gradient of the classification loss is backpropagated to $S$ to update how the signal is segmented. 4 EXPERIMENTS 4.1 DATASET Our experiments are conducted on two datasets from different sources. The first dataset is collected with a MAC5500 machine at a sample rate of 500 Hz. All records are from patients diagnosed with PVC. A further catheter ablation test is performed to confirm the origins of PVC, so all the labels are accurate. For every patient, experienced cardiologists examine the long ECG record and extract an ECG segment that contains the PVC arrhythmia, with a fixed length of 5000 samples. In the dataset, there are 508 segments in total, including 135 cases of left ventricle (LV) origin and 373 cases of right ventricle (RV) origin. Moreover, among the left ventricle patients, 91 have left ventricular outflow tract (LVOT) origins, and among the right ventricle patients, 332 have right ventricular outflow tract (RVOT) origins. It is of clinical interest to classify between LV and RV, and between LVOT and RVOT, so we explore both problems in our experiments. We will release this dataset to the public, which will be the first for the challenging problem of PVC differentiation. The other dataset is derived from the public MIT-BIH Arrhythmia dataset (Moody & Mark, 2001), which includes 48 recordings of 47 patients, all sampled at 360 Hz. There are two leads for every record. All heartbeats in those recordings are annotated by expert cardiologists, and the arrhythmia types include PVC, atrial premature beat (APB), left/right bundle branch block beat, etc. We preprocess those ECG signals into segments with a standard length of 2000 samples. More specifically, we cumulatively add beats of interest to a segment until adding the next beat would make its length exceed 2000 samples.
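The two pooling variants of Eqs. (1) and (2) can be sketched in PyTorch as follows; stride 1 and symmetric zero-padding (our assumed convention) keep the attention map at the input length.

```python
import torch
import torch.nn.functional as F

def relu_max_pool(l, k):
    # Eq. (1): ReLU first, so all attention weights after max pooling are positive.
    pad = k // 2
    out = F.max_pool1d(F.pad(F.relu(l), (pad, pad)), k, stride=1)
    return out[..., : l.shape[-1]]  # trim so A matches the input length

def l2_norm_pool(l, k):
    # Eq. (2): square, sum over the window, square root; no ReLU is needed
    # because squaring already preserves the energy of negative values.
    pad = k // 2
    window_sum = F.avg_pool1d(F.pad(l, (pad, pad)) ** 2, k, stride=1) * k
    return torch.sqrt(window_sum)[..., : l.shape[-1]]
```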
Then we pad that segment with zeros to the target size. Among those segments, we focus on 356 segments containing only normal beats and APB and 2627 segments containing only normal beats and PVC. As for preprocessing, we adopt a series of filters to remove high-frequency noise and baseline drift. Besides, we normalize each lead independently so that the voltage ranges are the same across all leads. 4.2 EXPERIMENT SETTING All our code is based on the open-source machine learning library PyTorch. Architecture details of our segmenter and classifier are shown in Figure 1. As for the classifier, inside each block are two serialized Conv + BN + ReLU combinations. The dimensions of the weights in the two-layer perceptron are 128 × 128 and 128 × 2 respectively. We choose the Adam algorithm (Kingma & Ba, 2014) as our optimizer with the initial learning rate set to 0.00001. The number of training epochs is set to 120, as we observe that the lowest validation loss and highest accuracy are achieved by then. The loss function for classification is the negative log likelihood loss, and we add class weights to account for the imbalanced distribution of the dataset. We apply five-fold cross-validation with the different classes evenly distributed between folds, and the average performance is reported. We implement our method with L2 norm pooling as well as with a combination of ReLU and max pooling at the output of the segmenter, as discussed in Section 3.3. All the hyperparameters remain the same for both variants. 5 RESULTS 5.1 COMPARISON OF PVC DIFFERENTIATION PERFORMANCE The metrics we select for comparison are overall accuracy, specificity, sensitivity, and the AUC (area under curve) of the ROC (Receiver Operating Characteristic) curve. We evaluate the performance of all the methods on two tasks: differentiating PVC originating in the LV and RV (the (RV, LV) task), and differentiating PVC originating in the LVOT and RVOT (the (RVOT, LVOT) task). For the (RV, LV) task, specificity and sensitivity are calculated with regard to RV; for the (RVOT, LVOT) task, with regard to RVOT; and for the (PVC, APB) task, with regard to PVC. For all tasks, we calculate the AUC of the ROC curve for each class and report the average value. Table 1 lists the results for the three baseline methods and our method with the two different pooling schemes. A few meaningful observations can be made from the table. Firstly, in general our methods score higher than the baseline methods on almost all metrics, including accuracy, specificity, sensitivity, and AUC. This shows that our attention mechanism indeed improves the classifier's capability of PVC origin classification. Secondly, L2 norm pooling performs better than the combination of ReLU and max pooling, which suggests a limitation of ReLU: it may lose the information carried by negative values, whereas L2 norm pooling preserves it. Finally, the cascaded segmenter-and-classifier method, which has the same number of layers as our methods, performs poorly. This confirms that the better classification performance of our methods actually comes from the attention mechanism rather than from a deeper architecture. Indeed, from Figure 2 we can see a large gap between the training loss and the validation loss for the cascaded segmenter-and-classifier method after several epochs, suggesting apparent overfitting. Moreover, the benefit of adding a segmenter on the MIT-BIH dataset appears modest, but this is because distinguishing between PVC and APB is much less challenging than differentiating the various origins of PVC.
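For reference, the metrics reported above can be computed as in the following sketch, assuming `y_true` holds binary ground-truth labels and `y_score` the predicted probabilities for the positive class (e.g., RVOT); the 0.5 decision threshold is an assumption.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(y_true, y_score, threshold=0.5):
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    y_pred = (y_score >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)  # true positive rate w.r.t. the positive class
    specificity = tn / (tn + fp)  # true negative rate
    # Average the one-vs-rest AUCs over the two classes; for a binary task
    # both per-class AUCs coincide with the usual binary AUC.
    auc = np.mean([roc_auc_score(y_true == 1, y_score),
                   roc_auc_score(y_true == 0, 1 - y_score)])
    return accuracy, specificity, sensitivity, auc
```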
Also, the baseline already achieves good performance. 5.2 EVALUATION OF SEGMENTATION AND INTERPRETABILITY Three visual examples of the segmentation results are shown in Figure 3. From the figure we can see that after applying the attention map to the original ECG signal, the locations of the abnormal beats can be easily identified. We design an independent, blind grade study conducted by an experienced cardiologist to qualitatively evaluate our segmenter's ability to detect abnormal beats for the (RVOT, LVOT) task. In general, qualitative evaluation is widely used in attention-mechanism-related works due to its simplicity and visual nature (Hu, 2019). It is also the most suitable way to judge the interpretability of the results. Focusing on the after-attention signal $X$, we categorize the segmentation results into three classes by the contrast between abnormal beats and normal beats, as shown in Figure 4. Class I: all normal beats are eliminated and all abnormal beats are kept; the best interpretability is attained. Class II: some normal beats remain, and all abnormal beats are kept; in this case, interpretability is reduced but all abnormal beats can still be identified in $X$. Class III: there is no significant difference between abnormal beats and normal beats; there is little interpretability. We randomly select the after-attention signals of 100 ECG segments, and the blind grade study shows that the ratio of cases in the three classes is 50:27:23 (Class I : Class II : Class III), which implies the strong performance of our segmenter. In practice, these segmentation results give cardiologists a quick understanding of why a prediction is made. 5.3 INFLUENCE OF KERNEL SIZE Figure 5 compares the classification performance and segmentation results of our framework with different kernel sizes in the L2 norm pooling layer at the output of the segmenter for the (RVOT, LVOT) task. We can see an apparent degradation of accuracy, specificity, and AUC when the kernel size is increased to 300, while the performance difference between kernel sizes 100 and 200 is minor. Regarding the segmentation results, additional grade studies are conducted for kernel sizes 100 and 300. The ratio of cases in the three classes at kernel size 200 is close to that at kernel size 300, showing a comparable ability to detect abnormal beats. There is a much higher number of Class III cases for kernel size 100, suggesting that kernels that are too small may suffer from poor segmentation and low interpretability, in accordance with the analysis in Section 3.3. After weighing interpretability against performance, we choose a kernel size of 200. 6 CONCLUSION AND DISCUSSION In this paper, we propose a novel framework combining unsupervised abnormal beat segmentation and supervised arrhythmia differentiation. The key to the multitask learning is applying an attention map generated by a segmenter directly to the input data before the classification task. In addition, we apply a large-kernel pooling layer to constrain the shape of the attention map for better performance and easier interpretability. We use premature ventricular contraction differentiation, one of the most challenging problems in arrhythmia classification, as a case study to evaluate the effectiveness of our method. On one hand, the experimental results demonstrate better accuracy with the help of the attention map. On the other hand, we observe obvious discrimination between abnormal beats and normal beats in the after-attention signal.
This indicates enhanced interpretability in clinical practice. In the future, we expect to extend our method to high-dimensional data such as images and videos. In our opinion, the difficulties in applying our framework to 2-D/3-D data are the more complicated background information and the need for fine-grained constraints on the attention map shape. In arrhythmia classification, the "background" in an ECG signal is just normal beats. For image classification, the "background" can be far more complex, such as birds flying among flowers or cars driving through streets. Learning the difference between target objects and their environment may be harder for the segmenter without labels. On the other hand, the target objects can vary more in size, shape, and texture, even within the same class, which requires a more elaborate design of constraints on the attention map shape.
1. What is the focus of the paper, and how does it contribute to the field of Premature Ventricular Contraction (PVC) differentiation and segmentation? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its application in the clinical environment? 3. Do you have any concerns regarding the scope and novelty of the paper's content? 4. How does the reviewer assess the relevance and sufficiency of the experimental results, including both quantitative and qualitative evaluations? 5. Are there any specific aspects or techniques in the paper that require further clarification or detail, such as data preprocessing or attention maps?
Review
Review This paper proposes a deep neural network for Premature Ventricular Contraction (PVC) differentiation and segmentation from electrocardiogram (ECG) signals. The network is jointly trained as a segmenter and a classifier in a multitask learning manner. Differentiation is achieved by the classifier, and segmentation is achieved by pooling for window-style attention over the segmenter's output. Quantitative experiments show better performance than baselines on differentiation tasks. Qualitative experiments show the effectiveness of the segmentation task. The results look interesting, and the work might have a broader impact on the practical usage of AI models in the clinical environment. However, my concerns are: - The topic seems too narrow for the computer science community; it reads more like a paper for the biomedical engineering or computing-in-cardiology community. The proposed method also lacks in-depth technical/theoretical analysis; thus the paper's novelty is limited. - The related works cover multitask learning and attention mechanisms, but (image) segmentation works are also worth investigating (perhaps even more so). A simple modification of image segmentation neural networks (such as Conv2D -> Conv1D) can make them suitable for ECG segmentation tasks. - For the evaluation of segmentation, only a handful of qualitative examples is not convincing. At the very least, a comprehensive user study by a community of cardiologists is needed. Some questions: Could you provide more details about the data preprocessing? Which filters do you use? What are the cut-off frequencies of the high-pass and low-pass filters? In Figure 3, are there duplicate attention maps in every column?
ICLR
Title MetaPix: Few-Shot Video Retargeting Abstract We address the task of retargeting human actions from one video to another. We consider the challenging setting where only a few frames of the target are available. The core of our approach is a conditional generative model that can transcode input skeletal poses (automatically extracted with an off-the-shelf pose estimator) to output target frames. However, it is challenging to build a universal transcoder because humans can appear wildly different due to clothing and background scene geometry. Instead, we learn to adapt – or personalize – a universal generator to the particular human and background in the target. To do so, we make use of meta-learning to discover effective strategies for on-the-fly personalization. One significant benefit of meta-learning is that the personalized transcoder naturally enforces temporal coherence across its generated frames; all frames contain consistent clothing and background geometry of the target. We experiment on in-the-wild internet videos and images and show that our approach improves over widely-used baselines for the task. 1 INTRODUCTION One of the hallmarks of human intelligence is the ability to imagine. For example, given an image of a never-before-seen person, one can easily imagine them performing different actions. To do so, we make use of years of experience watching humans act and interact with the world. We implicitly encode the rules of physical transformations of humans, objects, clothing, and so on. Crucially, we effortlessly adapt or retarget those universal rules to a specific human and environment - a child on a playground will likely move differently than an adult walking into work. Our goal in this work is to develop models that similarly learn to generate human motions by specializing universal knowledge to a particular target human and target environment, given only a few samples of the target. It is attractive to tackle such video generation tasks using the framework of generative (adversarial) neural networks (GANs). Past work has cast the core computational problem as one of conditional image generation, where input source poses (automatically extracted with an off-the-shelf pose estimator) are transcoded into image frames (Balakrishnan et al., 2018; Siarohin et al., 2018; Ma et al., 2017). However, it is notoriously challenging to build generative models that are capable of synthesizing diverse, in-the-wild imagery.
Notable exceptions make use of massively-large networks trained on large-scale compute infrastructure (Brock et al., 2019). However, modestly-sized generative networks perform quite well at synthesis in targeted domains (such as faces (Bansal et al., 2018) or facades (Isola et al., 2017)). A particularly successful approach to generating from pose-to-image is the training of specialized – or personalized – models for particular scenes. These often require large-scale target datasets, such as 20 minutes of footage in a target lab setting (Chan et al., 2018). The above approaches make use of personalization as an implicit but crucial ingredient, by on-the-fly training of a generative model tuned to the particular target domain of interest. Often, personalization is operationalized by fine-tuning a generic model on the specific target frames of interest. Our key insight is recasting personalization as an explicit component of a video-retargeting engine, allowing us to make use of meta-learning to learn how best to fine-tune (or personalize) a generic model to a particular target domain. We demonstrate that (meta)learning-to-fine-tune is particularly effective in the few-shot regime, where few target frames are available. From a technical perspective, one of our contributions is extending meta-learning to GANs, which is nontrivial because both a generator and a discriminator need to be adversarially fine-tuned. To that end, we propose MetaPix, a novel approach to personalization for video retargeting. Our formulation treats personalization as a few-shot learning problem, where the task is to adapt a generic generative model of human actions to a specific person given a few samples of their appearance. Our formulation is agnostic to the actual generative model used, and is compatible with both pose-conditioned transfer (Balakrishnan et al., 2018) and generative (Chan et al., 2018) approaches. Taking inspiration from the recent successes of meta-learning approaches for few-shot tasks (Nichol et al., 2018; Finn et al., 2017), we propose a novel formulation by adapting the popular first-order meta-learning algorithm Reptile (Nichol et al., 2018) to jointly learn initial weights for both the generator and the discriminator. Hence, our model is optimized for efficient adaptation (personalization) given only a few samples and a limited computational budget, and obtains stronger performance compared to a model not optimized in this way. Interestingly, we find this personalized model naturally enforces strong temporal coherence in the generated frames, even though it is not explicitly optimized for that task. 2 RELATED WORK Deep generative modeling. There has been a growing interest in using deep networks for generative modeling of visual data, particularly images. Popular techniques include Variational Auto-Encoders (VAEs) (Kingma & Welling, 2014) and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014). In particular, GAN-based techniques have shown strong performance on various tasks such as conditional image generation (Brock et al., 2019), image-to-image translation (Isola et al., 2017; Wang et al., 2018b; Zhu et al., 2017; Balakrishnan et al., 2018), unsupervised translation (Zhu et al., 2017), and domain adaptation (Hoffman et al., 2018). More recently, these techniques have been extended to video tasks, such as generation (Vondrick et al., 2016), future prediction (Finn et al., 2016), and translation (Bansal et al., 2018; Wang et al., 2018a).
Our work explores generative modeling from a few samples, with our main focus being the task of video translation. There has been some prior work in this direction (Zakharov et al., 2019), though it is largely limited to faces and portrait images. Motion transfer and video retargeting. This refers to the task of driving a video of a person or a cartoon character with another video (Gleicher, 1998). While there exist some unsupervised techniques (Bansal et al., 2018) to do so, most successful approaches for articulated bodies use pose as an intermediate supervision. Recently, two broad categories of approaches have been employed for this task: 1) learning to transform one image into another, given pose as input, either in 2D (Zhou et al., 2019; Balakrishnan et al., 2018; Siarohin et al., 2018; Ma et al., 2017) or 3D (Liu et al., 2018; Neverova et al., 2018; Walker et al., 2017); and 2) learning a model to directly generate images given a pose as input (or Pose2Im) (Chan et al., 2018). The former approaches tend to be more sophisticated, separately generating foreground and background pixels, and tend to perform slightly better than the latter. However, they typically learn a generic model across datasets that can transfer from a single frame, whereas the latter can learn a more holistic reconstruction by learning a specific model for a video. Our approach is complementary to such transfer approaches and can be applied on top of either, as we discuss in Section 3. Few-shot learning. Low-shot learning paradigms attempt to learn a model using a very small amount of training data (Thrun, 1996), typically for visual recognition tasks. Classical approaches build generative models that share priors across the various categories (Fei-Fei et al., 2006; Salakhutdinov et al., 2012). Another category of approaches attempts to learn feature representations invariant to intra-class variations by using hallucinated data (Hariharan & Girshick, 2017; Wang et al., 2018c) or specialized training procedures/loss functions (Wang & Hebert, 2016a; Bart & Ullman, 2005). More recently, it has been framed as a 'learning-to-learn' or meta-learning problem. The key idea is to directly optimize the model for the eventual few-shot adaptation task, where the model is finetuned using a few examples (Finn et al., 2017). Alternatively, it has also been explored in the form of directly predicting classifier weights (Bertinetto et al., 2016; Wang & Hebert, 2016b; Wang et al., 2017; Misra et al., 2017). Meta-learning. The goal of meta-learning is to learn models that are good at learning, similar to how humans are able to quickly and efficiently learn a new task. Many different approaches have been explored to that end. One direction involves learning weights through recurrent networks like LSTMs (Hochreiter et al., 2001; Santoro et al., 2016; Duan et al., 2016). More commonly, meta-learning has been used as a way to learn an initialization for a network that is finetuned at test time on a new task. A popular approach in this direction is MAML (Finn et al., 2017), where the parameters are directly optimized for the test-time performance of the task the model needs to adapt to. This is performed by backpropagating through the finetuning process by computing second-order gradients. They and others (Andrychowicz et al., 2016) have also proposed first-order methods like FOMAML that forego the need to compute second-order gradients, making them more efficient at an empirically small drop in performance.
However, most of these works still require SGD to be used as the task optimizer. A recently proposed meta-learning algorithm, Reptile (Nichol et al., 2018), forgoes that constraint, offering a much simpler first-order meta-learning algorithm that is compatible with any black-box optimizer. 3 OUR APPROACH We now describe MetaPix in detail. To reiterate, our goal is to learn a generic model of human motion, parameterized by $\theta$, that can quickly and efficiently be personalized for a specific person. We define the speed and efficiency requirements in terms of two parameters: computation/iterations ($T$) and the number of samples required for personalization ($K$). We now describe the base architecture, the MetaPix training setup, and the implementation details. Base retargeting architecture. We build upon popular video retargeting architectures. Notably, there are two common approaches in the literature: 1) learning a transformation from one image to another, conditioned on the pose (Zhou et al., 2019; Balakrishnan et al., 2018), and 2) learning a mapping from pose to RGB (Pose2Im), like (Chan et al., 2018). Both obtain strong performance and are amenable to the speed and efficiency constraints we are interested in. For example, in the $K$-shot setting (i.e., learning a model using $K$ frames), one can train the Pose2Im mapping directly on the $K$ frames, or use the $\binom{K}{2}$ pairs formed from the $K$ frames to learn a transformation function from one of the $K$ images to another. They are also both compatible with our MetaPix optimization, discussed next. Algorithm 1 Meta-learning for video re-targeting for the Pose2Im setup. Initialize $\theta_D, \theta_G$ from pretrained weights. for iteration = 1, 2, ... do: sample $K$ pose-image pairs from the same shot randomly; compute $\tilde{\theta}_D, \tilde{\theta}_G = \text{Pix2PixHD}^K_T(\theta_D, \theta_G)$, i.e., finetune on the $K$ images for $T$ iterations; update $\theta_D = \theta_D + \epsilon(\tilde{\theta}_D - \theta_D)$; update $\theta_G = \theta_G + \epsilon(\tilde{\theta}_G - \theta_G)$; end for. Pose2Im (Chan et al., 2018) approaches essentially build upon image-to-image translation methods (Isola et al., 2017; Wang et al., 2018b), where the input is a rendering of the body joints and the output is an RGB image. The model consists of an encoder-decoder style generator $G$. It is trained using a combination of perceptual reconstruction losses (Johnson et al., 2016), implemented using an L1 penalty over VGG (Simonyan & Zisserman, 2015) features, and discriminator losses, where a separate discriminator network $D$ is trained to differentiate the generated images from real images. The reconstruction loss forces the output to be close to the ground truth, potentially leading to blurry outputs. Adding the discriminator helps fix that, as it forces the output onto the manifold of real images. Given its strong performance, we use Pix2PixHD (Wang et al., 2018b) as our base architecture for Pose2Im. For brevity, we skip a complete description of the model architecture and refer the reader to (Wang et al., 2018b) for more details. Pose Transfer (Balakrishnan et al., 2018; Zhou et al., 2019), on the other hand, takes a source image of a person and a target pose, and generates an image of the source person in that target pose. These approaches typically segment the limbs, transform their positions as in the target pose, and generate the target image by combining the transformed limbs and segmented background using a generative network like a U-Net (Ronneberger et al., 2015).
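A minimal sketch of the outer loop of Algorithm 1 is shown below. `finetune` stands in for running Pix2PixHD training on the sampled frames for T iterations, and `sample_frames` for drawing K pose-image pairs from a random shot; both are assumed available, and the linear decay of the meta learning rate follows the schedule described in Section 3.

```python
import copy
import torch

def metapix_outer_loop(generator, discriminator, sample_frames, finetune,
                       meta_iters=300, K=5, T=200, epsilon=1.0):
    for it in range(meta_iters):
        frames = sample_frames(K)                # K pose-image pairs, one shot
        g_tilde = copy.deepcopy(generator)       # personalized copies
        d_tilde = copy.deepcopy(discriminator)
        finetune(g_tilde, d_tilde, frames, T)    # inner-loop adaptation
        eps = epsilon * (1.0 - it / meta_iters)  # meta lr with linear decay
        with torch.no_grad():
            # Reptile step: move the meta-weights toward the finetuned ones.
            for p, q in zip(generator.parameters(), g_tilde.parameters()):
                p.add_(q - p, alpha=eps)
            for p, q in zip(discriminator.parameters(), d_tilde.parameters()):
                p.add_(q - p, alpha=eps)
    return generator, discriminator
```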
These approaches can leverage learning to move pixels instead of having to generate color and background imagery from a learned representation. We utilize the Posewarp method (Balakrishnan et al., 2018) as our base Pose Transfer architecture due to its available implementation. MetaPix. MetaPix builds upon the base retargeting architecture by optimizing it for few-shot and fast adaptation for personalization. We achieve this by taking inspiration from the literature on few-shot learning, where meta-learning has shown promising results. We use a recently introduced first-order meta-learning technique, Reptile (Nichol et al., 2018). Compared to the more popular technique, MAML (Finn et al., 2017), it is more efficient, as it does not compute second-order gradients, and it is amenable to working with arbitrary optimizers, as it does not need to backpropagate through the optimization process. Given that GAN architectures are hard to optimize, Reptile suits our purposes through its ability to use Adam (Kingma & Ba, 2015), the default optimizer for Pix2PixHD, as our task optimizer. Figure 2 illustrates the high-level idea of our approach, which we describe in detail next. We start with either a Pose2Im or a Pose Transfer trained base model. We then finetune this model as described in Algorithm 1. Note that Pix2PixHD is based on a GAN, so there are two sets of network weights to be optimized: the generator ($\theta_G$) and the discriminator ($\theta_D$). In each meta-iteration, we sample a task: in our case, a set of $K$ frames from a new video to personalize to. We then finetune the current model parameters on that video over $T$ iterations, and update the model parameters in the direction of the personalized parameters using a meta learning rate $\epsilon$. We optimize both $\theta_D$ and $\theta_G$ jointly at each step. Note that Posewarp employs a more complicated two-stage training procedure, and we meta-learn only the first stage (which has no discriminator) for simplicity. Implementation Details. We implement MetaPix for the Pose2Im base model by building upon a public Pix2PixHD implementation (https://github.com/NVIDIA/pix2pixHD/) in PyTorch, and perform all experiments on a 4 TITAN-X or GTX 1080Ti GPU node. We follow the hyperparameter setup proposed in (Wang et al., 2018b). We represent the pose using a multi-channel heatmap image, and the input and output are 512 × 512 px RGB images. The generator consists of 16 convolutional and deconvolutional layers, and is trained with an equally weighted combination of GAN, Feature Matching, and VGG losses. Initially, we pretrain the model on a large corpus of videos to learn a generic Pose2Im model, as described in Section 4. During this pretraining stage, the model is trained on all of the training frames for 10 epochs using a learning rate of 0.0002 and a batch size of 8 distributed over the 4 GPUs. We experimented with multiple learning rates, including 0.2, 0.02, and 0.002; however, we observed that higher learning rates caused the training to diverge. When finetuning for personalization, given $K$ frames and a computational budget $T$, we train the first $T/2$ iterations using a constant learning rate of 0.0002, and the remaining iterations using a linear decay to 0, following (Wang et al., 2018b). The batch size is fixed to 8, and for $K < 8$, we repeat the frames to get 8 images for the batch. For the meta-learning, we set the meta learning rate $\epsilon = 1$ with a linear decay to 0, and train for 300 meta-iterations. We also experimented with a meta learning rate of $\epsilon = 0.1$; however, it was much slower to converge.
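The personalization learning-rate schedule described above (constant for the first T/2 iterations, then linear decay to 0) can be expressed with a LambdaLR scheduler, as in this sketch; the optimizer setup is illustrative.

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

def make_personalization_scheduler(optimizer, T):
    # Multiplier is 1 for the first T/2 steps, then decays linearly to 0.
    def lr_lambda(step):
        half = T // 2
        return 1.0 if step < half else max(0.0, (T - step) / (T - half))
    return LambdaLR(optimizer, lr_lambda)

# Example usage (generator assumed defined):
# optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)
# scheduler = make_personalization_scheduler(optimizer, T=200)
# ... then call scheduler.step() once per finetuning iteration.
```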
To potentially stabilize meta-training, we experiment with differing numbers of updates to the generator and discriminator during iterations of Algorithm 1, as well as with simplified objective functions. Recall that the GAN loss adds significant complexity due to the presence of a discriminator that must also be adversarially finetuned. In total, our meta-learning takes 1 day of training time on 4 GPUs. For the Pose Transfer base model, we apply MetaPix in a similar fashion on top of Posewarp (https://github.com/balakg/posewarp-cvpr2018), using the author-provided pretrained weights. We will release the MetaPix source code for details. 4 EXPERIMENTS We now experimentally evaluate MetaPix. We start by describing the datasets used and the evaluation metrics. We then describe our base Pose2Im and Pose Transfer setup, followed by training that model using MetaPix. Finally, we analyze and ablate the various design choices in MetaPix. 4.1 DATASETS AND EVALUATION We train and evaluate our approach on in-the-wild internet videos. Due to the lack of a standard benchmark for such retargeting tasks, we use the dataset described in (Zhou et al., 2019) as our test set. This is a set of 8 videos downloaded from YouTube, each 4-12 minutes long. We refer the reader to Figure 1 in (Zhou et al., 2019) for sample frames from this dataset. Additionally, we collect a set of 10 more dance videos from YouTube (distinct from the above 8) as our pre-training and meta-learning corpus. We provide the list of YouTube video IDs for both in the supplementary. Our models are only trained on these videos, and videos from (Zhou et al., 2019) are only used for personalization (using $K$ frames) and evaluation. Figure 3 shows sample frames from these newly collected videos. Evaluation and Metrics: Similar to (Zhou et al., 2019), we split each of the 8 test videos into a training and a test sequence in a 0.85:0.15 ratio, and sample $K$ training and 2000 test frames from the test sequence. We use the same metrics as in (Zhou et al., 2019) for ease of comparison: Mean Squared Error (MSE), Structural Similarity Index (SSIM), and Peak Signal-to-Noise Ratio (PSNR). Each of these is averaged over the 2000 test frames from each of the 8 test videos. As a task, pose retargeting aims to minimize MSE and maximize SSIM and PSNR, for both our baselines and our method. 4.2 EVALUATING METAPIX We start by building our baseline retargeting model, based on Pix2PixHD (Wang et al., 2018b; Chan et al., 2018). To get a sense of the upper-bound performance of our model, we train the model for each test video with no constraints on $T$ or $K$, starting from the model pre-trained on our train set. Specifically, we use all the frames from the first 85% of each video and train for 10 epochs. We report the performance of this model in the first section of Table 1 and show sample generations in the second column of Figure 4. Since this model achieves strong quantitative and qualitative performance, we keep it as our base retargeting architecture for the rest of the experiments. We also employ a baseline retargeting model based on Posewarp for evaluation, but we focus on Pose2Im for further experimentation due to its relative simplicity. Now we evaluate the performance of our model in constrained settings, where we want to learn to personalize given a few samples and within a constrained computational budget. Hence, we take a model pretrained on the train set and a randomly initialized model, and we personalize them by finetuning on each test video.
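A sketch of the per-video evaluation is below, averaging MSE, SSIM, and PSNR over generated/ground-truth frame pairs with scikit-image; the `channel_axis` argument assumes a recent scikit-image version.

```python
import numpy as np
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)

def score_video(generated, ground_truth):
    # generated, ground_truth: equal-length lists of HxWx3 uint8 frames
    pairs = list(zip(generated, ground_truth))
    mse = np.mean([mean_squared_error(t, g) for g, t in pairs])
    psnr = np.mean([peak_signal_noise_ratio(t, g) for g, t in pairs])
    ssim = np.mean([structural_similarity(t, g, channel_axis=-1)
                    for g, t in pairs])
    return mse, ssim, psnr
```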
As Table 1 shows, applying constraints leads to a drop in performance for all methods, as expected from using only 5 frames finetuned over 20 iterations. Finally, we compare these to the MetaPix model: in that case, we start from the pre-trained model and perform meta-learning on top of those parameters to optimize them for the transfer task, as described in Section 3. This leads to a significant improvement over the pretrained model, showing the strength of MetaPix for this task. In Figure 4 (video visualization at https://youtu.be/NlUmsd9aU-4), we visualize the predictions using the unconstrained model, as well as the constrained models trained with MetaPix and without it, i.e., with simple pretraining. It is interesting to note that the meta-learned model is able to adapt to the color of the clothing and the background much better than a pretrained model, given the same frames for personalization. This reinforces that MetaPix is a much better initialization for few-shot personalization than directly finetuning from a generic pretrained model. We further explore this quality of coherence in the next section. 4.3 ABLATIONS We now ablate the key design choices in our MetaPix formulation. One of the strengths of our formulation is the explicit control over the supervision provided and the computation the model is allowed to perform; depending on the use case, those parameters can easily be tweaked. We explore the effect of MetaPix on those parameters next, on the Pose2Im base retargeting architecture. Variation in K: We vary the amount of supervision for personalization, $K$, and evaluate its effect on the metrics in Figure 5. We compare the following models: a) randomly initialized, b) pretrained on the train set, c) trained using MetaPix for each value of $K$ and tested with the same $K$, and d) trained using MetaPix for $K = 5$ and tested at each value of $K$. The last one tests the generalizability of MetaPix to different values of $K$ at train and test time. We find that the MetaPix-trained models consistently perform better than a simply pretrained model on all metrics. Notably, the model trained only for $K = 5$ still obtains strong performance at different $K$ values, showing that the MetaPix-trained model can generalize beyond the specific setup it is optimized for.
Variation of the meta learning rate ε: We also experimented with changing the meta learning rate. At ε = 0.1 (K = 5, T = 200), we obtained SSIM = 0.47, similar to what the pretrained model gets. Using our default ε = 1.0 improves performance to 0.51. Hence, a higher meta learning rate was imperative to see improvements with MetaPix. Only training the generator: We apply Reptile in a GAN setting, where we jointly meta-optimize two networks. We also experimented with freezing one of the networks, specifically the discriminator, to the weights learned during pretraining. For our K = 5, T = 200, ε = 1.0 setup, we obtain performance similar to optimizing both, suggesting that a ‘universal’ discriminator might suffice for meta-learning on GANs. Visualizing the dynamics of personalization: In order to examine the process of personalization, we visualize models obtained during iterations of finetuning, at 10, 20, 40, 80 and 200 iterations, for 5 random test pose-image pairs. We compare both the pretrained and the metalearned model, trained for K = 5, T = 200. Figure 7 shows images generated at these intermediate iterations. Both methods learn clothing details and background colors after 20 iterations. Interestingly, MetaPix produces images that are temporally coherent, even upon initialization, while the pretrained baseline produces images whose background and clothing vary with pose. This more coherent initialization appears to translate to more coherent generated images after personalization. 5 CONCLUSION We have explored the task of quickly and efficiently retargeting human actions from one video to another, given a limited number of samples from the target domain. We formalize this as a few-shot personalization problem, where we first learn a generic generative model on large amounts of data, and then specialize it to a small number of target frames via finetuning. We further propose a novel meta-learning based approach, MetaPix, to learn this generic model in a way that is more amenable to personalization via fine-tuning. To do so, we repurpose a first-order meta-learning algorithm, Reptile, to adversarially meta-optimize both the generator and the discriminator of a generative adversarial network. We experiment with it on in-the-wild YouTube videos, and find that MetaPix outperforms widely-used approaches for pretraining, while generating temporally coherent videos. Acknowledgements: This research is based upon work supported in part by NSF Grant 1618903, the Intel Science and Technology Center for Visual Cloud Systems (ISTC-VCS), and Google.
1. What is the focus of the paper regarding the task it addresses?
2. What are the strengths of the proposed approach, particularly its technical soundness?
3. What are the weaknesses of the paper, especially regarding its novelty and contributions?
4. Do you have any concerns or suggestions regarding the experiments and their results?
5. Are there any questions you have after reading the review that you'd like further clarification on?
Review
This paper proposes a novel and interesting task: learning to retarget human actions from only a few samples. The overall pipeline is built by applying a meta-learning strategy on top of a pre-trained retargeting module. It follows a conditional generator and discriminator structure that leverages a few frames to retarget the action of a source video. The approach is technically sound. The evaluations are compared with two baseline methods, Pix2PixHD and Posewarp. The evaluations are satisfactory and convincing; the results demonstrate some improvements over the baseline models in terms of both the selected metrics and the qualitative visualizations. Though the proposed problem is novel and somewhat interesting, there are also several weaknesses in this work:
- The novelty of the methodology is somewhat limited. It is more about merging several state-of-the-art modules from different tasks to tackle the few-shot retargeting problem. Though effort may be needed to make the pipeline work, the overall contribution is not significant.
- The improvement obtained with the proposed MetaPix module is not significant in the few-shot setting according to Table 1. Additionally, could the authors provide some qualitative results for different values of K? That would be interesting for analysis.
ICLR
Title MetaPix: Few-Shot Video Retargeting Abstract We address the task of retargeting of human actions from one video to another. We consider the challenging setting where only a few frames of the target are available. The core of our approach is a conditional generative model that can transcode input skeletal poses (automatically extracted with an off-the-shelf pose estimator) to output target frames. However, it is challenging to build a universal transcoder because humans can appear wildly different due to clothing and background scene geometry. Instead, we learn to adapt – or personalize – a universal generator to the particular human and background in the target. To do so, we make use of meta-learning to discover effective strategies for on-the-fly personalization. One significant benefit of meta-learning is that the personalized transcoder naturally enforces temporal coherence across its generated frames; all frames contain consistent clothing and background geometry of the target. We experiment on in-the-wild internet videos and images and show our approach improves over widely-used baselines for the task. 1 INTRODUCTION One of the hallmarks of human intelligence is the ability to imagine. For example, given an image of a never-before-seen person, one can easily imagine them performing different actions. To do so, we make use of years of experience watching humans act and interact with the world. We implicitly encode the rules of physical transformations of humans, objects, clothing and so on. Crucially, we effortlessly adapt or retarget those universal rules to a specific human and environment - a child on a playground will likely move differently than an adult walking into work. Our goal in this work is to develop models that similarly learn to generate human motions by specializing universal knowledge to a particular target human and target environment, given only a few samples of the target. It is attractive to tackle such video generation tasks using the framework of generative (adversarial) neural networks (GANs). Past work has cast the core computational problem as one of conditional image generation where input source poses (automatically extracted with an off-the-shelf pose estimator) are transcoded into image frames (Balakrishnan et al., 2018; Siarohin et al., 2018; Ma et al., 2017). However, it is notoriously challenging to build generative models that are capable of synthesizing diverse, in-the-wild imagery.
Notable exceptions make use of massively large networks trained on large-scale compute infrastructure (Brock et al., 2019). However, modestly sized generative networks perform quite well at synthesis in targeted domains (such as faces (Bansal et al., 2018) or facades (Isola et al., 2017)). A particularly successful approach to pose-to-image generation is training specialized – or personalized – models for particular scenes. These often require large-scale target datasets, such as 20 minutes of footage in a target lab setting (Chan et al., 2018). The above approaches make use of personalization as an implicit but crucial ingredient, by on-the-fly training of a generative model tuned to the particular target domain of interest. Often, personalization is operationalized by fine-tuning a generic model on the specific target frames of interest. Our key insight is recasting personalization as an explicit component of a video-retargeting engine, allowing us to make use of meta-learning to learn how best to fine-tune (or personalize) a generic model to a particular target domain. We demonstrate that (meta)learning-to-fine-tune is particularly effective in the few-shot regime, where few target frames are available. From a technical perspective, one of our contributions is extending meta-learning to GANs, which is nontrivial because both a generator and a discriminator need to be adversarially fine-tuned. To that end, we propose MetaPix, a novel approach to personalization for video retargeting. Our formulation treats personalization as a few-shot learning problem, where the task is to adapt a generic generative model of human actions to a specific person given a few samples of their appearance. Our formulation is agnostic to the actual generative model used, and is compatible with both pose-conditioned transfer (Balakrishnan et al., 2018) and generative (Chan et al., 2018) approaches. Taking inspiration from the recent successes of meta-learning approaches for few-shot tasks (Nichol et al., 2018; Finn et al., 2017), we propose a novel formulation by adapting the popular first-order meta-learning algorithm Reptile (Nichol et al., 2018) to jointly learn initial weights for both the generator and the discriminator. Hence, our model is optimized for efficient adaptation (personalization), given only a few samples and a limited computational budget, and obtains stronger performance than a model not optimized in this way. Interestingly, we find this personalized model naturally enforces strong temporal coherence in the generated frames, even though it is not explicitly optimized for that task. 2 RELATED WORK Deep generative modeling. There has been a growing interest in using deep networks for generative modeling of visual data, particularly images. Popular techniques include Variational Auto-Encoders (VAEs) (Kingma & Welling, 2014) and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014). In particular, GAN-based techniques have shown strong performance on various tasks such as conditional image generation (Brock et al., 2019), image-to-image translation (Isola et al., 2017; Wang et al., 2018b; Zhu et al., 2017; Balakrishnan et al., 2018), unsupervised translation (Zhu et al., 2017) and domain adaptation (Hoffman et al., 2018). More recently, these techniques have been extended to video tasks, such as generation (Vondrick et al., 2016), future prediction (Finn et al., 2016) and translation (Bansal et al., 2018; Wang et al., 2018a).
Our work explores generative modeling from a few samples, with our main focus being the task of video translation. There has been some prior work in this direction (Zakharov et al., 2019), though it is largely limited to faces and portrait images. Motion transfer and video retargeting. This refers to the task of driving a video of a person or a cartoon character given another video (Gleicher, 1998). While there exist some unsupervised techniques (Bansal et al., 2018) to do so, the most successful approaches for articulated bodies use pose as intermediate supervision. Recently, two broad categories of approaches have been employed for this task: 1) learning to transform one image into another, given pose as input, either in 2D (Zhou et al., 2019; Balakrishnan et al., 2018; Siarohin et al., 2018; Ma et al., 2017) or 3D (Liu et al., 2018; Neverova et al., 2018; Walker et al., 2017); and 2) learning a model to directly generate images given a pose as input (Pose2Im) (Chan et al., 2018). The former approaches tend to be more sophisticated, separately generating foreground and background pixels, and tend to perform slightly better than the latter. However, they typically learn a generic model across datasets that can transfer from a single frame, whereas the latter can learn a more holistic reconstruction by learning a specific model for a video. Our approach is complementary to such transfer approaches, and can be applied on top of either, as we discuss in Section 3. Few-shot learning. Low-shot learning paradigms attempt to learn a model using a very small amount of training data (Thrun, 1996), typically for visual recognition tasks. Classical approaches build generative models that share priors across the various categories (Fei-Fei et al., 2006; Salakhutdinov et al., 2012). Another category of approaches attempts to learn feature representations invariant to intra-class variations, by using hallucinated data (Hariharan & Girshick, 2017; Wang et al., 2018c) or specialized training procedures/loss functions (Wang & Hebert, 2016a; Bart & Ullman, 2005). More recently, few-shot learning has been framed as a ‘learning-to-learn’ or meta-learning problem. The key idea is to directly optimize the model for the eventual few-shot adaptation task, where the model is finetuned using a few examples (Finn et al., 2017). Alternatively, it has also been explored in the form of directly predicting classifier weights (Bertinetto et al., 2016; Wang & Hebert, 2016b; Wang et al., 2017; Misra et al., 2017). Meta Learning. The goal of meta-learning is to learn models that are good at learning, similar to how humans are able to quickly and efficiently learn a new task. Many different approaches have been explored to that end. One direction involves learning weights through recurrent networks such as LSTMs (Hochreiter et al., 2001; Santoro et al., 2016; Duan et al., 2016). More commonly, meta-learning has been used as a way to learn an initialization for a network that is finetuned at test time on a new task. A popular approach in this direction is MAML (Finn et al., 2017), where the parameters are directly optimized for the test-time performance of the task the model needs to adapt to. This is performed by backpropagating through the finetuning process, which requires computing second-order gradients. They and others (Andrychowicz et al., 2016) have also proposed first-order methods like FOMAML that forego the need to compute second-order gradients, making them more efficient at an empirically small drop in performance.
However, most of these works still require SGD to be used as the task optimizer. A recently proposed meta-learning algorithm, Reptile (Nichol et al., 2018), forgoes that constraint: it is a much simpler first-order meta-learning algorithm that is compatible with any black-box optimizer. 3 OUR APPROACH We now describe MetaPix in detail. To reiterate, our goal is to learn a generic model of human motion, parameterized by θ, that can quickly and efficiently be personalized for a specific person. We define the speed and efficiency requirements in terms of two parameters: computation/iterations (T) and the number of samples required for personalization (K), respectively. We now describe the base architecture, the MetaPix training setup, and the implementation details. Base retargeting architecture. We build upon popular video retargeting architectures. Notably, there are two common approaches in the literature: 1) learning a transformation from one image to another, conditioned on the pose (Zhou et al., 2019; Balakrishnan et al., 2018), and 2) learning a mapping from pose to RGB (Pose2Im), as in (Chan et al., 2018). Both obtain strong performance and are amenable to the speed and efficiency constraints we are interested in. For example, in the K-shot setting (i.e. learning a model using K frames), one can train the Pose2Im mapping directly on the K frames, or use the C(K, 2) pairs formed from the K frames to learn a transformation function from one of the K images to another. They are also both compatible with our MetaPix optimization discussed next. Pose2Im (Chan et al., 2018) approaches essentially build upon image-to-image translation methods (Isola et al., 2017; Wang et al., 2018b), where the input is a rendering of the body joints and the output is an RGB image. The model consists of an encoder-decoder style generator G. It is trained using a combination of perceptual reconstruction losses (Johnson et al., 2016), implemented as an L1 penalty over VGG (Simonyan & Zisserman, 2015) features, and discriminator losses, where a separate discriminator network D is trained to differentiate the generated images from real images. The reconstruction loss forces the output to be close to the ground truth, potentially leading to blurry results; adding the discriminator helps fix that, as it forces the output onto the manifold of real images. Given its strong performance, we use Pix2PixHD (Wang et al., 2018b) as our base architecture for Pose2Im. For brevity, we skip a complete description of the model architecture and refer the reader to (Wang et al., 2018b) for more details. Pose Transfer (Balakrishnan et al., 2018; Zhou et al., 2019), on the other hand, takes a source image of a person and a target pose, and generates an image of the source person in that target pose. These approaches typically segment the limbs, transform their positions as in the target pose, and generate the target image by combining the transformed limbs and the segmented background using a generative network like a U-Net (Ronneberger et al., 2015). Such approaches can leverage learning to move pixels instead of having to generate the colors and background from a learned representation. We utilize the Posewarp method (Balakrishnan et al., 2018) as our base Pose Transfer architecture due to its available implementation.
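For concreteness, here is an illustrative sketch of such a combined generator objective, assuming an LSGAN-style discriminator score and a hypothetical vgg_features helper that returns a list of intermediate VGG activations; the exact Pix2PixHD recipe (e.g. its multi-scale discriminators and feature-matching term) differs in detail.

import torch
import torch.nn.functional as F

def pose2im_generator_loss(fake, real, disc_scores_fake, vgg_features):
    # GAN term: the generator tries to make the discriminator score fakes as real.
    loss_gan = F.mse_loss(disc_scores_fake, torch.ones_like(disc_scores_fake))
    # Perceptual term: L1 penalty over VGG feature maps of fake vs. real frames.
    loss_vgg = sum(F.l1_loss(f, r) for f, r in zip(vgg_features(fake), vgg_features(real)))
    # Pixel-space L1, a simple stand-in for Pix2PixHD's feature-matching term.
    loss_rec = F.l1_loss(fake, real)
    return loss_gan + loss_vgg + loss_rec  # equally weighted combination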
MetaPix. MetaPix builds upon the base retargeting architecture by optimizing it for few-shot and fast adaptation for personalization. We achieve that by taking inspiration from the literature on few-shot learning, where meta-learning has shown promising results. We use a recently introduced first-order meta-learning technique, Reptile (Nichol et al., 2018). Compared to the more popular MAML (Finn et al., 2017), it is more efficient, as it does not compute second-order gradients, and it can work with arbitrary optimizers, as it does not need to backpropagate through the optimization process. Given that GAN architectures are hard to optimize, Reptile suits our purposes through its ability to use Adam (Kingma & Ba, 2015), the default optimizer for Pix2PixHD, as the task optimizer. Figure 2 illustrates the high-level idea of our approach, which we describe in detail next. We start with either a Pose2Im or a Pose Transfer trained base model. We then finetune this model as described in Algorithm 1:

Algorithm 1: Meta-learning for video re-targeting in the Pose2Im setup.
Initialize θ_D, θ_G from pretrained weights
for iteration = 1, 2, ... do
    Sample K pose-image pairs from the same shot randomly
    Compute θ̃_D, θ̃_G = Pix2PixHD_K^T(θ_D, θ_G), i.e. finetune on the K images for T iterations
    Update θ_D ← θ_D + ε(θ̃_D − θ_D)
    Update θ_G ← θ_G + ε(θ̃_G − θ_G)
end for

Note that Pix2PixHD is based on a GAN, so it has two sets of network weights to be optimized, the generator (θ_G) and the discriminator (θ_D). In each meta-iteration, we sample a task: in our case, a set of K frames from a new video to personalize to. We then finetune the current model parameters to that video over T iterations, and update the model parameters in the direction of the personalized parameters using a meta learning rate ε. We optimize both θ_D and θ_G jointly at each step. Note that Posewarp employs a more complicated two-stage training procedure, and we metalearn only the first stage (which has no discriminator) for simplicity. Implementation Details. We implement MetaPix for the Pose2Im base model by building upon a public Pix2PixHD implementation (https://github.com/NVIDIA/pix2pixHD/) in PyTorch, and perform all experiments on a 4 TITAN-X or GTX 1080Ti GPU node. We follow the hyperparameter setup proposed in (Wang et al., 2018b). We represent the pose using a multi-channel heatmap image, and the input and output are 512 × 512 px RGB images. The generator consists of 16 convolutional and deconvolutional layers, and is trained with an equally weighted combination of GAN, Feature Matching, and VGG losses. Initially, we pretrain the model on a large corpus of videos to learn a generic Pose2Im model, as described in Section 4. During this pretraining stage, the model is trained on all of the training frames for 10 epochs using a learning rate of 0.0002 and a batch size of 8 distributed over the 4 GPUs. We experimented with multiple learning rates, including 0.2, 0.02 and 0.002; however, we observed that higher learning rates caused the training to diverge. When finetuning for personalization, given K frames and a computational budget T, we train the first T/2 iterations using a constant learning rate of 0.0002, and the remaining iterations using a linear decay to 0, following (Wang et al., 2018b). The batch size is fixed to 8, and for K < 8 we repeat the frames to get 8 images per batch. For the meta-learning, we set the meta learning rate ε = 1 with a linear decay to 0, and train for 300 meta-iterations. We also experimented with a meta learning rate of ε = 0.1; however, it was much slower to converge.
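Putting Algorithm 1 and these hyperparameters together, the following is a minimal sketch of the meta-training loop, reusing the hypothetical personalize routine sketched earlier in this document; sample_shot is an assumed data helper, and the schedules are simplified relative to the description above.

import torch

def metapix_train(generator, discriminator, sample_shot,
                  meta_iters=300, K=5, T=200, eps0=1.0):
    for it in range(meta_iters):
        poses, frames = sample_shot(K)  # K pose-image pairs from one shot
        # Inner loop: finetune copies of both networks on this task.
        g_tilde, d_tilde = personalize(generator, discriminator,
                                       poses, frames, K=K, T=T)
        # Outer (Reptile) step: move both networks toward the finetuned
        # weights, with a linearly decaying meta learning rate.
        eps = eps0 * (1.0 - it / meta_iters)
        with torch.no_grad():
            for p, p_t in zip(generator.parameters(), g_tilde.parameters()):
                p.add_(eps * (p_t - p))
            for p, p_t in zip(discriminator.parameters(), d_tilde.parameters()):
                p.add_(eps * (p_t - p))
    return generator, discriminator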
1. What is the focus of the paper, and what problem does it aim to solve?
2. What are the strengths and weaknesses of the proposed approach?
3. Are there any concerns or limitations regarding the method's novelty or its reliance on prior works?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any specific questions or areas the reviewer would like further explanation or discussion on?
Review
In this paper, the authors propose to address few-shot video retargeting, where one should adapt a generic generative model of human actions to a specific person given a few samples of their appearance. Overall, the paper is well structured, and I do like the problem setting and motivation of this paper. However, the solution is not particularly novel to me: both the base model (Pix2PixHD) and the few-shot adaptation method (Reptile) come from previous works, and their combination is somewhat incremental.